But ZFS doesn't demand that users be aware of holes in files. You can just `seek()` and `read()` anywhere, and ZFS will transparently provide zeros to fill the holes. Linux also allows software to become "hole-aware" using `lseek()` with `SEEK_HOLE` and `SEEK_DATA`, but that's an optimisation software can opt into, or equally just ignore.
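To make the two modes concrete, here's a small sketch using Python's `os.lseek()`, which exposes the same `SEEK_HOLE`/`SEEK_DATA` interface. A naive reader just gets zeros back from the hole; a hole-aware reader can ask where data and holes begin. (The exact hole offsets depend on the filesystem; some don't create real holes at all, in which case `SEEK_HOLE` simply reports the end of the file.)

```python
import os
import tempfile

# Create a sparse file: 4 KiB of data, a ~1 MiB gap, then 4 KiB more data.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"A" * 4096)
    os.lseek(fd, 4096 + 1024 * 1024, os.SEEK_SET)  # seek past the gap
    os.write(fd, b"B" * 4096)

    # Naive reader: just read() through the gap; the kernel supplies zeros.
    os.lseek(fd, 4096, os.SEEK_SET)
    in_gap = os.read(fd, 4096)
    assert in_gap == b"\x00" * 4096

    # Hole-aware reader: ask where the first data and first hole begin.
    data_start = os.lseek(fd, 0, os.SEEK_DATA)
    hole_start = os.lseek(fd, 0, os.SEEK_HOLE)
finally:
    os.close(fd)
    os.remove(path)
```

A hole-aware copier (like modern `cp`) walks these offsets and skips the holes entirely instead of copying megabytes of zeros.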
The glitch in this case was a failure to correctly track dirty pages that had yet to be written to disk, so a query could be answered from the on-disk data rather than the in-memory data in those dirty pages. It just so happens that this issue only appeared in the code responsible for answering hole queries from software explicitly asking about holes. ZFS itself never had any trouble keeping track of the holes; the bookkeeping always converged on the correct state. It's just that during that convergence it was momentarily possible to be given stale hole metadata (i.e. what was currently on disk) rather than the current hole metadata (i.e. what existed only in memory, about to be written to disk).
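The access pattern that exposed the window can be sketched like this: write data, then immediately ask where the data is, before the dirty pages have been flushed. On a correct filesystem the answer must reflect the in-memory write; the bug made it momentarily possible to get the old on-disk answer, so a hole-aware copier would treat freshly written data as a hole and emit zeros. This is a hedged illustration of the pattern, not ZFS's internal code:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    # Freshly written data, still sitting in dirty pages in memory.
    os.write(fd, b"fresh data, not yet flushed to disk")

    # A hole-aware reader queries immediately afterwards.
    off = os.lseek(fd, 0, os.SEEK_DATA)

    # Correct answer: 0, because data starts at the beginning of the file.
    # The affected ZFS versions could momentarily answer from stale on-disk
    # metadata instead, reporting a hole where this data lives.
    assert off == 0
finally:
    os.close(fd)
    os.remove(path)
```

The key point is that the on-disk state and the in-memory state were both internally consistent; the error was answering the query from the wrong one.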