I asked DJ (not on HN, but he hangs out in our community Slack [3], where you can ask further questions if curious), who knows the disk side of things best, and he responds:
The OS is free to reorder writes (this is true for both io_uring and conventional I/O).
In practice it does this for spinning disks, but not for SSDs.
The OS is aware of the "geometry" of a spinning disk, i.e. what sectors are physically close to each other.
But for NVMe SSDs, reordering is typically handled in the firmware. SSDs internally remap "logical" addresses (i.e. the address from the OS point of view) to "physical" addresses (actual locations on the SSD).
For example, if the application (or OS) writes to block address "1" then "2", the SSD does not necessarily store them in adjacent physical locations. (OSTEP explains this well [0].)
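To make the remapping concrete, here's a deliberately simplified toy model of a flash translation layer (FTL); real firmware is far more involved (garbage collection, wear leveling), but the core idea is the same: overwrites never happen in place, so logically adjacent blocks need not land on adjacent physical pages. The class and names are purely illustrative.

```python
# Toy flash translation layer (illustration only, not real SSD firmware):
# each write claims the next free physical page and updates the
# logical -> physical mapping; old pages become garbage for later collection.

class ToyFTL:
    def __init__(self):
        self.mapping = {}    # logical block address -> physical page
        self.next_free = 0   # next free physical page (append-style)

    def write(self, logical, data):
        # Overwrites do NOT reuse the old physical page.
        physical = self.next_free
        self.next_free += 1
        self.mapping[logical] = physical
        return physical

ftl = ToyFTL()
ftl.write(1, b"a")   # logical 1 lands on physical page 0
ftl.write(2, b"b")   # logical 2 lands on physical page 1
ftl.write(1, b"a2")  # rewriting logical 1 lands on physical page 2, not 0
```

Note how after the third write, logical block 1 lives at physical page 2: the physical layout reflects write order and firmware policy, not logical addresses.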
"Performance Analysis of NVMe SSDs and their Implication on Real World Databases" explains in more detail:
> In the conventional SATA I/O path, an I/O request arriving at the block layer will first be inserted into a request queue (Elevator). The Elevator would then reorder and combine multiple requests into sequential requests. While reordering was needed in HDDs because of their slow random access characteristics, it became redundant in SSDs where random access latencies are almost the same as sequential. Indeed, the most commonly used Elevator scheduler for SSDs is the noop scheduler (Rice 2013), which implements a simple First-In-First-Out (FIFO) policy without any reordering.
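The quoted contrast between the Elevator and a FIFO policy can be sketched in a few lines. This is a simplification for illustration, not kernel code: the elevator here does a single upward sweep by sector position (SCAN-style), while the noop-style path just dispatches in arrival order.

```python
# Illustrative contrast: FIFO ("noop"-style) dispatch vs. an elevator-style
# sweep that reorders pending requests by sector position to cut seek time.
# A simplification of Linux block schedulers, not actual kernel behavior.

def fifo_order(requests):
    # SSD-friendly: dispatch in arrival order, no reordering.
    return list(requests)

def elevator_order(requests, head):
    # HDD-friendly: service sectors at/after the current head position
    # first (one upward sweep), then wrap around to the remaining ones.
    ahead = sorted(r for r in requests if r >= head)
    behind = sorted(r for r in requests if r < head)
    return ahead + behind

pending = [700, 73, 500, 98]
fifo_order(pending)                # [700, 73, 500, 98]
elevator_order(pending, head=100)  # [500, 700, 73, 98]
```

On an HDD the sweep saves physical head movement; on an SSD, where "sectors" are remapped by the FTL anyway, the reordering buys nothing, which is why a FIFO policy suffices.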
Applications can help performance by grouping writes according to time-of-death (per "The Unwritten Contract of Solid State Drives" [2]), but the SSD is free to do whatever. We are shortly going to be reworking the LSM's compaction scheduling to take advantage of this: https://github.com/tigerbeetledb/tigerbeetle/issues/269.
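As a rough sketch of what "grouping writes by time-of-death" means in practice: if the application can estimate when each block will become garbage, it can bucket writes so that data expected to die together is written together, letting whole erase blocks be reclaimed at once. The function and parameters below are hypothetical, not TigerBeetle's actual compaction scheduler.

```python
# Hypothetical sketch of grouping writes by expected "time of death"
# (per "The Unwritten Contract of Solid State Drives" [2]): data that will
# become garbage at the same time is written together, so entire erase
# blocks tend to die at once instead of fragmenting.

from collections import defaultdict

def group_by_death_time(writes, bucket_seconds=60):
    """writes: iterable of (block_id, expected_death_timestamp) pairs."""
    groups = defaultdict(list)
    for block_id, death_ts in writes:
        groups[death_ts // bucket_seconds].append(block_id)
    return dict(groups)

# Blocks 1 and 3 die early; blocks 2 and 4 die much later.
writes = [(1, 30), (2, 200), (3, 45), (4, 210)]
group_by_death_time(writes)  # {0: [1, 3], 3: [2, 4]}
```

In an LSM, compaction gives a natural lifetime estimate: data in the same level tends to be rewritten (and thus die) around the same time, which is what makes compaction scheduling a good fit for this optimization.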
[0] https://pages.cs.wisc.edu/~remzi/OSTEP/file-ssd.pdf
[1] https://www.cs.binghamton.edu/~tameesh/pubs/systor2015.pdf
[2] https://pages.cs.wisc.edu/~jhe/eurosys17-he.pdf
[3] https://join.slack.com/t/tigerbeetle/shared_invite/zt-1gf3qn...