Could you provide reference information to support this background assertion? I'm not totally familiar with filesystems under the hood, but at this point doesn't storage hardware maintain an electrical representation relatively independent from the logical one, given things like wear leveling?
- You can reason about block offsets. If your writes are 512B-aligned, you are assured minimal write amplification.
- If your writes are append-only and log-structured, SSD compaction becomes a lot more straightforward.
- No caching guarantees by default. Again, even SSDs cache writes. Block writes are not atomic even with SSDs. The only way to guarantee atomicity is via write-ahead logs.
- The NVMe layer exposes async submission/completion queues, to control the io_depth the device is subjected to, which is essential to get max perf from modern NVMe SSDs. Although you need to use the right interface to leverage it (libaio/io_uring/SPDK).
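To make the alignment point concrete: a write that straddles a block boundary forces the device into a read-modify-write cycle on every block it touches. A toy sketch (the 4 KiB block size and offsets are hypothetical, real devices vary):

```python
BLOCK = 4096  # assume a 4 KiB physical block; check your device's actual size

def blocks_touched(offset: int, length: int) -> int:
    """Number of physical blocks a logical write dirties."""
    first = offset // BLOCK
    last = (offset + length - 1) // BLOCK
    return last - first + 1

# An aligned 4 KiB write touches exactly one block...
print(blocks_touched(8192, 4096))   # 1
# ...while the same write shifted by one byte dirties two,
# doubling the device-level write traffic (write amplification).
print(blocks_touched(8193, 4096))   # 2
```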
Not all devices use 512-byte sectors, and that size is mostly a relic of low-density spinning rust.
> If your writes are append-only, log-structured, that makes SSD compaction a lot more straightforward
Hmm, no. Your volume may be a sparse file on a SAN system; in fact, that is often the case in cloud environments. Cached RAID controllers may also behave differently here. Unless you know exactly what you're targeting, you're shooting blind.
> No caching guarantees by default. Again, even SSDs cache writes. Block writes are not atomic even with SSDs. The only way to guarantee atomicity is via write-ahead logs.
Not even that way. Most server-grade controllers (with battery) will ack an fsync immediately, even if the data is not on disk yet.
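This is why engines don't trust fsync alone: WAL records carry their own checksums, so a torn or half-persisted record is detected and discarded at recovery rather than silently replayed. A minimal sketch of the idea (record framing is hypothetical, not any particular engine's format):

```python
import os
import struct
import zlib

def wal_append(f, payload: bytes) -> None:
    """Append a length-prefixed, checksummed record, then fsync.
    The checksum lets recovery detect a torn (partially written) record."""
    rec = struct.pack("<I", len(payload)) + payload
    rec += struct.pack("<I", zlib.crc32(rec))
    f.write(rec)
    f.flush()
    os.fsync(f.fileno())  # a battery-backed controller may still ack this early

def wal_recover(f) -> list:
    """Replay records until EOF or the first torn/corrupt record."""
    out, data, pos = [], f.read(), 0
    while pos + 4 <= len(data):
        (n,) = struct.unpack_from("<I", data, pos)
        end = pos + 4 + n + 4
        if end > len(data):
            break  # torn tail: discard the partial record
        body = data[pos:pos + 4 + n]
        (crc,) = struct.unpack_from("<I", data, pos + 4 + n)
        if zlib.crc32(body) != crc:
            break  # corrupt record: stop replay here
        out.append(data[pos + 4:pos + 4 + n])
        pos = end
    return out
```

If the controller lied about durability and the tail record never made it to media, recovery simply loses that record instead of corrupting state.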
> The NVMe layer exposes async submission/completion queues, to control the io_depth the device is subjected to, which is essential to get max perf from modern NVMe SSDs.
That's the storage domain, not the application domain. In most cloud systems, you have the choice of direct-attached storage (usually behind a proper controller, so what is exposed is actually the controller's features, not the individual NVMe queues) or SAN storage: a sparse file on a filesystem on a system at the end of a TCP endpoint. One of those provides easy backups, redundancy, high availability, and snapshots; with the other, you roll your own.
To show that's not true would take more than cherry-picking examples where some filesystem assumption may be tenuous; it would require demonstrating how a DBMS can do better.
> Not all devices use 512 byte sectors
4K then :). Files are block-aligned, and the predominant block size changed once in 40 years from 512B to 4K.
> Hum, no. Your volume may be a sparse file on SAN system
Regardless, sequential writes will always provide better write performance than random writes. With POSIX, you control this behavior directly. With a DBMS, you control it by swapping out InnoDB for RocksDB or something.
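The "swap InnoDB for RocksDB" point boils down to the log-structured design: never overwrite in place, always append sequentially, and keep an in-memory index to the latest version. A toy key-value sketch of that idea (compaction omitted; values must not contain newlines in this simplified format):

```python
import os

class AppendOnlyStore:
    """Log-structured KV sketch: every update is a sequential append;
    an in-memory index maps key -> (offset, length) of the latest value."""

    def __init__(self, path: str):
        self.f = open(path, "ab+")
        self.index = {}

    def put(self, key: str, value: bytes) -> None:
        self.f.seek(0, os.SEEK_END)
        off = self.f.tell()
        self.f.write(f"{key}\t".encode() + value + b"\n")
        # Later writes never touch this record; stale versions wait for compaction.
        self.index[key] = (off + len(key) + 1, len(value))

    def get(self, key: str) -> bytes:
        off, n = self.index[key]
        self.f.seek(off)
        return self.f.read(n)
```

Updates to the same key just append a new record and bump the index, so the device only ever sees sequential writes.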
> Thats storage domain, not application domain
It is a storage domain feature accessible to an IOPS-hungry application via a modern Linux interface like io_uring. NVMe-oF would be the networked storage interface that enables that. But this is for when you want to outperform a DBMS by 200X, aligned I/O is sufficient for 10-50X. :)
The NVMe layer is not the same as the POSIX filesystem; there is no reason to throw it in as part of knocking the POSIX filesystem off its privileged position.
Overall you are talking about individual files, but remember that what really distinguishes the filesystem is directories. Other databases, even relational ones, can have binary blob "leaf data" with the properties you speak about.
I think "guaranteed" is too strong a word given the number of filesystems and flags out there, but "largely" you get aligned I/O.
> The NVMe layer is not the same as the POSIX filesystem, there is no reason we need to throw that as part of knocking the POSIX filesystem off it's privileged position.
I'd say that the POSIX filesystem lives in an ecosystem that makes leveraging NVMe-layer characteristics a viable option. More on this with the next point.
> Overall you are talking about individual files, but remember what really distinguishes the filesystem is directories. Other database, even relational ones, can have binary blob "leaf data" with the properties you speak about.
I think regardless of how you use a database, your interface is declarative. You always say "update this row" vs "fseek to offset 1048496 and fwrite 128 bytes and then fsync the page cache". Something needs to do the translation from "update this row" to the latter, and that layer will always be closer to hardware.
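A toy illustration of that translation layer, assuming a fixed-width row format (the layout, sizes, and names here are all hypothetical): the engine's job is exactly the declarative-to-physical mapping from "update this row" to seek/write/fsync.

```python
import os
import struct

ROW_FMT = "<i60s"                    # hypothetical row: 4-byte id + 60-byte name
ROW_SIZE = struct.calcsize(ROW_FMT)  # 64 bytes, so offsets stay block-friendly

def update_row(f, row_id: int, name: bytes) -> None:
    """What 'UPDATE ... WHERE id = row_id' bottoms out as:
    translate the row id to a byte offset, seek, overwrite in place, flush."""
    offset = row_id * ROW_SIZE       # the engine's row -> offset translation
    f.seek(offset)
    f.write(struct.pack(ROW_FMT, row_id, name.ljust(60, b"\0")))
    f.flush()
    os.fsync(f.fileno())
```

A real engine layers pages, a buffer pool, and a WAL on top of this, but some layer always ends up doing this arithmetic, and it can sit closer to the hardware than the application ever does.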
Mature database implementations also bypass a lot of kernel machinery to get closer to the underlying block devices. The layering of DB on top of FS is a failure.
Common usage does this by convention, but that's just sloppy thinking and populist extensional defining. I posit that any rigorous, thought-out, non-overfit intensional definition of a database will, as a matter of course, also include file systems.