I think you misunderstood the first half of my comment. Many of those things have an asterisk after “synchronous”, but in their default modes they are, in fact, synchronous.
Saying “technically commands are posted and later observed, and that’s how we get security vulns” is an extraordinary claim. The vast, vast majority of program statements do not work that way. Memory operations move through the CPU and access data immediately. Many IO operations ask the kernel to immediately issue a command to a hardware device, then wait for that device’s completion interrupt and response data to land in memory; that’s synchronicity in the domain of electrical engineering, not just software. And sure, there’s periodicity and batching there (waiting for scheduler ticks, interrupt poll frequency, and such), but none of that makes something less than synchronous; it just might slow it down. Unless you were referring only to timing attacks in your claim that security vulnerabilities result from not-really-synchronous actions, I think that’s wrong twice over.
To expand on the examples: mkdir(2)’s durability (which is what we’re talking about when we refer to the “synchronous-ness” of filesystem ops) depends on the filesystem and caching configuration of the system. But on many (I’d hazard most) filesystems and configurations, new directories are persisted either immediately, through the dentry cache to the disk, or during one of the next two calls to fsync(2), the next example I listed. And sure, there’s subtlety there! Your disk can lie, your RAID controller can lie and leave data in a write cache, exactly what is synchronous when you call fsync(2) depends on whether it’s the first or second call made in succession, and so on. But the fact remains that those calls do, in fact, block on the requested changes being made in many/most configurations. That’s far from your initial claim that “nothing is synchronous”.
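As a concrete sketch of that mkdir-then-fsync pattern (Python wrappers around the syscalls here, just for brevity; the temp paths are purely illustrative): durability of a new directory entry comes from fsyncing the *parent* directory, because the parent’s contents are what changed.

```python
import os
import tempfile

parent = tempfile.mkdtemp()          # a scratch parent directory for the example
child = os.path.join(parent, "sub")

os.mkdir(child)                      # returns once the kernel has the new dentry

# To make the new entry durable, fsync the *parent* directory:
fd = os.open(parent, os.O_RDONLY)
os.fsync(fd)                         # blocks until the metadata reaches stable storage
os.close(fd)
```

Whether that fsync actually hits the platter is exactly where the “your disk can lie” asterisks live, but the call itself blocks on the kernel’s side of the contract.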
Then consider the network examples. A connect(2) call isn’t like a socket send or filesystem write that might be cached or queued; it blocks until the TCP three-way handshake with the target host’s network stack completes. There’s an asterisk there as well (connection queues can make the call take longer, and we can have a semantic debate about atomicity vs. synchronicity, or “waiting for the thing to start” vs. “waiting for the thing to finish”, if you like), but the typical behavior of this action meets the bar for synchronicity as most people understand it.
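A minimal loopback illustration of connect(2) blocking on the handshake (Python sockets; the listener exists only so there’s a peer stack to negotiate with):

```python
import socket

# A listener, so connect() has a peer to handshake with.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))           # let the kernel pick a free port
srv.listen(1)
port = srv.getsockname()[1]

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# connect() returns only after the SYN / SYN-ACK / ACK exchange completes.
# (Note the asterisk: the kernel's accept queue finishes the handshake even
# before the server application gets around to calling accept().)
cli.connect(("127.0.0.1", port))
conn, _ = srv.accept()
```

The accept-queue behavior in the comment is precisely the “waiting for the thing to start vs. finish” debate: the client has a fully negotiated connection before the server application ever observes it.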
The same is true further up the stack. A COMMIT RPC on most RDBMSes will indeed synchronously wait for the transaction’s changes to be persisted to the database, or for the transaction to be rolled back. The asterisks there are in the domain of the two generals problem and client-server atomicity, or of databases whose persistence settings don’t actually wait for things to be written to disk, but again: the majority of cases do, in fact, operate synchronously.
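You can see that persistence knob in miniature with an embedded database (SQLite via Python’s sqlite3 here, standing in for a full client-server RDBMS; `PRAGMA synchronous=FULL` is SQLite’s “actually wait for the disk on COMMIT” setting):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

db = sqlite3.connect(path, isolation_level=None)  # autocommit; explicit BEGIN/COMMIT below
db.execute("PRAGMA synchronous=FULL")             # COMMIT waits for the disk, not just the page cache
db.execute("CREATE TABLE t (x INTEGER)")
db.execute("BEGIN")
db.execute("INSERT INTO t VALUES (1)")
db.execute("COMMIT")                              # blocks until the journal write (and its fsync) finishes
db.close()

# A fresh connection observes the committed row: durability plus visibility.
rows = sqlite3.connect(path).execute("SELECT x FROM t").fetchall()
```

Flip that pragma to `OFF` and you get exactly the “persistence settings that don’t actually wait” asterisk from above; the default still behaves synchronously.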
If “nothing is synchronous”, then how does read causality work? Like, my code can be relying on a hundred tiers of ephemeral and deceptive caches and speculatively executing out the ass, but a system call to read data from an IO source must necessarily be synchronous if I can observe that read when it finishes (either by seeing a read pointer advance on a file handle in the kernel, or just by using the data I read).
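That observability is trivial to demonstrate: after a read returns, both the data and the advanced file offset are visible (Python wrappers around the raw read(2)/lseek(2) calls; the scratch file is just for the example):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "f")
with open(path, "wb") as f:
    f.write(b"hello")

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 5)                 # returns only once the bytes are in our buffer
pos = os.lseek(fd, 0, os.SEEK_CUR)    # the kernel's read pointer has observably advanced
os.close(fd)
```

However many caches sit underneath, the syscall boundary here is causal: the data in `data` and the offset in `pos` both exist because the read completed before the call returned.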
So … yeah, no. There’s nuance there, certainly. Deceptive, easy-to-mess-up behavior, sure. But if that’s the complaint, say it: say “people do not understand the page cache and keep causing data loss because they assume write(2) works a certain way”. Say “people make wrong assumptions about the atomicity of synchronous operations”. Don’t say “nothing is synchronous”, because it isn’t true.
See https://rcrowley.org/2010/01/06/things-unix-can-do-atomicall..., https://datatracker.ietf.org/doc/html/rfc793.html#section-3...., and that amazing old … I think it was a JWZ article that compared data loss characteristics of Linux file systems given different power loss/fsync scenarios. Google isn’t helping me to find it, but perhaps someone who has it handy could link it here (unformatted HTML post that largely boiled down to “lots of drives and filesystems have durability issues; XFS and fsync are a good combination to achieve maximum durability”).