And if you yank the cord before the package is fully unpacked? Wouldn't that just be the same problem? Solving that problem involves simply unpacking to a temporary location first, verifying all the files were extracted correctly, and then renaming them into existence. Which actually solves both problems.
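A minimal sketch of the stage-then-rename idea, assuming POSIX rename semantics (`install_file` is a hypothetical helper, not any package manager's actual code):

```python
import os
import tempfile

def install_file(data: bytes, dest: str) -> None:
    """Write data to a temp file in dest's directory, fsync it, then
    atomically rename it over dest. A reader sees either the old file
    or the complete new one, never a partial write."""
    dest_dir = os.path.dirname(os.path.abspath(dest))
    fd, tmp_path = tempfile.mkstemp(dir=dest_dir)  # same filesystem as dest
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # contents durable before the rename
        os.rename(tmp_path, dest)  # atomic on POSIX filesystems
    except BaseException:
        os.unlink(tmp_path)
        raise
```

Note this only makes each *individual* file atomic; whether the whole set of renames is atomic is the point contested below.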
Package management is stuck in a 1990s idea of "efficiency" which is entirely unwarranted. I have more than enough hard drive space to install the distribution several times over. Stop trying to be clever.
Not the same problem: it's a half-written file vs. half of the files still being the older version.
> Which actually solves both problems.
It does not; you would have to guarantee that the multiple rename operations execute as one transaction. Which you can't, unless you have a really fancy filesystem.
> Stop trying to be clever.
It's called being correct and reliable.
Not strictly. You have to guarantee that after a reboot you roll back any partial package operations. This is what a filesystem journal does anyway. So it would be one fsync() per package, not one per every file in the package. The failure mode implies a reboot must occur.
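A rough sketch of the "one fsync per package plus boot-time recovery" idea (the names `install_package`, `recover`, and the `.install-journal` file are all hypothetical, and real journals are more careful than this):

```python
import json
import os

def install_package(files: dict[str, bytes], root: str) -> None:
    """Stage every file without per-file fsync, write one durable
    intent journal, then rename everything into place."""
    journal = os.path.join(root, ".install-journal")
    staged = {}
    for rel, data in files.items():
        tmp = os.path.join(root, rel + ".staged")
        with open(tmp, "wb") as f:
            f.write(data)  # no fsync here
        staged[tmp] = os.path.join(root, rel)
    # The single fsync for the whole package: the intent record.
    with open(journal, "w") as j:
        json.dump(staged, j)
        j.flush()
        os.fsync(j.fileno())
    for tmp, dest in staged.items():  # each rename is atomic,
        os.rename(tmp, dest)          # but the set is not
    os.unlink(journal)  # commit: journal gone means install is done

def recover(root: str) -> None:
    """Boot-time pass: if a journal survived, finish the renames."""
    journal = os.path.join(root, ".install-journal")
    if not os.path.exists(journal):
        return
    with open(journal) as j:
        staged = json.load(j)
    for tmp, dest in staged.items():
        if os.path.exists(tmp):
            os.rename(tmp, dest)  # roll forward the partial install
    os.unlink(journal)
```

If the crash happens before the journal is durable, the `.staged` files are garbage to be swept up; after it, `recover()` rolls the install forward. Either way no reader ever sees a half-written file.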
> It's called being correct and reliable.
There are multiple ways to achieve this. There are different requirements among different systems which is the whole point of this post. And your version of "correct and reliable" depends on /when/ I pull the plug. So you're paying a huge price to shift the problem from one side of the line to the other in what is not clearly a useful or pragmatic way.
It makes no sense to trust that fsync() does what it promises but not that close() does what it promises. close() promises that when close() returns, the data is stored and some other process may open() and find all of it verbatim. And that's all you care about or have any business caring about unless you are the kernel yourself.