That doesn't sound right: noatime turns off recording of the last access time, not the modification time.
This noatime thing is an old wives' tale that needs to die.
AFAIK, most "modern" filesystems (XFS, btrfs, etc.) all default to relatime.
relatime maintains atime, but without the overhead.
EDIT TO ADD:
Actually, I've just done a bit of searching... relatime has been the kernel mount default since 2.6.30! [1]
[1] https://kernelnewbies.org/Linux_2_6_30 (scroll to 1.11. Filesystems performance improvements)
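For what it's worth, you can check what your own mounts are actually using; a quick sketch (the mount point is just an example):

    # show the mount options in effect for /
    findmnt -o TARGET,OPTIONS /
    # on any recent kernel the output includes "relatime" unless overridden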
The cost of atime is an extra write every time you read something.
Relatime changes this to at most one atime update per day (by default), plus an update whenever the file has been modified since it was last read; that's low enough that it usually doesn't matter.
However, even that one update per day may have a significant impact when you are using copy-on-write filesystems (btrfs, ZFS). Each time the atime field is updated you are creating a new metadata block for that file. Old blocks can be reclaimed by the garbage collector (at an extra cost), but not if they exist in some snapshot.
All of this means that if you use btrfs/ZFS, have lots of small files, and take snapshots at least once per day, there's a noticeable performance difference between relatime and noatime.
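If you want to see the relatime behaviour for yourself, here's a rough sketch (assuming the filesystem is mounted with the default relatime option; the file name is made up):

    touch demo
    stat -c 'atime=%x' demo   # baseline: atime == mtime right after touch
    cat demo > /dev/null
    stat -c 'atime=%x' demo   # updated: atime was not newer than mtime
    cat demo > /dev/null
    stat -c 'atime=%x' demo   # unchanged: no further update for ~24h,
                              # unless the file is modified again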
I've been using noatime everywhere for several years and I've never noticed any downside. This is definitely my recommended solution.
I've used noatime by default, except for a few cases where I know atime is used, in professional settings for probably two decades. Hopefully you know what kind of application you are running. There are many parameters in a system and this is just one of them.
The only times I've seen atime used have been for a couple of queues, and only for the question "has this file changed since it was last read?". And that is precisely what relatime is for; the daily update is just an optional extra.
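In shell terms, the check that keeps working under relatime (but silently breaks under noatime) looks roughly like this; the queue path is hypothetical:

    f=/var/spool/queue/item   # hypothetical queue file
    # mtime newer than atime => written since it was last read
    if [ "$(stat -c %Y "$f")" -gt "$(stat -c %X "$f")" ]; then
        echo "changed since last read"
    fi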
I would not recommend doing that. It might work for now, but there's a high risk of the disk being seen as "empty" (since it has no partition table) by some tool (or even by parts of the motherboard firmware), which could lead to data loss. Having an MBR, either the traditional MBR or the "protective MBR" used by GPT, prevents that: tools which do not understand that particular partition scheme or filesystem will then treat the disk as containing data of an unknown type, instead of as completely empty. The cost is just a couple of megabytes of wasted disk space, which is a trivial amount at current disk sizes (and btrfs itself probably "wastes" more than that in space reserved for its own data structures).

Nowadays I always use GPT, both because of its extra resilience (GPT keeps a backup copy of the partition table at the end of the disk) and because of the MBR limits (both on partition size and on the number of possible partition types).
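If anyone wants the lazy version of that advice, here's a sketch with sgdisk (/dev/sdX is a placeholder, and this wipes whatever is on the disk):

    sgdisk --zap-all /dev/sdX                      # destroy any old partition tables
    sgdisk --new=1:0:0 --typecode=1:8300 /dev/sdX  # one partition spanning the disk
    mkfs.btrfs /dev/sdX1

GPT writes the protective MBR in sector 0 as part of this, so even MBR-only tools will see one partition of an unknown type rather than an empty disk.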
https://openzfs.github.io/openzfs-docs/Getting%20Started/Nix...
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/190...
[0] I lost 2 root filesystems to btrfs, probably because it couldn't handle space exhaustion. I'm paranoid now.
-Fill your root partition, as root, with "dd if=/dev/urandom of=./blabla bs=3M"
-rm blabla && sync (we don't want to be unfair to such a fragile system)
-Reboot and end up with an unbootable /
It's a mess; for a filesystem, I would declare it alpha stage.
Ideally, reproduce it with a mainline kernel or the most recent stable one, and post the details on the linux-btrfs list. They are responsive to bug reports.
If you depend on LTS kernels, then it's OK to also include in the report that the problem happens with kernel X but not kernel Y. Upstream only backports a subset of fixes and features to stable. Maybe the fix was missed and should have gone to stable, or maybe it was too hard to backport.
These are general rules for kernel development, it's not btrfs specific. You'll find XFS and i915 devs asking for testing with more recent kernels too.
But in any case, problems won't get fixed without a report on an appropriate list.
Make an openSUSE or Fedora VM (I tested just these two), fill it, and watch it not boot anymore... it is trivial.
The mistake, however, is that even though it isn't practical to make theoretical guarantees that the filesystem won't end up full and broken, it is very possible to make such a thing happen only in exceedingly unlikely cases. One runaway dd isn't that...
It's not about dd; it's one process, run by root, that fills the filesystem with one big file. That's like the first thing I would test to see if it can destroy my filesystem.
It's really the filesystem's responsibility. If it needs to reserve 30%, so be it; if it needs more because I wrote billions of files, so be it (even if it ends up saying "sorry, I told you I had 50GB free, but because you wrote so many small files it's now just 45GB"; after all, free space is only ever an estimate). It's the filesystem's job to tell me roughly how much free space I have, and to stop writing before it really, internally, can't take any more, NOT to kill itself because I allocated 100% of it. There is just no excuse; that's the filesystem's responsibility.
PS: The clever ZFS survives that "unlikely" test easily.
I've heard about this, but my understanding was that when this happens, performance becomes extremely poor. While that may be quite bad, it's still worlds apart from losing data.
There's also the fact that the user may have partitioned the drive in such a way as to prevent it from ever filling up; even root can't fill a partition beyond its size. Here, you have to go out of your way to make sure the filesystem doesn't fill up, or else you have a bad time. Shit happens, so this does look like an FS bug to me, much more than PEBCAK.
But hey, how about a quota behind the scenes? You know, like ZFS? AFS? ReFS? So the filesystem tells the user "sorry, can't take any more" before it really can't take any more? That would be some crazy enterprise-level stuff...
You know, a filesystem that stops writing in time and cares more about the data that's already on the platter?
BTW: it was a DC (data-center) hard disk.
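For the record, that's about two commands on ZFS; a sketch with made-up pool/dataset names:

    # cap a dataset well below the pool size, so a runaway writer hits the
    # quota instead of the pool's last free block
    zfs set quota=400G tank/data
    # and keep a permanently reserved slice of slack in the pool
    zfs create -o refreservation=10G tank/slack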
Every time I read about it, someone is losing data.
Thank god Ubuntu makes ZFS very easy to use. No reason to even consider touching btrfs.
And to other comments:
It was a DC hard disk, and NO, not even root should be capable of destroying the filesystem by simply writing to it; it's not 1970 anymore.
Calculating the metadata blocks to reserve should be rather trivial, since it's ONE big file. And it's not dd that is the problem; it's btrfs that cannot handle a process writing ONE BIG file.
I use NixOS with ZFS on /home, /nix, and /persist. Everything else is tmpfs, including /etc. Mostly you can configure applications to read their config from /persist, but when you can't, a bind mount from /persist/whatever onto /etc/whatever works pretty well.
I will never use a computer any other way again.
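For anyone curious, the bind-mount trick is the moral equivalent of this (paths are just examples):

    # /etc lives on tmpfs; back an app's config dir with the persistent dataset
    mkdir -p /etc/ssh
    mount --bind /persist/etc/ssh /etc/ssh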
Isn't this useless? My understanding was that compression is only done at file write time, and that when you "btrfs send" a snapshot the data is streamed over without recompression, so there'd be no point in setting a higher compression level on the backup disk.
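As far as I know it's not useless: a plain "btrfs send" stream carries the file data in uncompressed form, and "btrfs receive" writes it back through the normal write path, so a compress= mount option on the backup disk should still apply on write. Only the newer --compressed-data mode (needs a recent kernel and btrfs-progs) passes compressed extents through as-is. A sketch, with placeholder paths:

    # snapshots must be read-only to be sent
    btrfs subvolume snapshot -r /mnt/data /mnt/data/snap1
    btrfs send /mnt/data/snap1 | btrfs receive /mnt/backup
    # pass compressed extents through unchanged, skipping recompression:
    btrfs send --compressed-data /mnt/data/snap1 | btrfs receive /mnt/backup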