That doesn't sound right: noatime turns off recording of the last access time, not the modification time.
This noatime thing is an old wives' tale that needs to die.
AFAIK, most "modern" filesystems (XFS, Btrfs, etc.) default to relatime, which maintains atime but without the overhead.
EDIT TO ADD:
Actually, I've just done a bit of searching... relatime has been the kernel mount default since 2.6.30! [1]
[1] https://kernelnewbies.org/Linux_2_6_30 (scroll to 1.11. Filesystems performance improvements)
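If you want to check what your own mounts are actually using, here's a quick sketch (the helper name is mine) that parses /proc/mounts on Linux and reports the effective atime option for each mount:

```python
# Sketch: report the atime behaviour of each mounted filesystem by parsing
# /proc/mounts (Linux-only). If none of the atime options appears in the
# option list, the kernel default applies -- relatime since 2.6.30.

def atime_mode(options: str) -> str:
    """Return the effective atime option from a mount-option string."""
    opts = options.split(",")
    for mode in ("noatime", "relatime", "strictatime"):
        if mode in opts:
            return mode
    return "relatime (default)"

with open("/proc/mounts") as f:
    for line in f:
        device, mountpoint, fstype, options, *_ = line.split()
        print(f"{mountpoint} ({fstype}): {atime_mode(options)}")
```

On most current distros you'll see "relatime" almost everywhere unless you've set noatime in fstab yourself.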
The cost of atime is an extra write every time you read something.
Relatime changes this to one atime update per day (by default), low enough that it usually doesn't matter.
However, that update per day may have significant impact when you are using Copy-on-Write filesystems (btrfs, zfs). Each time the atime field is updated you are creating a new metadata block for that file. Old blocks can be reclaimed by the garbage collector (at an extra cost), but not if they exist in some snapshot.
All of this means that if you use btrfs/zfs and have lots of small files and take snapshots at least once per day, there's a noticeable performance difference between relatime and noatime.
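For reference, the relatime rule itself is simple. Here's a simplified sketch (not the actual kernel code; the function name is mine): on a read, atime gets updated only if it has fallen behind mtime/ctime, or is more than a day stale:

```python
# Simplified sketch of the relatime decision rule: on a file read, update
# atime only when it has fallen behind a modification, or at most once per
# day. Timestamps here are plain epoch seconds for illustration.

DAY = 24 * 60 * 60  # the once-per-day threshold mentioned above

def relatime_should_update(atime: int, mtime: int, ctime: int, now: int) -> bool:
    if atime <= mtime or atime <= ctime:
        return True          # atime is behind a modification -> update
    if now - atime >= DAY:
        return True          # atime is more than a day old -> update
    return False             # otherwise skip the write entirely
```

On a CoW filesystem, every `True` here means a new metadata block, which is exactly where the snapshot cost discussed above comes from.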
I've been using noatime everywhere for several years and I've never noticed any downside. This is definitely my recommended solution.
I would not recommend doing that. It might work for now, but there's a high risk of the disk being seen as "empty" (since it has no partition table) by some tool (or even parts of the motherboard firmware), which could lead to data loss. Having an MBR, either the traditional MBR or the "protective MBR" used by GPT, prevents that, since tools which do not understand that particular partition scheme or filesystem would then treat the disk as containing data of an unknown type, instead of being completely empty; and the cost is just a couple of megabytes of wasted disk space, which is a trivial amount at current disk sizes (and btrfs itself probably "wastes" more than that in space reserved for its data structures).

Nowadays, I always use GPT, both because of its extra resilience (GPT has a backup copy at the end of the disk) and the MBR limits (both on partition size and the number of possible partition types).
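To make the "protective MBR" point concrete: a GPT disk still carries a classic MBR in sector 0 whose first partition entry has type 0xEE, so even GPT-unaware tools see the disk as occupied. A minimal sketch (my own helper, not a full partition-table parser) of what that check looks like:

```python
# Sketch: detect whether a 512-byte first sector looks like a GPT
# "protective MBR". Not a full parser -- it only checks the MBR boot
# signature (bytes 0x55 0xAA at offset 510) and the type byte of the
# first partition entry (offset 446 + 4 = 450), which GPT sets to 0xEE.

def is_protective_mbr(sector0: bytes) -> bool:
    if len(sector0) < 512 or sector0[510:512] != b"\x55\xaa":
        return False  # no valid MBR boot signature at all
    return sector0[450] == 0xEE

# Synthetic example (a real check would read sector 0 of the disk):
sector = bytearray(512)
sector[510:512] = b"\x55\xaa"
sector[450] = 0xEE
print(is_protective_mbr(bytes(sector)))  # -> True
```

A tool that only knows MBR sees one partition of an unknown type covering the whole disk, which is exactly what stops it from treating the disk as empty.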
https://openzfs.github.io/openzfs-docs/Getting%20Started/Nix...
[0] I lost 2 root filesystems to btrfs, probably because it couldn't handle space exhaustion. I'm paranoid now.
- Fill your root partition as root with "dd if=/dev/urandom of=./blabla bs=3M"
- rm blabla && sync (we don't want to be unfair to such a fragile system)
- Reboot and end up with an unbootable /
It's a mess; as a filesystem, I would declare it alpha stage.
Ideally reproduce it with mainline kernel or most recent stable. And post the details on the linux-btrfs list. They are responsive to bug reports.
If you depend on LTS kernels then it's OK to also note in the report that the problem happens with kernel X but not kernel Y. Upstream only backports a subset of fixes and features to stable; maybe the fix was missed and should have gone to stable, or maybe it was too hard to backport.
These are general rules for kernel development, it's not btrfs specific. You'll find XFS and i915 devs asking for testing with more recent kernels too.
But in any case, problems won't get fixed without a report on an appropriate list.
The mistake, however, is that even though it isn't practical to make theoretical guarantees that the filesystem won't end up full and broken, it is very possible to make such a thing happen only in exceedingly unlikely cases. One runaway dd isn't that...
Every time I read about it, someone is losing data.
Thank god Ubuntu makes zfs very easy to use. No reason to even consider touching btrfs.
I use nixos with zfs on /home, /nix and /persist. Everything else is tmpfs, including /etc. Mostly you can configure applications to read config from /persist, but when not, a bind mount from /etc/whatever to /persist/whatever works pretty well.
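For anyone curious what the bind-mount part looks like, here's a sketch of the NixOS config (the "/etc/whatever" path is a placeholder; adapt to whatever app config you need to persist):

```nix
# Sketch: bind-mount a persisted path into the tmpfs root on NixOS.
{
  fileSystems."/etc/whatever" = {
    device = "/persist/etc/whatever";
    fsType = "none";
    options = [ "bind" ];
  };
}
```

The zfs datasets for /home, /nix and /persist are declared the same way via fileSystems, just with fsType = "zfs" instead.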
I will never use a computer any other way again.
Isn't this useless? My understanding is that compression is only done at file write time. When you "btrfs send" a snapshot, the data is streamed over without recompression, so there's no point in setting a higher compression level on the backup disk.