If I’m not wrong, at least some of those sharp edges have been resolved. There was a famous, very-hard-to-reproduce bug that caused problems with ZFS send/receive of encrypted snapshots once in a blue moon; it was hunted down and fixed recently.
Still, ZFS needs better tooling. The user has two keys and an encrypted dataset, doesn’t care which dataset is the encryption root, and should be able to decrypt. ZFS should send all the information required to decrypt.
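At least the current state is inspectable; a rough sketch of checking where keys are expected to come from (pool and dataset names are made up):

```shell
# Show which dataset each encrypted dataset inherits its key from,
# and whether that key is currently loaded
zfs get -r encryptionroot,keystatus,keylocation tank

# Load the key for the encryption root (prompts for a passphrase
# when keylocation=prompt), then mount everything
zfs load-key -r tank/encrypted
zfs mount -a
```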
The code for ZFS encryption hasn’t been updated since the original developer left, last I checked.
In my view, in this case you could say ZFS nearly lost data: it ties dataset settings together within a pool, but doesn’t send the settings necessary to reproduce them when one of those datasets is replicated. The user is clearly knowledgeable about ZFS and still almost lost data.
This is a case of the user changing a password setting and realizing he can’t use it with old backups after accidentally destroying one dataset. ZFS is intended for servers and sysadmins, so it is not as friendly as some may expect, but it did not lose anything the user did not destroy. The author had to use logic to deduce what he did and walk it back.
That's unfair to the author. The backups were new, made after the password change, and neither the old nor the new password worked on them. The only thing that was old was an otherwise empty container dataset.
I'll also confirm that people snapshot their data, which usually lives in child datasets. If you don’t care about an empty parent dataset, nobody expects you to snapshot and replicate it on a careful schedule.
When I had to replace HDDs, the operations were very smooth. I don't mess with ZFS all that often; I rely on the documentation. I must say that IMO the CLI is a breath of fresh air compared to the other options we had in the past (ext3/4, ReiserFS, XFS, etc.). Now, BTRFS might be easier to work with, I can't tell.
BTW, this bug is well known amongst OpenZFS users. There are quite a few posts about it.
One that should not exist, of course, but certainly not a normal one.
This is a very old lesson that should have been learned by now :)
But yeah the rest of the points are interesting.
FWIW I rarely use ZFS native encryption. I practically always run it on top of cryptsetup (a frontend for LUKS) on Linux, and GELI on FreeBSD. It's a habit from the time when ZFS didn't support encryption, and these days I just keep doing what I know.
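A minimal sketch of that layering on Linux (device, mapping, and pool names are placeholders):

```shell
# Encrypt the raw device with LUKS, then build the pool on the mapping;
# ZFS only ever sees the decrypted device-mapper node
cryptsetup luksFormat /dev/sdb
cryptsetup open /dev/sdb cryptdisk      # prompts for the passphrase
zpool create tank /dev/mapper/cryptdisk

# Tear down in reverse order when done
zpool export tank
cryptsetup close cryptdisk
```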
I've used this in practice for many years (since 2020), and aside from encountering exactly this issue (though thankfully I already had a bookmark in place), it's worked great. I've tested restores from these snapshots fairly regularly (roughly quarterly), and only once had an issue, related to a migration: I moved the source from one disk to another. This can have some negative effects on encryption roots, which I was able to solve... But I really, really wish ZFS tooling had better answers here, such as being able to explicitly create and break these associations.
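There is some tooling for this, though it's clunky: `zfs change-key` can make a child its own encryption root, or re-attach it to a parent. A rough sketch (dataset names assumed):

```shell
# Break the association: give the child its own key, so it becomes
# its own encryption root instead of inheriting from the parent
zfs change-key -o keyformat=passphrase tank/parent/child

# Re-create the association: inherit the key from the parent again
# (the parent's key must already be loaded)
zfs change-key -i tank/parent/child
```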
For backup purposes I also greatly prefer file by file encryption because one corruption will only break one file and not the whole backup.
What I do now is encrypt with encfs and store on a S3 glacier-style service.
Source: I work for a backup company that uses ZFS a lot.
If you enable compression on ZFS that runs on top of dmcrypt volume, it will naturally happen before encryption (since dmcrypt is the lower layer). It's also unclear how it could be much faster, since dmcrypt generally is bottlenecked on AES-NI computation (https://blog.cloudflare.com/speeding-up-linux-disk-encryptio...), which ZFS has to do too.
Also, that way I can have Linux and FreeBSD living on the same pool, seamlessly sharing my free space, without losing the ability to use encryption. Doing both LUKS and GELI would require partitioning and giving each OS its own pool.
I do manual backup checks, and so did the author, but those are going to be limited in number.
I do not understand making RAID and encryption so very hard, and then treating the use of some NAS-in-a-box distribution as an admission that you don't have the skills to handle it. A lot of people are using ZFS and "native encryption" on Arch Linux (not in this case) when they should just be using mdadm and LUKS on Debian stable. It's like they're overcomplicating things in order to drop trendy brand names around other nerds, then dramatically denouncing those brand names when everything goes wrong for them.
If you don't have any special needs, and you don't know what you're doing, just do it the simple way. This all just seems horrific. I've got >15 year old mdadm+luks arrays that have none of their original disks, are 5x their original disk size, have survived plenty of failures, and aren't in their original machines. It's not hard, and dealing with them is not constantly evolving.
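For reference, the "simple way" being described is roughly this (device names and sizes are examples):

```shell
# RAID5 across four disks, LUKS on the array, plain filesystem on top
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cryptarray
mkfs.ext4 /dev/mapper/cryptarray

# Growing years later: replace disks one by one, then expand each layer
mdadm --grow /dev/md0 --size=max
cryptsetup resize cryptarray
resize2fs /dev/mapper/cryptarray
```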
Reading this gives me childhood anxiety from when I compressed by dad's PC with a BBS pirated copy of Stacker so I would have more space for pirated Sierra games, it errored out before finishing, and everything was inaccessible. I spent from dusk to dawn trying to figure out how to fix it (before the internet, but I was pretty good at DOS) and I still don't know how I managed it. I thought I was doomed. Ran like a dream afterwards and he never found out.
ZFS is quite mature, the feature discussed in the article is not. As others have pointed out this could have been avoided by running ZFS on top of luks and would have hardly sacrificed any functionality.
Sure, but LUKS+ZFS provides all that too, and also encrypts everything (ZFS encryption, surprisingly, does not encrypt metadata).
As this article demonstrates, encryption really is an afterthought with ZFS. Just as ZFS rethought from first principles what storage requires and ended up making some great decisions, someone needs to rethink from first principles what secure storage requires.
You get these for free with btrfs
> There are very real reasons to use ZFS
I feel like, for the types of person GP is talking about, they likely don't really need to use ZFS, and luks+md+lvm would be just fine for them.
Like the GP, I have such a setup that's been in operation for 15-20 years now, with none of the original disks, probably 4 or 5 full disk swaps, starting out as a 4x 500GB array, which is now a 5x 8TB array. It's worked perfectly fine, and the only times I've come close to losing data is when I have done something truly stupid (that is, directly and intentionally ignored the advice of many online tutorials)... and even then, I still have all my data.
Honestly the only thing missing that I wish I had was data checksumming, and even then... eh.
I don't use ZFS-native encryption, so I won't speak to that, but in what way is RAID hard? You just `zpool create` with the topology and devices and it works. In fact,
> If you don't have any special needs, and you don't know what you're doing, just do it the simple way. This all just seems horrific. I've got >15 year old mdadm+luks arrays that have none of their original disks, are 5x their original disk size, have survived plenty of failures, and aren't in their original machines. It's not hard, and dealing with them is not constantly evolving.
I would write almost this exact thing, but with ZFS. It's simple, it's easy, it just keeps going through disk replacements and migrations.
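For comparison, the equivalent ZFS lifecycle really is just a handful of commands (pool name and devices are examples):

```shell
# Create a RAIDZ2 pool across six disks
zpool create tank raidz2 sdb sdc sdd sde sdf sdg

# Replace a failed (or smaller) disk; resilvering happens automatically
zpool replace tank sdd sdh

# Check on the pool's health
zpool status tank
```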
As a zfs user employing encryption, that read like a horror story. Great read, and thanks for the takeaway.
The first answer on Stack Overflow permanently made the data unrecoverable, and it was only in the comments below it that people mentioned this...
All my data, projects and whatnot, got lost because of it, and that taught me the lesson of actually reading the whole thing.
I sometimes wonder if using AI would've made any difference, or whether it would even have mattered, because I didn't want to use AI and that's why I went to Stack Overflow lol... But AI hallucinates too, so it was a good reality check for me to always read the whole thing before running commands.
AI is trained on stackoverflow and much, much worse support forums. At least SO has the comments below bad advice to warn others, AI will just say "Oops, you're entirely right, I made a mistake and now your data is permanently gone".
In the end I just asked it to flash the drive clean so that I could at least use my HDD, which was now in limbo, and it couldn't even do that.
I was just wondering in my comment whether it would have originally given me a different command or not, but chances are it would have gaslit me rather than given me the right command lol.
So yes it got unrecoverable.
And then I just wiped that drive by flashing NixOS onto it and trying that for a while, so maybe there is good in every bad, and I definitely learned to always be cautious about what commands you run.
zpool import -D
https://openzfs.github.io/openzfs-docs/man/master/8/zpool-im...
I haven't tried this, but I gather from the blog post that it would have been much simpler, as it doesn't require any of the encryption stuff.
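In case anyone wants to try it, the flow is roughly this (pool name is an example):

```shell
# With no pool name, list destroyed pools whose labels are still intact
zpool import -D

# Re-import a destroyed pool by name (-f forces it if it looks in use)
zpool import -D -f tank
```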