The very quick high level explanation is that in storage you talk about "stretch factor": for every byte of file data, how many bytes do you have to store to get the desired durability. If your approach to durability is to make 3 copies, that's a 3x stretch factor. Assuming you're smart, you'll have those copies spread across different servers, or at least different hard disks, so you can tolerate the loss of 2 of them.
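To put numbers on that (a trivial sketch, nothing system-specific):

```python
# 3-copy replication: stretch factor equals the copy count,
# and you can lose all but one copy before the data is gone.
copies = 3
object_bytes = 1_000_000

stored_bytes = object_bytes * copies   # 3_000_000 -> 3x stretch factor
tolerable_losses = copies - 1          # 2 servers/disks can fail
print(stored_bytes, tolerable_losses)
```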
With erasure coding you apply a mathematical transformation to the incoming object and shard it up. Out of those shards you only need to retrieve a certain number to reproduce the original object, and both the number of shards you produce and how many you need to recreate the original are configurable. Let's say it shards to 12, and you need any 9 to recreate. The storage that takes up is the ratio 12:9, so roughly a 1.33x stretch factor: for every byte that comes in, you only need to store about 1.33 bytes.
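A quick sketch of that math with the 9-of-12 numbers above; the k/n names are just my shorthand, not any particular library's API:

```python
def stretch_factor(k, n):
    """k-of-n erasure coding: each shard is 1/k of the object and you store n of them."""
    return n / k

print(stretch_factor(9, 12))   # ~1.33: store ~1.33 bytes per byte of object
```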
As before you'd scatter those 12 shards across different servers, or at least different hard disks, and only needing any 9 means you can tolerate losing 3 of them and still retrieve the original object. That's better durability than 3-copy replication while using well under half the storage (about 1.33x vs 3x, so roughly 2.25x less).
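Side by side with the same example numbers:

```python
# Same example numbers, side by side.
replication_stretch = 3.0                 # 3 full copies
ec_stretch = 12 / 9                       # ~1.33 for 9-of-12
print(replication_stretch / ec_stretch)   # ~2.25: replication stores ~2.25x more
print(12 - 9)                             # 3: shards you can lose and still rebuild
```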
The drawback is that to retrieve the object, you have to fetch shards from 9 different locations and apply the transformation to recreate the original object, which adds a small bit of latency, but it's largely negligible these days. The cost of extra servers for your retrieval layer is significantly less than that of storage servers, and you wouldn't need anywhere near the same number as you'd otherwise need for storage.
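To make the "fetch any 9 and transform" step concrete, here's a toy k-of-n code over a prime field using Lagrange interpolation. It's purely illustrative: real systems use Reed-Solomon over GF(2^8) (e.g. via libraries like ISA-L or Jerasure) and encode fixed-size stripes of the object rather than the whole thing at once.

```python
# Toy k-of-n erasure code over the prime field GF(257), for illustration only.
P = 257  # small prime > 255, so every byte value fits in the field

def _interp(points, x):
    """Evaluate the unique polynomial of degree < len(points) through `points` at `x`."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def encode(stripe, k, n):
    """Turn a k-byte stripe into n shards; any k of them recover the stripe."""
    assert len(stripe) == k
    # Systematic: shards 1..k carry the data itself, shards k+1..n carry extra
    # evaluations of the same polynomial (those values can be 256, so keep them as ints).
    data_points = list(enumerate(stripe, start=1))
    return [(x, _interp(data_points, x)) for x in range(1, n + 1)]

def decode(any_k_shards, k):
    """Rebuild the original stripe from any k surviving shards."""
    pts = any_k_shards[:k]
    return bytes(_interp(pts, x) for x in range(1, k + 1))

stripe = b"readme.tx"                # one 9-byte stripe of some object
shards = encode(stripe, k=9, n=12)   # 12 shards, ~1.33x the stripe size in total
print(decode(shards[3:], k=9))       # lose any 3 shards, still get b"readme.tx" back
```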
The underlying file system doesn't really have any appreciable impact under those circumstances. I'd argue ZFS is probably a net negative here, because you're spending extra resources on integrity and redundancy features this layer already provides. You want something as fast and lightweight as possible. Your fixity checks will catch any degradation in shards, and recreating a shard after a failure is pretty cheap.
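For completeness, a fixity check at this layer is really just a per-shard checksum comparison; something like this sketch (the paths and digest bookkeeping are made up):

```python
import hashlib

def shard_is_healthy(path, expected_sha256):
    """Fixity check: hash the shard on disk, compare to the digest recorded at write time."""
    with open(path, "rb") as f:
        actual = hashlib.sha256(f.read()).hexdigest()
    return actual == expected_sha256

# A failed check doesn't need filesystem-level healing: re-derive just that shard
# from any k healthy ones (same encode step as above) and write it back.
```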