I feel like the need for a triple-redundancy option in RAID is superseded by more “advanced” software “RAID” at the file-system level, such as ZFS or Btrfs (to an extent). Further, the increasing availability and affordability of ECC RAM in non-enterprise hardware makes the call for additional redundancy even less urgent.
There is a nicety to having that backup battery on a RAID card so that pending writes in the write-back cache can be flushed in the event of a power outage; however, this is easily solved by a UPS. In the event of a power outage, not losing any data I might have been transferring to the array is nice, but I’m still losing my OS state and any unsaved things I was working on.
ZFS has triple-parity with RAID-Z3:
> The need for RAID-Z3 arose in the early 2000s as multi-terabyte capacity drives became more common. This increase in capacity—without a corresponding increase in throughput speeds—meant that rebuilding an array due to a failed drive could "easily take weeks or months" to complete.[38] During this time, the older disks in the array will be stressed by the additional workload, which could result in data corruption or drive failure. By increasing parity, RAID-Z3 reduces the chance of data loss by simply increasing redundancy.[40]
* https://en.wikipedia.org/wiki/ZFS#ZFS's_approach:_RAID-Z_and...
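The rebuild-time concern above is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, using hypothetical figures (a 20 TB drive resilvered at a sustained 200 MB/s); real rebuilds contend with ongoing array I/O and random access, so they can run far longer than this best case:

```python
# Best-case single-drive rebuild time: capacity / sequential throughput.
capacity_tb = 20        # hypothetical modern drive size
throughput_mb_s = 200   # hypothetical sustained rebuild rate

capacity_mb = capacity_tb * 1_000_000  # decimal units: 1 TB = 1,000,000 MB
rebuild_hours = capacity_mb / throughput_mb_s / 3600
print(f"Best-case rebuild: {rebuild_hours:.1f} hours")  # ~27.8 hours
```

Even the ideal case is over a day per drive; add production load and the weeks-long rebuilds quoted above stop sounding far-fetched, which is exactly the window triple parity is meant to cover.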
The use of software versus "hardware" (firmware) does not remove the need for extra copies of data.
NVMe SSDs, whether direct-attached or over PCIe, solve many of the problems that RAID arrays were originally built for.

Sure, in theory you can go even faster and add redundancy by applying RAID concepts to these NVMe SSDs, but that's a tad overkill.
I don't like the trend of SSDs (NVMe or otherwise) getting cheaper and cheaper, because it's coming at the cost of reliability and endurance. Sure, I can get 2TB for ~$100, but at this point I'm not convinced it will outlast spinning rust, as has been the colloquial assumption since 2.5" SSDs first hit the scene circa 2008.
I've quickly destroyed (consumer-grade) SSDs before by running stuff that is constantly reading and writing to them. Microsoft's Azure Stack Development Kit (ASDK) is one example.
Therefore, I'm actually very receptive to RAIDing SSDs, be it with mdadm, ZFS, or some other means. I do agree that striping (RAID0) NVMe drives is a bit ridiculous, but RAID1 definitely adds value.
SLC: 100K cycles
MLC: 10K cycles
TLC: 3K cycles
QLC: 1K cycles
https://www.kingston.com/en/blog/pc-performance/difference-b...
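Those cycle ratings translate directly into a rough endurance budget: total bytes writable ≈ capacity × P/E cycles. A minimal sketch using the ballpark figures above and the ~$100 2TB drive mentioned earlier (ignoring write amplification and over-provisioning, which cut the real figure further):

```python
# Rough endurance estimate per NAND cell type, for a 2 TB drive.
# Cycle counts are the ballpark ratings quoted above; real drives vary.
cycles = {"SLC": 100_000, "MLC": 10_000, "TLC": 3_000, "QLC": 1_000}
capacity_tb = 2

for cell, pe in cycles.items():
    tbw = capacity_tb * pe  # terabytes written before wear-out
    print(f"{cell}: ~{tbw:,} TBW")
```

A 2TB QLC drive at ~2,000 TBW versus ~200,000 TBW for hypothetical SLC is a 100x gap, which is why write-heavy workloads like the ASDK example above chew through cheap consumer drives so quickly.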
Now SSDs seem to be taking over the laptop world and HDDs are getting harder to find (and smaller, as the larger CMR drives get replaced with garbage SMR models). It seems like reliable storage in a laptop is quickly becoming a thing of the past.
That's a red herring; the magnitude of this effect is usually greatly exaggerated, and in reality never comes close to being as important as the fact that flash memory is simply too expensive to use for cold storage.
And of course, the very idea of cold storage is dangerous: if you really want your data to last for decades, you should be verifying your backups at least annually and planning to migrate your data off any media that is obsolete and at risk of becoming hard to read with commodity hardware. Doing so also entirely eliminates the flash memory data-retention concerns above.
RAID0 is handy in HPC for local scratch space.