I didn't say anything about speed of recovery because it's not relevant: recovery can't happen at all if enough fragments aren't online. The maths says that with uncorrelated fragment placement, and thus uncorrelated failures, and with enough data, you are basically guaranteed to lose data. Try doing the maths for an entire filesystem, where each file/block is individually erasure coded.
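To make that concrete, here is a minimal sketch of the calculation under an independent-failure model. The fragment-offline probability (0.1%) and stripe count (10^9) are illustrative assumptions, not measured numbers:

```python
from math import comb

def stripe_loss_prob(p, n=9, m=3):
    # A 6+3 stripe is lost only when more than m of its n fragments fail
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m + 1, n + 1))

def fs_loss_prob(p, stripes):
    # P(at least one independently coded stripe in the filesystem is lost)
    return 1 - (1 - stripe_loss_prob(p)) ** stripes

# One stripe is extremely safe; a billion of them are not.
print(stripe_loss_prob(0.001))     # ~1.25e-10 per stripe
print(fs_loss_prob(0.001, 10**9))  # ~12% across the whole filesystem
```

A per-stripe loss probability of 10^-10 sounds bulletproof, but multiplied across a filesystem's worth of stripes it becomes a double-digit percentage, which is the point being made above.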
We stored hundreds of petabytes on cheap SATA drives with random fragment placement, using Reed-Solomon 6+3 coding (half the space of 3 replicas but the same durability). Never lost a byte.
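For what it's worth, the "same durability as 3 replicas" claim holds up in a simple independent-failure model. The per-device failure probability below is an assumed illustrative figure, not a number from this deployment:

```python
from math import comb

def rs_6_3_loss(p):
    # 6+3 Reed-Solomon: a stripe is lost only if more than 3 of 9 fragments fail
    return sum(comb(9, k) * p**k * (1 - p)**(9 - k) for k in range(4, 10))

def replica_loss(p):
    # 3-way replication: an object is lost only if all 3 copies fail
    return p ** 3

p = 0.01  # assumed chance a given device is dead at any moment
print(rs_6_3_loss(p))   # ~1.2e-6
print(replica_loss(p))  # ~1e-6
# Space overhead: 9/6 = 1.5x for RS 6+3 vs 3x for replication.
```

At the same per-device failure probability, the two schemes land within a small constant factor of each other in loss probability, while the coded layout needs half the raw capacity.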
Speed of recovery is crucial, because the rebuild window is your window of vulnerability to additional failures. For example, try RAID 5 on giant drives: the chance of losing a second drive during recovery is uncomfortably high.
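A back-of-the-envelope version of that rebuild window, using assumed (not measured) drive numbers:

```python
from math import exp

# Assumed, illustrative numbers -- adjust for real hardware.
drive_tb = 20
rebuild_mb_per_s = 100   # sustained rebuild rate
survivors = 7            # an 8-drive RAID 5 after one failure
afr = 0.02               # 2% annualised failure rate per drive
ure_per_bit = 1e-14      # commonly quoted unrecoverable-read-error rate

rebuild_hours = drive_tb * 1e12 / (rebuild_mb_per_s * 1e6) / 3600
# Chance that some surviving drive dies inside the rebuild window
p_second_failure = 1 - (1 - afr) ** (survivors * rebuild_hours / (365 * 24))
# Rebuild must read every surviving drive end to end; chance of hitting a URE
bits_read = survivors * drive_tb * 1e12 * 8
p_ure = 1 - exp(-ure_per_bit * bits_read)

print(round(rebuild_hours, 1))  # ~55.6 hours
print(p_second_failure)         # ~0.09%: small but not negligible
print(p_ure)                    # ~1.0: a URE mid-rebuild is near certain
```

Under these assumptions the outright second-drive loss is modest per rebuild, but the unrecoverable-read-error term makes a failed RAID 5 rebuild on huge drives close to certain, which is the practical version of the risk described above.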
No need to be rude. EDIT: The offensive part was removed
What was the probability of failure of your drives? My guess is you just didn't hit the threshold for your failure rate. The maths checks out (PhD here). Seriously, do the calculation.