My anecdotal evidence is far from rigorous, but the Google data from ten years ago doesn't match my experience running thousands of ECC-enabled servers up until a few years ago. Their rates seem a lot higher than what my servers experienced. We would page on any RAM errors, correctable or not (uncorrectable errors would halt the machine, so we had to inspect the console to confirm; when we knowingly put machines back in service after an uncorrectable-error halt, nearly all of them failed again within 24 hours, so the ones whose consoles we didn't inspect were probably counted on their second failure). While there were pages from time to time, it felt like a lot less than 8% of the machines having an error per year.
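For what it's worth, the "page on any RAM error" policy above is easy to approximate on a Linux box via the EDAC subsystem, which exposes per-memory-controller counters under sysfs. Here's a minimal sketch; it assumes a Linux host with EDAC loaded, and `should_page` is just a made-up helper name for the alerting decision:

```python
import glob
import os

def read_edac_counts(root="/sys/devices/system/edac/mc"):
    # The Linux EDAC subsystem exposes correctable (ce_count) and
    # uncorrectable (ue_count) error totals per memory controller.
    counts = {}
    for mc in glob.glob(os.path.join(root, "mc*")):
        entry = {}
        for name in ("ce_count", "ue_count"):
            path = os.path.join(mc, name)
            if os.path.exists(path):
                with open(path) as f:
                    entry[name] = int(f.read().strip())
        counts[os.path.basename(mc)] = entry
    return counts

def should_page(counts):
    # Mirror the policy described above: page on *any* error,
    # correctable or not.
    return any(
        entry.get("ce_count", 0) > 0 or entry.get("ue_count", 0) > 0
        for entry in counts.values()
    )
```

In practice you'd track deltas between polls rather than raw totals (the counters only reset on reboot), but the decision logic is the same.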
There are a lot of variables that go into RAM errors, including manufacturing quality and the condition of the RAM, the DIMM, the DIMM slot, the motherboard generally, the power supply, the wiring, and the temperature of all of those. Google was known for cost cutting in its servers, especially early on, so I wouldn't be surprised if some of that resulted in a higher bit-flip rate than you'd see in commercially available servers. Things like running bare motherboards supported only at the edges cause excess strain, which can affect the resistance and capacitance of traces on the board (and in extreme cases, break the traces outright).