There are situations, though, where you're working on a document whose "save" format is essentially a memory dump. With formats of that kind (Adobe RAW, for example), in-memory corruption gets written straight to disk and data is silently lost.
It might present itself as a one-pixel colour difference, but it could be far more damaging (incorrect figures in accounting software, for example). Software trusts memory, but memory can lie.
Well, maybe. Rather than trusting memory completely, it would be better to use a binary format in which every bit is verifiable, so that at the very least a single bit flip is immediately obvious. For example, a bit flip in a TLS record causes the integrity check to fail and the whole session to abort, rather than silently changing a random page element.
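A minimal sketch of such a format (hypothetical `pack`/`unpack` functions, using CRC-32 from Python's stdlib): each record carries a checksum of its payload, so any bit flipped in storage fails verification on read instead of passing as valid data.

```python
import zlib

def pack(payload: bytes) -> bytes:
    # Prefix the payload with its CRC-32 so any stored bit flip is detectable.
    return zlib.crc32(payload).to_bytes(4, "big") + payload

def unpack(record: bytes) -> bytes:
    stored = int.from_bytes(record[:4], "big")
    payload = record[4:]
    if zlib.crc32(payload) != stored:
        raise ValueError("checksum mismatch: stored record is corrupted")
    return payload

rec = pack(b"balance=1000")
flipped = rec[:-1] + bytes([rec[-1] ^ 0x01])  # simulate one bit flip on disk
print(unpack(rec))  # round-trips fine; unpack(flipped) raises ValueError
```

Note this only detects corruption that happens after the checksum is computed; it says nothing about data that was already wrong in memory.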
Perhaps consumer-grade software that needs correctness guarantees should implement error correction in software: database records for financial applications, DNS, e-mail addresses, and so on.
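A sketch of what software error correction could look like, using a classic Hamming(7,4) code (the function names are mine, not from any library): three parity bits per four data bits allow the reader to correct any single flipped bit, not just detect it.

```python
def hamming74_encode(data):
    # data: four bits [d1, d2, d3, d4] -> seven-bit codeword
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4  # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    # Recompute the three parity checks; the syndrome is the 1-indexed
    # position of a single flipped bit (0 means no error detected).
    s1 = code[0] ^ code[2] ^ code[4] ^ code[6]
    s2 = code[1] ^ code[2] ^ code[5] ^ code[6]
    s3 = code[3] ^ code[4] ^ code[5] ^ code[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    code = list(code)
    if syndrome:
        code[syndrome - 1] ^= 1  # flip the offending bit back
    return [code[2], code[4], code[5], code[6]]
```

Hamming(7,4) costs 75% overhead, which is why hardware ECC instead applies a SECDED code over whole 64-bit words, bringing the overhead down to 12.5%.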
Why would it matter if it HAD a checksum? The numbers would have been altered before the save, so the checksum is computed over already-corrupted data: you meant to store a one, but a two was written out and a two reads back later, checksum intact. Only if the format computed checksums immediately, block by block as data entered memory, could it detect memory corruption at all, and even then only some of it. The downside is severe: such code is untestable under normal conditions, hard to maintain, and costs more in development than the ECC hardware it replaces.
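A minimal sketch of that failure mode (hypothetical `save`/`load` routines): the bit flips in RAM before the checksum is computed, so the checksum dutifully protects the wrong value and verification succeeds on read.

```python
import zlib

def save(value: int) -> bytes:
    payload = value.to_bytes(8, "big")
    return zlib.crc32(payload).to_bytes(4, "big") + payload

def load(record: bytes) -> int:
    stored = int.from_bytes(record[:4], "big")
    payload = record[4:]
    if zlib.crc32(payload) != stored:
        raise ValueError("corrupted in storage")
    return int.from_bytes(payload, "big")

balance = 1
balance ^= 0x02       # bit flip in RAM *before* the save: 1 becomes 3
record = save(balance)
print(load(record))   # prints 3: the checksum verifies, the value is still wrong
```

The checksum is doing its job perfectly; it just has no way of knowing the data was already wrong when it was computed.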
Those corner cases occur rarely, and their expected cost (rate of occurrence times severity when they hit) probably doesn't justify the price premium for most users. In a data center processing millions of transactions per minute, though, the same error rate bites far more often, so the calculus changes.
I would EASILY pay 12.5% more (that's the bit overhead of ECC) for memory that actually works.
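For anyone wondering where the 12.5% comes from: standard ECC DIMMs store 8 check bits alongside every 64 data bits, making each word 72 bits wide.

```python
# SECDED ECC as used on standard ECC DIMMs: 72-bit words carrying 64 data bits.
data_bits, check_bits = 64, 8
print(f"overhead: {check_bits / data_bits:.1%}")  # overhead: 12.5%
```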
If my data is fine being corrupted to save 12.5% on RAM costs, then why am I even bothering processing the data? Apparently it's worthless.
People today weigh 16GB against 32GB on a mid-tier desktop, roughly doubling the RAM cost for twice the capacity. Next to that, paying 12.5% more for ECC RAM is a no-brainer.