What if it's the difference between your schema working under high transaction rates or not? Or the difference between buying much more database hardware just to keep shards small enough to avoid index contention? (Which is $$.)
Facebook's MySQL architecture was (at least historically, and likely still is) based on this approach: https://backchannel.org/blog/friendfeed-schemaless-mysql
Note the simple schema and the lack of FKs.
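As a rough sketch of the pattern the post describes (one opaque-blob entities table, with lookup "index tables" that the application maintains itself instead of the database enforcing anything): the table and column names below are illustrative, not the exact production schema, and I'm using sqlite3 just to keep it runnable.

```python
import json
import sqlite3
import uuid

db = sqlite3.connect(":memory:")
db.executescript("""
    -- No foreign keys anywhere; the body is an opaque serialized blob.
    CREATE TABLE entities (
        added_id INTEGER PRIMARY KEY AUTOINCREMENT,  -- physical insert order
        id       BLOB NOT NULL UNIQUE,               -- logical UUID
        body     BLOB NOT NULL                       -- serialized entity
    );
    -- An "index table" the application writes alongside the entity.
    CREATE TABLE index_user (
        user_id   TEXT NOT NULL,
        entity_id BLOB NOT NULL,
        PRIMARY KEY (user_id, entity_id)
    );
""")

def save_entity(entity):
    """Write the blob, then update index tables in application code."""
    eid = uuid.uuid4().bytes
    db.execute("INSERT INTO entities (id, body) VALUES (?, ?)",
               (eid, json.dumps(entity).encode()))
    db.execute("INSERT INTO index_user (user_id, entity_id) VALUES (?, ?)",
               (entity["user_id"], eid))
    return eid

def entities_for_user(user_id):
    """Look up via the index table; no FK ties it to entities."""
    rows = db.execute(
        "SELECT e.body FROM index_user i "
        "JOIN entities e ON e.id = i.entity_id "
        "WHERE i.user_id = ?", (user_id,))
    return [json.loads(r[0]) for r in rows]

save_entity({"user_id": "alice", "text": "hello"})
print(entities_for_user("alice")[0]["text"])  # hello
```

The point is that the database only ever sees dumb inserts and primary-key lookups; everything relational lives in application code.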
FKs are rarely present in systems doing extremely high transactions per second against the same few tables, because every FK check adds index maintenance and locking on the referenced table.
I say this with lots of experience working on different production systems that see hundreds of thousands of transactions per second. I have yet to see one reach those kinds of numbers using foreign keys, unless it's something like a giant shared-hosting platform operating on hundreds and hundreds of different tables (and therefore with less index contention).
Often, in these shops, data correctness is not validated in realtime this way, but out of the critical path of answering queries — more like eventually consistent. Some places have entire teams dedicated to this.
To your point on correctness: my experience varies a lot across domains in how far correctness can be eventually consistent. For something like healthcare or banking, you would rather spend the money on far more hardware, because you can't afford correctness to be off. With globally scaled social apps, though, that's just not the case.
Most people in those situations still tend to use Oracle, which is legions slower, and legions more expensive, than MySQL or PostgreSQL.