Thomas' Digital Garden blog is not really the place to find good advice on this. Crypto researchers are the place to look. The quality of how Linux handles this is quite open to debate, and researchers routinely question the choices made. Here's [1] one of many. Use Google Scholar, enter urandom, and limit the search to recent years to see more.
The way urandom does stretching does not provide entropy (which follows from the second law of thermo). And yes, there is a finite amount. Developers have decided to conflate actual entropy with "hard to compute," which is simply not true.
Yes, you can get 128 bits of true entropy, and then use a stream cipher to reuse it over and over, but it's not adding or making more entropy. It's simply that the stream cipher is not (yet?) broken, at which point you'll realize there is not enough entropy.
By the argument of entropy stretching, you could claim you only need 10, or 2, or 1 bit of entropy, and then use a stream cipher, and have unlimited entropy. Since that would be broken quickly, showing the lie, they choose 128 or 256 or so, and hope the cipher method doesn't break.
And yes, getting more and more from that stream weakens the system, and eventually, like all crypto, the method will break.
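To make the stretching point concrete, here's a toy sketch (SHA-256 in counter mode standing in for the stream cipher; the function name and layout are mine, not anything from Linux):

```python
import hashlib

def stretch(seed: bytes, n_blocks: int) -> bytes:
    """Expand a small seed into a long output stream: SHA-256 in
    counter mode, a toy stand-in for the stream cipher above."""
    out = b""
    for i in range(n_blocks):
        out += hashlib.sha256(seed + i.to_bytes(8, "big")).digest()
    return out

seed = b"\x00" * 16            # 128 bits of "true" entropy in
stream = stretch(seed, 1024)   # 32 KiB out
# Same seed, same stream: the output is a pure function of the seed,
# so no entropy was added - only "hard to compute".
assert stretch(seed, 1024) == stream
```

The output is exactly as unpredictable as the 128-bit seed, plus the assumption that SHA-256 (or the real stream cipher) stays unbroken.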
>This is how we end up with, e.g., hashmap implementations that use non-cryptographic hash functions, and then an attacker DoS's your server with 10,000 keys
Conversely, if you use an algorithm that takes 100x the time to hash, they simply DoS your server by sending it any large batch of things to hash. For many hashing workloads the hash/PRNG itself is a significant fraction of the time spent. This is why there is no one-size-fits-all method.
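A rough way to see the cost gap (CRC32 standing in for a fast non-crypto hash, SHA-256 for a crypto one; the exact ratio varies by machine):

```python
import hashlib
import time
import zlib

keys = [b"key-%d" % i for i in range(100_000)]

t0 = time.perf_counter()
fast = [zlib.crc32(k) for k in keys]               # fast non-crypto hash
t1 = time.perf_counter()
slow = [hashlib.sha256(k).digest() for k in keys]  # crypto hash
t2 = time.perf_counter()

# The crypto hash is typically several times slower per key; multiply
# that by every insert/lookup and the "safe" choice becomes its own
# DoS surface under a large honest workload.
print("crc32:  %.4fs" % (t1 - t0))
print("sha256: %.4fs" % (t2 - t1))
```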
I have worked in crypto (and high speed computing and numerics) for decades, and have patents in crypto stuff, and have even written articles about PRNGs (and given talks on how to break them via SAT solvers and Z3 stuff), so I am well aware of the uses of all these pieces.
If some area needs crypto, assuming an amateur will simply throw in a proper function call and now magically have security is a terrible way to design or implement systems. If an area needs crypto, have a senior person decide what to do, and walk the person who would otherwise be left guessing through how it works.
Your advice leads to people thinking (incorrectly) that since they called getentropy to seed, now they are secure, when there are a zillion other attacks they are still open to (timing, misusing block ciphers, choosing the wrong hash map type in the case of hash maps, not handling salt correctly, not using memory buffers properly, not ensuring contents don't leak, not ensuring things running in the cloud don't leak, and on and on..., forgetting any of a ton of things needed to make the thing secure).
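The timing one alone is easy to trip over even after seeding correctly. A minimal sketch (`secret` is a made-up token, and the function names are mine):

```python
import hmac

secret = b"s3cret-session-token"

def check_insecure(guess: bytes) -> bool:
    # == on bytes bails out at the first mismatching byte, so the
    # response time leaks how much of the prefix an attacker has right.
    return guess == secret

def check_better(guess: bytes) -> bool:
    # Constant-time comparison from the stdlib - one of the many
    # details getentropy does nothing about.
    return hmac.compare_digest(guess, secret)
```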
If you're worried about DDoS, paying 100x the time for a PRNG call can lead to similar system failures - in gaming, rendering, simulation, science stuff.
So the same advice: use the one that is most suited. I always recommend defaulting to the faster one, since someone who doesn't know crypto should never be implementing things that accidentally need crypto at such a low level anyway. And the slowdowns hit all software.
>Except for a few niche applications
The majority of running code in the world is not facing attackers - it's stuff like embedded systems, medical devices, tools and toys, programs on my PC (only a tiny few face the world), data processing systems (finance, billing, logging, tracking, inventory...), and so on. The largest use of PRNG calls are simulation by far, which most certainly do not want to pay the 100x performance hit.
The niche applications are those needing a CSPRNG, which is truly the smaller fraction of pieces of code written.
>Or "random" automated tests that always use the same PRNG with the same seed, meaning certain code paths will never be taken
Repeatable tests are by far the easiest to fix. I've seen tons of people do exactly what you describe, and get an error they never see again. The correct answer here is not to rely on randomness executing a path in your code for testing. Make sure the code is tested properly, not randomly. Relying on the luck of random number generators is simply bad advice. Use them to get more values, or to stress test for load testing, but expecting a lucky number pick to happen on test 1 of a trillion runs is not very solid advice compared to simply running the test, repeatably, with enough runs to provide the level of stress you desire. At least then you can redo the test, find the bug, and fix it in reasonable time.
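One pattern that keeps the stress-testing value while staying repeatable: pick a seed up front, report it, and let a failing run be replayed. A sketch (the `TEST_SEED` env var name is just one I made up):

```python
import os
import random
import sys

# Pick a seed once, report it, and allow it to be pinned for replay.
seed = int(os.environ.get("TEST_SEED", random.randrange(2**32)))
print("TEST_SEED=%d (set this env var to replay a failure)" % seed,
      file=sys.stderr)
rng = random.Random(seed)

def test_reverse_roundtrip():
    # Random inputs for stress, but fully reproducible from the seed.
    data = [rng.randrange(1_000_000) for _ in range(1000)]
    assert list(reversed(list(reversed(data)))) == data

test_reverse_roundtrip()
```

A failure in CI then comes with the seed that triggered it, so the exact run can be redone locally instead of hoping to get lucky again.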
>that is no longer the case: we can have both
No, we do not. I regularly work on crypto code and often get called in to go over stuff with others. I also write lots of high performance code (mostly scientific, simulation, and the occasional rendering needs) that needs speed. There is no PRNG that meets both needs by a long shot.
> They should just be using the default RNG provided by their language,
Agreed.
> which should be a CSPRNG, not a PRNG
And that is not true for any language I am aware of for the reasons I mentioned.
Python: Mersenne Twister
C#: Xoshiro
JavaScript: xorshift128+
Go: Additive Lagged Fibonacci
C/C++: large range, from really bad to mostly bad
Java: LCG
And so on......

[1] "An Empirical Study on the Quality of Entropy Sources in Linux Random Number Generator", https://ieeexplore.ieee.org/abstract/document/9839285
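For the Python entry in the list above, the split is visible right in the stdlib: the default is the (fast, seedable, predictable) Mersenne Twister, and the CSPRNG is opt-in, not the default:

```python
import random
import secrets

# Default RNG: Mersenne Twister. Reproducible from a seed - good for
# simulation, but its state is recoverable from observed outputs.
a = random.Random(42)
b = random.Random(42)
assert [a.randrange(100) for _ in range(5)] == \
       [b.randrange(100) for _ in range(5)]

# The CSPRNG exists, but you have to ask for it explicitly:
sr = random.SystemRandom()     # wraps os.urandom
token = secrets.token_hex(16)  # 128 bits from the OS CSPRNG
assert len(token) == 32
```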