As painful as the debugging story was, I have spent vastly more time working around garbage collectors to ship performant code.
But there were, and are, plenty of trading shops that paid Azul for their pauseless C4 GC. Nowadays there are also ZGC and Shenandoah, so if you want to both allocate a lot and also not have pauses, that tech is no longer expensive.
Well, I just trivialized it. However, in one case in the mid-00s, I saw the GC disabled completely to avoid any pauses during trading hours.
> or a design that does not respect how GC works in the first place
It’s called shipping a 90 Hz VR game without dropping frames.
(if that is the case, I understand where the GC PTSD comes from)
To be fair, there are about 4 completely independent bad decisions that tend to be made together in a given language. GC is just one of them, and not necessarily the worst (possibly the least bad, even).
The decisions, in rough order of importance according to some guy on the Internet:
1. The static-vs-dynamic axis. This is not a binary decision, things like "functions tend to accept interfaces rather than concrete types" and "all methods are virtual unless marked final" still penalize you even if you appear to have static types. C++'s "static duck typing" in templates theoretically counts here, but damages programmer sanity rather than runtime performance. Expressivity of the type system (higher-kinded types, generics) also matters. Thus Java-like languages don't actually do particularly great here.
2. The AOT-vs-JIT axis. Again, this is not a binary decision, nor is it fully independent of other axes - in particular, dynamic languages with optimistic tracing JITs are worse than Java-style JITs. A notable compromise is "cached JIT" aka "AOT at startup" (in particular, this deals with -march=native), though this can fail badly in "rebuild the container every startup" workflows. Theoretically some degree of runtime JIT can help too since PGO is hard, but it's usually lost in the noise. Note that if your language understands what "relocations" are you can win a lot. Java-like languages can lose badly for some workflows (e.g. tools intended to be launched from bash interactively) here, but other workflows can ignore this.
3. The inline-vs-indirect-object axis - that is, are all objects (effectively) forced to be separate allocations, or can they be subobjects (value types)? If local variables can avoid allocation that only counts for a little bit. Java loses very badly here outside of purely numerical code (Project Valhalla has been promising a solution for a decade now, and given their unwieldy proposals it's not clear they actually understand the problem), but C# is tolerable, though still far behind C++ (note the "fat shared" implications with #4). In other words - yes, usually the problem isn't the GC, it's the fact that the language forces you to generate garbage in the first place.
4. The intended-vs-uncontrollable-memory-ownership axis. GC-by-default is an automatic loss here; the bare minimum is to support the widely-intended (unique, shared, weak, borrowed) quartet without much programmer overhead (barring the bug below, you can write unique-like logic by hand, and implement the others in terms of it; particularly, many languages have poor support for weak), but I have a much bigger list [1] and some require language support to implement. try-with-resources (= Python-style with) is worth a little here but nowhere near enough to count as a win; try-finally is assumed even in the worst case but worth nothing due to being very ugly. Note that many languages are unavoidably buggy if they allow an exception to occur between the construction of an object and its assignment to a variable; the only way to avoid this is to write extensions in native code.
[1] https://gist.github.com/o11c/dee52f11428b3d70914c4ed5652d43f... - a list of intended memory ownership policies. Generalized GC has never found a theoretical use; it only shows up as a workaround.