I was a diehard fan of dynamic languages for a decade, but eventually I saw the light.
I still use dynamic languages daily, but the bigger the project the more I want proper static typing. None of that mypy stuff, the real deal.
I have a 20k LOC distributed system (40k if you include all the libraries I've written for it) running in prod, written in a (typechecked) dynamic language, and it's fine. I had very few typing errors in dev, and one typing error that I pushed to prod. Even that one had no discernible customer effect; I discovered it months later while checking over my logs to understand logical errors. Currently all of the reported errors are race conditions on startup (and sporadic network errors). But it's fine. The system tolerates them and restarts the failing modules instead of being pedantic, so instead of spending weeks debugging concurrent startup, I get a no-hassle, fast-booting system (which also means nonblocking bootup and thus higher customer availability during failure and restart).

The other nice thing about dynamic systems is that, if they're not otherwise terrible, (logical) debugging is a breeze, because all types are inspectable and I don't have to worry about implementing observability hooks for every single datatype. I can just look at the data as it moves through my system. That also means telemetry and logging with structured metadata just "happens", and is highly composable with no hassle.
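The tolerate-and-restart approach described above can be sketched in a few lines. This is a minimal illustration, not the commenter's actual system; the module names and the `FlakyModule`/`supervise` helpers are hypothetical, standing in for modules that hit transient races on startup:

```python
import time

class FlakyModule:
    """Hypothetical module that hits a startup race on its first attempts."""
    def __init__(self, name, failures_before_ok):
        self.name = name
        self.failures_left = failures_before_ok

    def start(self):
        if self.failures_left > 0:
            self.failures_left -= 1
            raise ConnectionError(f"{self.name}: dependency not ready yet")
        return "running"

def supervise(modules, max_attempts=5, backoff=0.0):
    """Restart each failing module instead of blocking the whole boot."""
    status = {}
    for mod in modules:
        for attempt in range(1, max_attempts + 1):
            try:
                status[mod.name] = mod.start()
                break
            except ConnectionError:
                # Tolerate the race: back off a little and retry this module
                # without holding up the rest of the system.
                time.sleep(backoff * attempt)
        else:
            status[mod.name] = "gave up"
    return status

mods = [FlakyModule("ingest", 2), FlakyModule("api", 0)]
print(supervise(mods))  # → {'ingest': 'running', 'api': 'running'}
```

The point isn't the retry loop itself but the design choice: startup races are treated as expected, recoverable events rather than bugs to be eliminated up front.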
I guess my biggest problem with static typing absolutism is that it leads to this attitude that code can be correct. It can't. Even if it's provably correct (in the mathematical sense), if your axioms are violated by your system, then your system can wind up in an undefined state. You must plan for failure. And sometimes the most efficient/effective plan is "do nothing".
Yes, and if your hardware is broken or a cosmic ray hits your system in just the wrong way, then all the static type checking in the world won't help against that. But that's a different problem to solve, isn't it?
I'd postulate that for most applications, you'd usually be happy to have a software stack that's less likely to contain programming bugs in the first place. It's also true that you don't want faulty airbag software in your car to affect your steering, but don't you want non-faulty airbag software as well?
You still derive value from how much of the correctness checking you can offload to the compiler.
I don't really get why programmers, who understand better than anyone the value of automating work with a computer, fail to apply that value to their own craft. Let the computer do more of the work of making sure your program is correct.
After seeing how much I got wrong in Python (that apparently worked anyway), I realized I couldn't really trust myself (even after a decade in Python!)