Is it a safety feature to type-check regular expressions using dependent types? Is Python a security vulnerability because the performance can be unpredictable?
I don’t know.
Rust, for that matter, doesn’t protect you from running out of memory from leaking data on the heap — or from running out of stack space because your infinitely recursive function doesn’t halt. Maybe that’s not part of memory safety — but that’s my point.
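Both failure modes are reachable from entirely safe Rust; a minimal sketch (the names and sizes here are invented for illustration):

```rust
fn main() {
    // Leaking heap memory is safe Rust: `Box::leak` (like
    // `std::mem::forget`) compiles without complaint and the
    // allocation is simply never freed.
    let leaked: &'static mut Vec<u8> = Box::leak(Box::new(vec![0u8; 1024]));
    println!("leaked {} bytes, never freed", leaked.len());

    // Unbounded recursion also type-checks fine; running it
    // aborts the process with a stack overflow (a crash, not
    // undefined behavior). Left commented out so this compiles
    // into something you can actually run to completion.
    // fn recurse(n: u64) -> u64 { recurse(n + 1) }
    // recurse(0);
}
```

Neither is a memory-*safety* violation in Rust's sense (no use-after-free, no data race), which is exactly the definitional narrowness being complained about.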
There’s a whole safety forest out there. Whenever I read an article about safety in software, it seems like a comfy blanket statement. “This is a nice definition which I will live in.”
I just don’t see how it’s so flat.
Because people like making wild and provocative claims to motivate writing a paper for which the conclusion was already decided.
Anyone who has used, I dunno, any of the programming languages being discussed has a more nuanced take, and isn't spending time trying to force everything into Box A or Box B.
Elixir/Erlang has a pleasant concurrency model. It does some things well, it does other things less well. It eliminates a big class of bugs, and yet you can still write bugs in Elixir.
These sorts of papers are a waste of space on the internet imo.
“My synchronous reply timed out, so why am I getting a message after the timeout?!”
“Why did deleting my build directory fix the compile error?!”
“How do I keep my app from crashing when my supervisor crashed too many times in a given timeframe?!”
So many adventures to be had.
The fourth is the wrong question to ask: if the failure handling mechanism is invoked too frequently, fix the failures, not the recovery.
The third is definitely my fault. Bug reports are appreciated whenever possible.
But generally speaking, yes, debugging is part of every programming language, and concurrency and distribution will add to the adventure whenever used, to varying degrees depending on the programming language.
In any case, the answer continues to be "it depends". But people will continue to look for "one weird trick" solutions to every problem.
Typically you would start with [0] and its derivatives for specific domains.
The core idea is that "any safety-related system must work correctly or fail in a predictable (safe) way."
The interesting question is what to do in corner cases you did not specify explicitly. Typically it is still considered "safe" if the system doesn't do what you would have specified in hindsight, but instead falls into the defined failure reaction. You also try to make security problems take that path.
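The "defined failure reaction" idea can be sketched in a few lines. This is a toy example, not from any standard; the domain (a heater controller) and every name in it are invented:

```rust
// Any input the specification didn't anticipate -- NaN, a sensor
// reading far out of range, a case nobody thought about -- falls
// through to one predefined safe reaction instead of whatever the
// code happens to do.

#[derive(Debug, PartialEq)]
enum Command {
    HeatOn,
    HeatOff,
    SafeShutdown, // the single defined failure reaction
}

fn decide(temp_c: f64) -> Command {
    match temp_c {
        // The explicitly specified operating envelope:
        t if t.is_finite() && (0.0..18.0).contains(&t) => Command::HeatOn,
        t if t.is_finite() && (18.0..90.0).contains(&t) => Command::HeatOff,
        // Everything else takes the defined safe path.
        _ => Command::SafeShutdown,
    }
}

fn main() {
    println!("{:?}", decide(10.0));     // HeatOn
    println!("{:?}", decide(f64::NAN)); // SafeShutdown
}
```

The point of the pattern is that a corner case you failed to specify still lands in `SafeShutdown` rather than in arbitrary behavior, which is what lets the system be called "safe" even when it isn't doing what you would have wanted in hindsight.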
Checking that the system does what is specified is called "verification" and is the domain of functional safety. Checking that this is what you actually meant (that it actually solves the user's problem) is called "validation"; for vehicles in particular, this is covered by SOTIF [1].
What if there is a bug in those well-specified semantics?
When that fails, usually one would try another journal or two, and after that it's usually some manner of blog posting and social media.
The disagreement here is rooted in the empirical worldview. Empirically, "rarely" and "never" cannot be (reliably) distinguished, and so adherents of this worldview treat as equivalent claims whose whole point is that distinction.