> That's not a substantiated claim.
The article we are commenting on substantiates it with several actual examples.
> interpreters
In what world do interpreters require unsafe code? A naive interpreter that recursively descends an AST doesn't need it, and a bytecode interpreter doesn't need it either. You'll probably need it if you want to make a fast GC, but that does not mean your entire codebase has to be unsafe.
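To make that concrete, here is a minimal sketch of a recursive AST interpreter in entirely safe Rust. The `Expr` type and `eval` function are illustrative names I'm choosing for the example, not from any real project; the point is simply that nothing here needs `unsafe`.

```rust
// A tiny expression AST plus a recursive-descent evaluator.
// No `unsafe` anywhere: ownership via `Box` handles the tree.
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Mul(a, b) => eval(a) * eval(b),
    }
}

fn main() {
    // (2 + 3) * 4
    let e = Expr::Mul(
        Box::new(Expr::Add(Box::new(Expr::Num(2)), Box::new(Expr::Num(3)))),
        Box::new(Expr::Num(4)),
    );
    println!("{}", eval(&e)); // prints 20
}
```

A bytecode interpreter looks much the same: a `Vec` of instructions, a `Vec` used as a stack, and safe indexing throughout.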
> interfacing with external code
This is one reason unsafe exists, yes. You are supposed to hide the unsafe parts behind a safe interface. For example, Rust unavoidably has to deal with external code to do I/O - yet, the exposed std::fs interface is safe. This is a well established doctrine in the Rust community, and at least one prominent project has received hot hell for ignoring it.
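The doctrine looks like this in miniature (a toy example I'm constructing, not from std): the `unsafe` block is confined to one small function, the invariant it relies on is established immediately before it, and callers only ever see a safe signature.

```rust
// Safe public API wrapping a contained `unsafe` block.
// The SAFETY comment documents the invariant the block depends on.
fn first_word(s: &str) -> &str {
    let end = s.find(' ').unwrap_or(s.len());
    // SAFETY: `end` is either a byte index returned by `find` on `s`
    // (hence a valid char boundary in bounds) or `s.len()` itself.
    unsafe { s.get_unchecked(..end) }
}

fn main() {
    assert_eq!(first_word("hello world"), "hello");
    assert_eq!(first_word("single"), "single");
}
```

The same shape scales up to FFI: the `extern` calls live in one module, the invariants are checked at the boundary, and the rest of the codebase never touches `unsafe`.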
And, again, the portions of code that are unsafe in a Rust codebase - even when required - are supposed to be minimal, well contained, and well tested. Running a suite of tests to check a small amount of code under Miri is not prohibitive at all. If someone is going to insist on using unsafe across their codebase then, yes, they are far better served by using a language that is unsafe to begin with.
I have done embedded Rust, and even there I have largely avoided unsafe code (the three unsafe lines I was ever forced to write happened to be in that embedded work).
> Rust's safety overhead has real trade-offs [...]
I never claimed otherwise. Those trade-offs have a purpose: fewer degrees of freedom result in higher degrees of certainty. Even Rust has too many degrees of freedom[1], but we don't sweep that under the rug, deflect it, or outright lie about the situation.
The Rust zeitgeist largely agrees with your opinion (or rather: Andrew's opinion) of unsafe Rust, in a very oblique way: it's shit, and we don't like using it. But it is certainly not an accurate summary of Rust as a whole.
Leveraging unsafe Rust against Rust as a whole is a dishonest line of thinking and I'm not going to engage with it further.
[1]: https://github.com/Speykious/cve-rs