I have no idea what your definition of encapsulation is, but mine is not this.
It's really only encapsulated in the sense that if you have a finite and small set of unsafe blocks, you can audit them more easily and be pretty sure your memory safety bugs are in there. That reality doesn't hold much anymore because of how much unsafe is used, and since you have to audit all of it, whether it comes from a library or not, claiming encapsulation is less useful than one thinks.
I do agree in theory that unsafe encapsulation was supposed to be a thing, but I think it's crazy at this point not to admit that unsafe blocks turned out to have much more global effects than people expected, in many more cases, and are used more readily than expected.
Saying "scaling reasoning" also implies someone reasoned about it, or can reason about it.
But the practical problem is the same in both cases - someone got the reasoning wrong and nothing flagged it.
Wanna go search GitHub for how many super popular libraries using unsafe had global correctness issues due to local unsafe blocks that a human reasoned incorrectly about, but something like Miri found? Most of that unsafety that turned out to be buggy was also done for (unnecessary) performance reasons.
What you are saying is just something people tell themselves to make them feel okay about using unsafe all over the place.
If you want global correctness, something has to verify it, ideally not-human.
In the end, the thing C lacks is tools like Miri that can be used practically with low false positives, not "encapsulation" of unsafe code, which is trivially easy to perform in C.
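To make the failure mode concrete, here's a made-up miniature of the bug class I mean (my sketch, not from any particular crate): an unsafe block whose local "SAFETY" reasoning is off by one. It compiles cleanly, usually appears to work, and no human review process reliably catches it, but running the test suite under `cargo +nightly miri test` flags the out-of-bounds read immediately.

```rust
// Hypothetical example of a locally-"reasoned-about" unsafe block
// that is wrong by one. Reading past the slice is UB; the compiler
// accepts it, Miri catches it.
fn sum_unchecked(xs: &[u8]) -> u32 {
    let mut total = 0u32;
    for i in 0..=xs.len() { // off-by-one: should be `0..xs.len()`
        // SAFETY (wrong): "i is always in bounds" -- not at i == len.
        total += unsafe { *xs.get_unchecked(i) } as u32;
    }
    total
}

// The safe version costs essentially nothing here; the bounds checks
// vanish under iteration.
fn sum_safe(xs: &[u8]) -> u32 {
    xs.iter().map(|&b| b as u32).sum()
}
```

The point being: nothing in the language flagged `sum_unchecked`. A tool did.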
Let's not kid ourselves here and end up building an ecosystem that is just as bad as the C one, but our egos refuse to allow us to admit it. We should instead admit our problems and try to improve.
Unsafe also has legitimate use cases in Rust, for sure - but most unsafe code I look at does not need to exist, and is not better than unsafe C.
I'll give you an example: There are entire popular embedded bluetooth stacks in rust using unsafe global mutable variables and raw pointers and ..., across threads, for everything.
This is not better than the C equivalent - in fact it's worse, because users think it is safe and it's very not.
At least nobody thinks the C version is safe. It will often therefore be shoved in a binary that is highly sandboxed/restricted/etc.
It would be one thing if this was in the process of being ported/translated from C. But it's not.
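For the avoidance of doubt, here's roughly the shape I'm describing - a hypothetical sketch I wrote for illustration, not the actual stack's code: a global mutable table written through raw pointers behind a safe-looking function, with nothing stopping a second thread or interrupt handler from racing the write. That's a data race, i.e. UB, and the compiler can't check the "SAFETY" claim at all.

```rust
use std::ptr::addr_of_mut;
use std::sync::Mutex;

// Global mutable state touched via raw pointers -- the pattern
// being criticized. (Hypothetical names.)
static mut CONN_TABLE: [u8; 4] = [0; 4];

fn update_conn(idx: usize, val: u8) {
    // "SAFETY": trust me. Concurrent callers make this a data race (UB).
    unsafe { (*addr_of_mut!(CONN_TABLE))[idx] = val; }
}

// The safe equivalent is one line and actually synchronized:
static CONN_TABLE_SAFE: Mutex<[u8; 4]> = Mutex::new([0; 4]);
```

The safe version has been a one-liner since `Mutex::new` became `const` (Rust 1.63), so "performance" isn't much of an excuse for the first one in most of these stacks.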
Using intrinsics that require alignment while the API was still being worked on - probably a reasonable use of unsafe (though it's still easy to cause global problems like buffer overflows if you screw up the alignment)
The bluetooth example - unreasonable.
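To illustrate the alignment point with a minimal example of my own (not any particular crate's code): an unsafe aligned read, the kind an alignment-requiring intrinsic does for you. If the offset isn't actually aligned, the read is UB even when it happens to "work" on x86 - and the safe alternative is right there.

```rust
// Unsafe aligned read: sound only because bounds and alignment are
// checked first. Skip those checks and you have UB.
fn read_u32_aligned(buf: &[u8], offset: usize) -> u32 {
    let ptr = buf[offset..offset + 4].as_ptr() as *const u32;
    assert_eq!(ptr as usize % std::mem::align_of::<u32>(), 0, "misaligned");
    // SAFETY: bounds checked by the slice above, alignment asserted,
    // and u32 has no invalid bit patterns.
    unsafe { ptr.read() }
}

// The safe, alignment-agnostic version; usually compiles to the
// same unaligned load:
fn read_u32_safe(buf: &[u8], offset: usize) -> u32 {
    u32::from_ne_bytes(buf[offset..offset + 4].try_into().unwrap())
}
```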
The `memchr` crate, for example, has an entirely safe API. Nobody needs to use `unsafe` to use any part of it. But its internals have `unsafe` littered everywhere. Could the crate have bugs that result in UB due to a particular use of the `memchr` API? Yes! Doesn't that violate encapsulation? No! A bug inside an encapsulated boundary does not violate the very idea of encapsulation itself.
Encapsulation is about blame. It means that if `memchr` exposes a safe API, and if you use `memchr` and you get UB as a result of some `unsafe` code inside of `memchr`, then that means the problem is inside of `memchr`. The problem is definitively not with the caller using the library. That is, they aren't "holding it wrong."
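Here's a toy version of that shape - my sketch, to be clear, not `memchr`'s actual internals: the public API is entirely safe, and the raw-pointer loop is an implementation detail. If this function ever causes UB, the bug is in here, not in the caller.

```rust
/// Safe public API over unsafe internals. No caller can misuse this
/// in a way that makes the `unsafe` block their fault.
pub fn find_byte(needle: u8, haystack: &[u8]) -> Option<usize> {
    let start = haystack.as_ptr();
    // SAFETY: `ptr` only ever ranges over `haystack`'s own bytes, so
    // every read and the final `offset_from` stay within one allocation.
    unsafe {
        let end = start.add(haystack.len());
        let mut ptr = start;
        while ptr < end {
            if *ptr == needle {
                return Some(ptr.offset_from(start) as usize);
            }
            ptr = ptr.add(1);
        }
    }
    None
}
```

If there's a bug in that loop, you audit this function. You don't audit every caller. That's the encapsulation claim, and a bug inside it doesn't refute it.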
I'm surprised that someone with as much experience as you is missing this nuance. How many times have you run into a C library API that has UB, you report the bug and the maintainer says, "sorry bro, but you're holding that shit wrong, your fault." In Rust, the only way that ought (very specifically using ought and not is) to be true is if the API is tagged with `unsafe`.
Now, there are all sorts of caveats that don't change the overall point. "totally safe transmute" being an obvious demonstration of one of them[1] by fiddling with `/proc/self/mem`. And of course, Rust does have soundness bugs. But neither of these things change the fundamental idea of encapsulation.
And yes, one obvious shortcoming of this approach is that... well... people don't have to follow it! People can lie! I can expose a safe API, you can get UB and I can reject blame and say, "well you're holding it wrong." And thus, we're mostly back into how languages like C deal with these sorts of things. And that is indeed a bummer. And there are for sure examples of that in the ecosystem. But the glaring thing you've left out of your analysis is all of the crates that don't lie and specifically set out to provide a sound API.
The great thing about progress is that we don't have to be perfect. I'm really disappointed that you seem to be missing the forest for the trees here.
[1]: https://github.com/ben0x539/totally-safe-transmute/blob/main...
Is it? I've written hundreds of thousands of lines of production Rust, and I've only sparingly used unsafe. It's more common in some domains than others, but the observed trend I've seen is for people to aggressively encapsulate unsafe code.
Unsafe Rust is quite difficult to write correctly. (The &mut provenance rules are a bit scary!) But once a safe abstraction has been built around it and the unsafe code has passed Miri, in practice I've seen people be able to not worry about it any more.
By the way I maintain cargo-nextest, and we've added support for Miri to make its runs many times faster [1]. So I'm doing my part here!
Bad C programmers, though? Their stuff is more dangerous, they don't know when it is, they don't call it out, and they should probably stick to Rust.
I take a hard line on this stuff because we can either keep repeating the fundamental mistake of believing things like "willpower" to write correct code are real, or we can move on and adopt better tooling.
People can write memory safe code, just not 100% of the time.