Adding features in particular is a breeze: the compiler automatically tracks down for you every place that still relies on the old set of traits.
Tooling is still newer, though, and needs polish. Generics can be interesting to handle at times, and the language is still missing some related features, specialization in particular.
Basic concurrency handling is also quite different in Rust than in other languages, but usually safer as a result.
You won't understand it until you refactor some Rust programs.
Bunny summed it up rather well. He said that in most languages, when you pull on some thread, you end up disappearing into a knot, and your changes just create a bigger knot. In Rust, when you pull on a thread, the language tells you where it leads. Creating a bigger knot generally leads to compile errors.
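A toy sketch of what "the language tells you where it leads" means in practice (the `Shape` enum here is made up for illustration, not from the discussion):

```rust
// A small enum and an exhaustive match over it.
#[derive(Debug)]
enum Shape {
    Circle(f64),
    Square(f64),
}

fn area(s: &Shape) -> f64 {
    // No catch-all arm: if you refactored Shape to add a Triangle
    // variant, this match (and every other exhaustive match on Shape
    // in the codebase) would stop compiling, pointing you at exactly
    // the places that still assume the old set of variants.
    match s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Square(side) => side * side,
    }
}

fn main() {
    println!("{}", area(&Shape::Square(3.0))); // 9
}
```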
Actually he didn't say that, but I can't find the quote. I hope it was something like that. Nonetheless he was 100% spot on. That complexity you bemoan about the language is certainly there - but it's different to what you have experienced before.
In most languages, complex features tend to lead to complex code. That's what made me give up on Python in the end. When you start learning Python, it seems a delightfully simple yet powerful language. But then you discover metaclasses, monkey patching, and decorators, which all seem like powerful and useful tools, and you use them to do cool things. I twisted Python's syntax into grammar productions, for example, so you could write normal-looking Python code that got turned into an LR(1) parser. Then you discover other people's code that uses those features to produce some other cute syntax, and it has a bug, and when you look closely your brain explodes.
As you say, C doesn't have that problem, because it's such a simple language. C++ does have that problem, because it's a very complex language. I'm guessing you are making a deduction from those two examples that complex languages lead to hard-to-understand code. But Rust is the counterexample. Rust's complexity forces you to write simple code. Turns out it's the complexity of the code that matters, not the complexity of the language.
In C, you can make partial changes and accept a temporary inconsistency. This gives you a lot of flexibility that I find helpful.
Yes it limits refactoring. But all languages provide syntactic and semantic constraints you must operate within. You can't just add line noise to a C program and expect it to compile.
So your complaint isn't that there are limits, it's that there are fewer of them in C, so it's easier to make changes in C without putting too much thought into it. That is absolutely correct. It's also true that it's far easier to introduce a bug in C code when you refactor than it is in Rust, and that's because it's harder to write buggy code in Rust that gets past the compiler.
Perhaps an example. Consider:
inline char foo(uint8_t i, char a[]) { return *(a + i + 1); }
What does this code do? In C or C++, it's impossible to answer. If i overflows, it's UB. If you go past the end of the array, it's UB. Since it's inlined, there is no way to know if either could happen. If the compiler can prove some particular instance is UB, it can do whatever it damned well pleases, without informing the poor programmer. There are lots of worse examples, particularly in C++. In Rust, it's entirely predictable what happens for any given input, as Rust forbids UB in safe code. But in order to pull that off, Rust forces you to rewrite the above function, perhaps into something like this:
fn foo(i: u8, a: &[u8]) -> u8 { a[(i as usize) + 1] }
or: fn foo(i: u8, a: &str) -> char { a.chars().nth((i as usize) + 1).unwrap() }
or many other variations depending on what you actually need to do. Notice, for example, that you were forced to say whether you're happy with incrementing 255 to 256. If you weren't, you would write it as (i + 1) as usize instead, keeping the arithmetic in u8. Yes, it's more mental effort. In return, you don't get your arse handed to you on a platter because you recompiled with a newer, smarter version of the compiler that noticed you violated some language rule only a language lawyer would know, and took advantage of it.
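For what it's worth, Rust also lets you spell out the overflow policy explicitly. A small sketch of the two choices above (the function names here are made up for illustration):

```rust
// Happy with 255 -> 256: widen to usize before adding.
fn foo_wide(i: u8, a: &[u8]) -> u8 {
    a[(i as usize) + 1]
}

// Not happy with overflow: make the failure case explicit in the type.
fn foo_checked(i: u8, a: &[u8]) -> Option<u8> {
    let idx = i.checked_add(1)? as usize; // None if 255 + 1 overflows u8
    a.get(idx).copied()                   // None if idx is out of bounds
}

fn main() {
    let a = [10u8, 20, 30];
    println!("{}", foo_wide(1, &a));        // 30
    println!("{:?}", foo_checked(255, &a)); // None: 255 + 1 overflows u8
}
```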
But it does! To quote my top-level comment:
> What about race conditions, null pointers indirectly propagated into functions that don't expect null, aliased pointers indirectly propagated into `restrict` functions, and the other non-local UB causes?
In other words: you set some pointer to NULL, this is OK in that part of your program, but then the value travels across layers, you've skipped a NULL check somewhere in one of those layers, NULL crosses that boundary and causes UB in a function that doesn't expect NULL. And then that UB itself also manifests in weird non-local effects!
Rust fixes this by making nullability (and many other things, such as thread-safety) an explicit type property that's visible and force-checked on every layer.
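A minimal sketch of what "visible and force-checked on every layer" looks like (`find_user` and `greet` are made-up names for illustration):

```rust
// Nullability lives in the type: the caller can see it and cannot ignore it.
fn find_user(id: u32) -> Option<&'static str> {
    if id == 1 { Some("alice") } else { None }
}

// &str is never null by construction, so this function cannot
// receive a "null" that sneaked across several layers.
fn greet(name: &str) -> String {
    format!("hello, {}", name)
}

fn main() {
    // The compiler forces the None check right here; you cannot
    // accidentally skip it and pass a maybe-null value into greet().
    match find_user(2) {
        Some(name) => println!("{}", greet(name)),
        None => println!("no such user"),
    }
}
```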
Although, I agree that things like macros and trait resolution ("overloading") can sometimes be hard to reason about. But this is offset by the fact that they are still deterministic and knowable (albeit complex).
> in fact it helps because it allows compilers to turn it into a trap without requiring it on weak platforms
The "shared xor mutable" rule in Rust also helps the compiler a lot. It basically allows it to automatically insert `restrict` everywhere. The resulting IR is easier to auto-vectorize, and you don't need to micro-optimize so often (although sometimes you do, when it comes to eliminating bounds checks or stack copies).
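A tiny sketch of why the rule is equivalent to `restrict` (the function is made up for illustration):

```rust
// &mut guarantees exclusivity: dst and src cannot alias, which is the
// property `restrict` merely promises in C. The compiler may therefore
// keep *src in a register across the writes to *dst.
fn add_twice(dst: &mut i32, src: &i32) {
    *dst += *src;
    *dst += *src; // *src cannot have changed: no aliasing is possible
}

fn main() {
    let mut x = 1;
    let y = 10;
    add_twice(&mut x, &y);
    println!("{}", x); // 21

    // Unlike restrict, the guarantee is checked: this would not compile,
    // because z cannot be borrowed shared and mutable at the same time.
    // let mut z = 1;
    // add_twice(&mut z, &z);
}
```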
> Restrict is certainly dangerous, but also rarely used and a clear warning sign, compare it to "unsafe".
It's NOT a clear warning sign, compared to `unsafe`. To call an unsafe function, the caller needs to explicitly enter an `unsafe` block. But calling a `restrict` function looks just like any normal function call. It's easy to miss in a code review or when upgrading the library that provides the function. That's the problem with C and C++, really. The `unsafe` distinction is too useful to omit.
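For concreteness, here is what the opt-in looks like at the call site (`read_raw` is a made-up function for illustration):

```rust
/// # Safety
/// `p` must be non-null, aligned, and point to a valid i32.
unsafe fn read_raw(p: *const i32) -> i32 {
    unsafe { *p }
}

fn main() {
    let x = 42;
    // Without this `unsafe` block the call does not compile, so the
    // hazard stays visible in every code review that touches this line.
    // A call to a C function taking restrict pointers has no such marker.
    let v = unsafe { read_raw(&x) };
    println!("{}", v); // 42
}
```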
This happens a lot in discussion about programming complexity. What you are doing is changing the original problem to a much simpler one.
Consider a parsing function parse(string) -> Option<Object>
This is the original problem, "Write a parsing function that may or may not return Object"
What a lot of people do is they sidetrack this problem and solve a much "simpler problem". They instead write parse(string) -> Object
Which "appears" to be simpler, but when you probe further, they handwave the "Option" part away with "well, it just crashes and dies".
This is the same problem with exceptions, a function "appears" to be simple: parse(string) -> Object but you don't see the myriads of exceptions that will get thrown by the function.
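The contrast between the two signatures can be sketched like this (the `parse` here is a stand-in that just reads an integer, not a real parser):

```rust
// The original problem: failure is part of the signature.
fn parse(s: &str) -> Option<i64> {
    s.trim().parse().ok()
}

fn main() {
    // The caller is forced to say what "no Object" means here...
    match parse("42") {
        Some(n) => println!("got {}", n),
        None => println!("not a number"),
    }

    // ...whereas the "simpler" version just hides the crash:
    // fn parse_or_die(s: &str) -> i64 { s.trim().parse().unwrap() }
}
```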
This is a line of thinking I used to see commonly when dynamic typing was all the rage. I think the difference comes from people who primarily work on projects where they are the sole engineer vs ones where they work with n+1 other engineers.
"just add assertions" only works if you can also sit on the shoulder of everyone else who is touching the code, otherwise all it takes is for someone to come back from vacation, missing the refactor, to push some code that causes a NULL pointer dereference in an esoteric branch in a month. I'd rather the compiler just catch it.
Furthermore, expressive type systems are about communication: the contracts between functions. Your CPU doesn't care about types - types are for humans. IMO you have simply moved the complexity from the language into my brain.
Then we have the functions that might be re-entrant or not, in the presence of signals, threads,...