The second half of the sentence doesn't follow from the first. Take everyone's favorite example, signed integer overflow: all you have to do to avoid UB on signed integer overflow is check for overflow before doing the operation (and C23's <stdckdint.h> finally adds checked-arithmetic macros to do that for you).
Taking a step back, the fundamental thing about UB is that it is very nearly always a bug in your code (and this includes especially integer overflow!). Even if you gave well-defined semantics to UB, the semantics you'd give would very rarely make the program not buggy. Complaining that we can't prove programs free of UB is tantamount to complaining that we can't prove programs free of bugs.
It turns out that UB is actually extremely helpful for tools that try to help programmers find bugs in their code. Since UB is automatically a bug, any tool that finds UB knows it has found a bug; if you give the behavior well-defined semantics instead, it's a lot trickier to assert that it's a bug. As a real-world example, the infamous buffer overflow vulnerability Heartbleed stymied most (all?) static analyzers for the simple reason that, due to how OpenSSL did memory management, it wasn't actually undefined behavior by C's definition. Unsigned integer overflow also falls into this bucket--it's very hard to distinguish intentional cases of unsigned integer overflow (e.g., hashing algorithms) from unintentional cases (e.g., calculating buffer sizes).
I much prefer Rust's approach to arithmetic, where overflow with plain arithmetic operators is defined as a bug, and panics on debug-enabled builds, plus special operations in the standard library like wrapping_add and saturating_add for the special cases where overflow is expected.
All you have to do is add a check for overflow _that the compiler will not throw away because "UB won't happen"_. The very thing you want to avoid makes avoiding it very hard, and lots of bugs have resulted from compilers "optimizing" away such overflow checks.
…making your code practically unreadable, since you have to write ckd_add(ckd_add(ckd_mul(a,a),ckd_mul(ckd_mul(2,a),b)),ckd_mul(b,b)) instead of a * a + 2 * a * b + b * b.
This has always been the case. Standard C has always operated with the possibility that addition can overflow. The programmer or library writer is responsible for checking that the types used are large enough, and if you want to be perfectly sure, you need to check for overflow. Making overflow UB has not changed the nature of the issue.
> is made harder because C doesn't define the size of the default integer types
They correctly made this implementation-defined. But C has offered exact-width integer types (int8_t, int32_t, and so on, in <stdint.h>) since C99 if you want to be sure.
Honestly, I don't think so; as computers get more powerful and more of the world relies on their correct functioning, I feel the arguments for UB become increasingly difficult to justify.
> Warning: The following list is not exhaustive. There is no formal model of Rust's semantics for what is and is not allowed in unsafe code, so there may be more behavior considered unsafe. The following list is just what we know for sure is undefined behavior. Please read the Rustonomicon before writing unsafe code.
After the warning was a list of many of the same types of things that are undefined behaviour in C. In addition, there’s a bunch more undefined behaviour related to improper usage of the unsafe keyword.
So I don’t think you get a free lunch with Rust here. What you get is a “safe” playground if you stay within the guard rails and avoid using the unsafe keyword. But then you are limited to writing programs which can be expressed in safe Rust, a proper subset of all programs you might want to write.
Furthermore, the lack of a formal specification for Rust is one area where it lags behind C, a standardized language. All of the undefined behaviour in C is decreed and documented by the standard, having been decided by the committee. Rust, on the other hand, may exhibit weird and unpredictable behaviour that you just have to debug yourself, without even knowing whether what you're seeing is a compiler bug.
I often write programs that have unsafe code. However, the unsafe code is never more than 100 lines, which means I have a very small amount of code to reason about — Rust users expect (and you as the author of the unsafe code have to enforce) that it should be impossible to cause UB from safe code, so my “safe interface” to my unsafe code ensures it can't cause UB, no matter how it's called.
One problem with Rust is that when you mess up, it generally panics — I think that's better than buffer overflows and the like, but it's still not a good user experience.
This means there is a very small amount of code I have to really scrutinize, while in C or C++ that's basically any place x[i] appears (regardless of whether x is a pointer or a std::vector).
You can of course write safe C code, people do, but it’s hard, and it only takes one slip up anywhere in your program to blow it.
Your claim that the C Standard lists all undefined behavior is actually false. The Standard enumerates only the explicit undefined behaviors; anything it simply fails to define at all is implicitly undefined, and that implicit list appears nowhere in the document. There have been efforts to compile just such a list, but it's an incredibly difficult task.