> For example checking for signed overflow must be done carefully:
Right, but we're talking about a simple bounds check. There should be no need for any arithmetic, just comparison.
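To make the contrast concrete, here's a minimal sketch (function names are mine, purely illustrative): an array bounds check is a single unsigned comparison, while guarding a signed addition really does take care, because the overflow must be detected *before* the add (signed overflow is undefined behaviour in C):

```c
#include <limits.h>
#include <stddef.h>

/* A bounds check: one comparison, no arithmetic at all. */
int in_bounds(size_t index, size_t len) {
    return index < len;
}

/* Checking signed addition for overflow must be done without
 * performing the (possibly undefined) addition itself. */
int add_would_overflow(int a, int b) {
    if (b > 0 && a > INT_MAX - b) return 1;
    if (b < 0 && a < INT_MIN - b) return 1;
    return 0;
}
```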
> "Design philosophy"...oh please! C was designed for transistor- and memory- scarce microcomputers.
Right. Hence its design philosophy.
> Nowadays there is a de facto supercomputer in every phone and runtime bounds checks are cheap.
Cheap, but perhaps not cheap enough to dismiss entirely. Bounds checking costs a few percent of performance [0], enough to put some people off in certain domains, such as the kernel.
It's a pity C makes it difficult to automate just about any kind of check. Checking whether a pointer overruns a buffer returned by malloc, for instance, requires quite a bit of cleverness, as the system has to track the size of the allocated block.
You have to rely on optional compiler features, elaborate static-analysis tools (often proprietary and expensive), or dynamic-analysis tools like Valgrind. Ada, on the other hand, enables all sorts of runtime checks by default, but it's easy to switch them all off if you're sure.
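As a rough sketch of the "cleverness" involved (names are hypothetical, not any real library's API): one approach is to stash the block size in a header just before the payload, so a later check can recover it. Real tools like ASan and Valgrind do something far more sophisticated with shadow memory, but the idea is similar:

```c
#include <stdlib.h>

/* Hypothetical checked allocator: record the payload size in a
 * hidden header so bounds can be verified later. */
void *checked_malloc(size_t n) {
    size_t *p = malloc(sizeof(size_t) + n);
    if (!p) return NULL;
    *p = n;            /* record payload size */
    return p + 1;      /* hand out the payload */
}

size_t checked_size(void *payload) {
    return ((size_t *)payload)[-1];
}

int checked_in_bounds(void *payload, size_t offset) {
    return offset < checked_size(payload);
}

void checked_free(void *payload) {
    free((size_t *)payload - 1);
}
```

Note the cost: every allocation grows by a word, and every access pays a comparison — which is exactly the overhead trade-off discussed above.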
> CPU to know the size of memory chunk pointed to could enable optimization which would make the code actually faster (not even talking about security benefits)
What kind of optimisation do you have in mind? Pre-caching?
> But you C programmers insist tooth and nail against that...
'Fat pointers' of this sort have been tried with the C language [1], but I can't see the committee adding them to the standard. Part of C's virtue is that it's extremely slow-moving.
I'm not advocating continued widespread use of C though. I hope safe-but-fast languages like Rust do well. We all pay a price for the problems associated with C and, perhaps to a lesser extent, C++. For what it's worth I haven't written serious C or C++ code for a long time.
[0] https://doi.org/10.1145/1294325.1294343 (an old source, admittedly)
[1] http://libcello.org/learn/a-fat-pointer-library