It means eliminating undefined behaviour and unplanned interactions between distant parts of the program.
Don't get me wrong: less undefined behaviour is better. But drawing a binary line between some and none makes for a convenient talking point; it isn't necessarily the sweet spot in the complicated, context-dependent series of tradeoffs that is software correctness.
It might seem as though incrementing a signed integer past its maximum can't be as problematic as a use-after-free, even though both are undefined behaviour. But nah: in practice, with real C++ compilers today, both can result in remote code execution.
There is a place for unspecified results. For example, leaving it unspecified whether a particular arithmetic operation rounds up or down may loosen things up enough that much faster machine code is generated, and the numbers are still broadly correct. But that is not what undefined behaviour gives you.
Furthermore, an unbounded blast radius isn't itself the direct problem. A bug that with some probability crashes your program and deletes your disk is far less dangerous than a bug that lets a remote attacker easily steal all your secrets. Different kinds of UB differ on that front, too.
And again, virtually no program is provably free of UB. A Java program, for example, still interacts with an OS or with native libraries that might themselves suffer from UB. So clearly we already tolerate some probability of UB, and clearly we do not think that eliminating every possibility of it is worth any price.
When a program is just code on the screen, it is a purely mathematical object, and then it's easy to describe UB - the loss of all program meaning - as the most catastrophic outcome possible. But software correctness goes beyond the relatively simple world of programming language semantics: it has to consider what happens when a program is running, at which point it is no longer a mathematical object but a physical one. If a remote attacker steals all our secrets, we don't care whether it was the result of a bug in the program itself (UB-related or otherwise), a bug in other software the program interacts with, a fault or weakness in the hardware, or human operator error. The probability of each of these is never zero, and we have to balance the cost of addressing each of them.
To give an example in the context of Carbon: we know that old code tends to suffer from fewer severe bugs than new code. So if we want to reduce the probability of bugs, it may be more worthwhile to invest - say, in terms of language complexity budget - in interop with existing C++ code than in eliminating every possible kind of UB, including kinds that are unlikely to appear in the first place, sneak past testing, and end up as an easily exploitable vulnerability.