We've so far found that Claude Code is fine as a kind of better Coverity for uncovering memory leaks and similar bugs. You have to check its work very carefully because about 1 time in 5 it just gets things wrong. It's great that it gets things right 4 times in 5 and produces natural code that fits the style of the existing project, but it's nothing earth-shattering. We've had tools to detect memory leaks before.
We had someone attempt to translate one of our existing projects into Rust, and the result was wrong at a fundamental level. It did compile and pass its own tests, so someone with no idea about the problem space might even have accepted the work.
LLMs also make mistakes at a much lower level than the one-pagers you can control with planning mode (which I use all the time, btw). And anyway, they throw the plan out of the window the moment their attempted solutions fail during execution, for example when a generated test starts failing.
Btw, changing the plan after it's generated is painful. More often than not, when I decline it with comments, it generates a worse version: it either drops things from the previous plan that I never mentioned, or changes the architecture entirely for the worse. In my experience, it's better to restart the whole thing with a more precise prompt.
The way it's going, the AI hyperscalers are buying such a big portion of the world's hardware that it may very well happen that tomorrow's machines get slower per dollar of purchase value.