> Without proper reasoning you can get some "heuristic"
Right, but the question is whether this is good enough, and what counts as "proper". A lot of what we call proper reasoning is still quite informal, and even mathematics is usually not formal enough to be converted directly into a formal language like Coq.
So this is a deep question: is talking reasoning? Humans talk (out loud, or in their heads). Are they then reasoning? Sure, some of what happens internally is not just self-talk, but the thought experiment goes: if the problem is not completely ineffable, then (a bit like Borges' library) there exists some 1000-word text which is the best possible reasoned, witty, English-language solution to the problem. In principle, an LLM can generate that.
If your goal is a reductio (i.e., my statement must be false since it implies models should write code for every problem), then I disagree: while the ability to solve these problems might be a requirement to be deemed "an intelligence", many other problems that require intelligence don't require the ability to solve them.