> Let me calculate that. 729,278,429 × 2,969,842,939 = 2,165,878,555,365,498,631
Real answer is: https://www.wolframalpha.com/input?i=729278429*2969842939
> 2 165 842 392 930 662 831
Your example seems short enough not to pose a problem.
Long multiplication is a trivial form of reasoning taught at the elementary level. Moreover, the LLM isn't doing things "in its head": the headline feature of GPT-style LLMs is attention over all previous tokens, so all of its "thoughts" are on paper. That was Opus with extended reasoning; it had every opportunity to get it right, but didn't. There are people who can quickly multiply such numbers in their heads (I am not one of them).
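For reference, the grade-school algorithm really is that mechanical. A minimal sketch in Python (the thread's example numbers; Python's built-in bignum product is used only as a cross-check, not as the algorithm):

```python
def long_multiply(a: int, b: int) -> int:
    """Grade-school long multiplication: multiply a by each digit of b,
    shifting each partial product by that digit's place value, then sum."""
    total = 0
    for place, digit_char in enumerate(reversed(str(b))):
        digit = int(digit_char)          # single-digit factor, as in the taught algorithm
        total += a * digit * 10 ** place  # shifted partial product
    return total

a, b = 729_278_429, 2_969_842_939
result = long_multiply(a, b)
assert result == a * b  # cross-check against Python's exact integer arithmetic
print(result)  # 2165842392930662831
```

Nothing here requires holding intermediate state "in your head": every partial product is written down, which is exactly the affordance a transformer's context window provides.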
LLMs don't reason.
presumably one of us is wrong.
therefore, humans don't reason.
LOL, talk about special pleading. Whatever it takes to reshape the argument into one you can win, I guess...
LLMs don't reason.
Let's see you do that multiplication in your head. Then, when you fail, we'll conclude you don't reason. Sound fair?
When someone says "LLMs" today, they obviously mean software that does more than just text. If you want to be extra pedantic, you could even say LLMs by themselves can't generate text at all, since they're just model files until you add them to a "system" that makes use of those files, duh.
This is true for solving difficult novel problems as well, with the addition of tools that an agent can use to research the problem autonomously.
I asked Gemini 3 Thinking to compute the multiplication "by hand." It showed its work and checked its answer by casting out nines and then by asking Python.
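Casting out nines is just a mod-9 consistency check: the product of the factors and the claimed answer must agree modulo 9. It's a necessary but not sufficient check (it catches roughly 8 out of 9 random errors), and it does flag the wrong answer quoted upthread. A minimal sketch:

```python
def nines_check(a: int, b: int, claimed_product: int) -> bool:
    """Casting out nines: a * b and the claimed product must be
    congruent mod 9. Necessary but not sufficient for correctness."""
    # The digit sum of n is congruent to n mod 9, so n % 9 suffices.
    return (a % 9) * (b % 9) % 9 == claimed_product % 9

a, b = 729_278_429, 2_969_842_939
print(nines_check(a, b, 2_165_842_392_930_662_831))  # True: the correct product passes
print(nines_check(a, b, 2_165_878_555_365_498_631))  # False: the wrong answer upthread fails
```

Agreement mod 9 can't confirm an answer, but disagreement definitively refutes one, which is why it's a cheap self-check for hand arithmetic.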
Sonnet 4.6 with Extended Thinking on also computed it correctly with the same prompt.