Like Heptapod B. The "next word" argument is pervasive and disappointingly ridiculous. If you present an LLM with a logic puzzle and it gives you the correct answer, how did it "predict the next word"? Yes, the output came in the form of additional tokens. But if producing those tokens required logical reasoning, it's a mistake to confuse the what with the how.
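
To make the distinction concrete, here is a minimal sketch of greedy autoregressive decoding using the Hugging Face transformers library (the "gpt2" model name and the toy puzzle prompt are just stand-ins). The loop is the "what": one token emitted per iteration. But each iteration hides the "how": a full forward pass over the entire context, and it's inside that pass that any reasoning, logical or otherwise, has to happen.

```python
# A minimal sketch of greedy next-token decoding. "gpt2" and the toy
# prompt are placeholders; any causal LM would illustrate the same point.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "If all bloops are razzies and all razzies are lazzies, are all bloops lazzies?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                      # emit 20 tokens, one at a time
        logits = model(input_ids).logits     # the "how": a forward pass over the whole context
        next_id = logits[0, -1].argmax()     # the "what": a single next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Pointing at the one-token-at-a-time interface and calling it the mechanism is like pointing at a mathematician's pen strokes and concluding the proof was "just handwriting."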