When albertgoeswoof reasons about a puzzle, he models the actual actions in his head. He uses logic and visualization, not language, to arrive at the solution. He then uses language to output the solution, or to say he doesn't know if he fails.
When an LLM is presented with a problem, it searches for a solution within the language model itself. And when it can't find a real solution, there's always a match for something that merely looks like a solution, so it outputs that instead of admitting failure.
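That last point can be illustrated with a toy sketch (this is just the softmax step over candidate scores, not how any particular LLM is implemented): even when every candidate is a poor fit, the probabilities still sum to 1, so decoding always picks a "best-looking" answer; there is no built-in "I don't know" outcome.

```python
import math

def softmax(scores):
    # Turn arbitrary real-valued scores into a probability distribution.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores where every candidate fits badly
# (all scores are very low). Softmax still yields a valid
# distribution, so the decoder still emits something.
low_confidence_scores = [-9.0, -9.5, -10.0]
probs = softmax(low_confidence_scores)

print(sum(probs))   # sums to 1 (up to float error), no matter how bad the scores
print(max(probs))   # the least-bad candidate still "wins" and gets output
```

The point of the sketch: abstaining would have to be an explicit extra mechanism (a threshold, a refusal token, a verifier), because the bare sampling loop always returns a match.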