Then there was this experiment: https://thegradient.pub/othello/. TL;DR: they took a relatively simple GPT model and trained it on tokens corresponding to Othello moves until it started to play well. Then they probed the model and found activations inside the neural net that seem to correspond to the state of the board. They tested this causally by "flipping a bit" in those activations at inference time, and the model made a move consistent with the altered board state. So the model did build an inner representation of the game as part of its training, inferred purely from the move sequences it was trained on, and it uses that representation to choose moves according to the current state of the board - that sure sounds like reasoning to me. Given this, can you explain why you are so certain that there isn't some equivalent inside ChatGPT?