I’m not sure I can think, despite having a PhD in EECS. We humans prize our ability to think very highly, but it’s a nebulous concept. I believe these models will get much better at simulating “thinking” in the next few years. Soon we won’t be able to say the model isn’t thinking; we’ll move on to criticizing the quality of its responses, the way we criticize other bad ideas.
The point of the Turing test, to me, was that if a process responds in a logical way, you can’t really tell whether it is thinking or computing, and it doesn’t matter.