If you are only interested in the most superficial tests and theories—like the Turing Test—then consider psychology conquered once you’ve tricked a human with your chat bot. Game Over. And what did you learn...?
What's the counterargument? What's a less superficial test that we can use instead, one which conclusively shows that human minds aren't actually just very sophisticated LLMs? There isn't one -- this is nothing but the same Chinese room problem we've been discussing for decades. The topmost poster is simply assuming, without relying on any kind of "test" at all, that language models can't possibly understand the same way a human does, which I think is the real scientific dead end here.
The Turing Test doesn’t test humans. So you cannot use it to show any properties of humans.
Next!
> The topmost poster is simply assuming, without relying on any kind of "test" at all, that language models can't possibly understand the same way a human does, which I think is the real scientific dead end here.
Sounds unfalsifiable. So yes.
Well said. I'm gonna steal this explanation.
Also reminds me of the famous Carbonara quote: "if my grandmother had wheels, then she would be a bike" [1]
Well, it could be argued that she would be a bike. It's possible to be multiple things at once. If she had two wheels and could be ridden by other humans to a destination, she might qualify as a bike. She would also continue to be your grandmother.
If you'd like to take a crack at a helpful answer, perhaps educate us all on what it WOULD take for you to consider a NN to actually "know" something in the same way that we say a human or other sentient animal does.
That is indeed often the kind of answer that a philosophical question deserves.
> If you'd like to take a crack at a helpful answer, perhaps educate us all on what it WOULD take for you to consider a NN to actually "know" something in the same way that we say a human or other sentient animal does.
How many angels can dance on the head of a pin?
Where?
You're overreaching quite a bit here, or I think you're misinterpreting what the parent said. I interpreted it as: the difference between how we "know" something and how an LLM "knows" something might actually be smaller than some suspect. That certainly is not an "end of science".
A “scientist” looks out at his living room. My Roomba and my cat have their own lives. Who’s to say that they are not in fact the same in kind (but not degree)? Good luck with that, professor.
We could easily argue that birds are not a type of helicopter, because for helicopters we require a very specific set of flight properties: a main rotor for lift and a tail rotor to counteract the main rotor's torque, which would otherwise spin the helicopter. If a bird flew with a similar mechanism, I would argue it was a helicopter.
We don't have a 100% accurate gauge for ToM as far as we know. This paper simply uses some of the best-known tests for ToM and concludes that either ToM-like abilities have emerged in LLMs or the current tests for ToM need to be rethought.