First, your camp doesn't deal in absolutes. It doesn't claim that ChatGPT is definitely sentient; it only raises the possibility and tries to explore it further.
Second, a skeptical outlook that avoids absolutes is by far the more logical and intelligent perspective, given that we don't even know what "understanding" or "sentience" is. We can't fully define these words; we have only a fuzzy sense of what they mean. Given that, absolute statements about something we don't fully understand are fundamentally illogical.
It's a strange phenomenon how vehemently some people will deny something in absolute terms. At the very beginning of the COVID-19 pandemic, the CDC incorrectly stated that masks didn't stop the spread of COVID-19, and you saw a lot of people parroting this statement everywhere as armchair pandemic experts (including here on HN).
Despite this, some people thought about it logically: if there's a solid object on my face, even if that object has holes in it for air to pass through, the solid parts will block other solid things (like virus particles) from passing through, thereby lessening the amount of viral material I breathe in. Eventually the logic won out. I think the exact same phenomenon is happening here.
Some ML experts tried to downplay LLMs (even though they don't completely understand the phenomenon themselves), and everyone else is just parroting them, like they did with the CDC.
The fact of the matter is, nobody completely understands the internal mechanisms behind human sentience, nor does anyone understand how, or whether, ChatGPT is actually "understanding" things. How could they, when they can't even define those words themselves?