Test failed?
ChatGPT is actually a good test of which parts of theory of mind are BS.
"Theory of mind" is an ill-defined form of social intelligence, not actually a theory of how the mind works.
Suppose someone's physical appearance is such that they have a large belly. Assuming they are pregnant (if it's true) is likely to make them feel good, whereas assuming they are fat (whether it's true or not) is likely not to make them feel good.
It depends on whether there's a base assumption that the person is self-conscious and feels negatively about their size.
I certainly think it's reasonable to say that you implied someone was pregnant to make them feel good about themselves.
4: It is common courtesy to congratulate someone who is pregnant if they are very obviously pregnant.
5: Unless there are less common motives unknown to us, it is very likely that Ana was quite confident that Maria was pregnant. To congratulate someone on being pregnant when they are not is embarrassing for all involved parties, and is most commonly only done in error.
Clearly both answers 4 and 5 are wrong here.
It's annoyingly arrogant to assume people could only be impressed by having ignored that.
It's impressive to me without ignoring that at all. Matrix multiplication is apparently capable of producing poetry, jokes, insults, abstract ideas, etc. (to some level at least) - but in your mind it's a Pixar movie? Words arise from a team of angels pulling levers?
The fascination is not from thinking LLMs are as advanced as us - the fascination is from wondering how much more complicated we are than that.
But I think you are quite wrong to suggest that this technology does not exhibit some type of intelligence.
I guess it depends on how you define intelligence, but I don't see why you'd link it with consciousness, especially considering that simple algorithms are often described as "intelligent".
The funniest thing about all this hoopla about sentient AI is that there's no good way to determine what intelligence is, or sentience, or consciousness, because there's no precise definition for any of these. So one person can call an LLM sentient and another non-sentient, and both can be right, since they'd be going by their own definitions.
One thing is certain: we want to keep this thing as far away from the nuke launch codes as possible, because we kinda already know how this will end. :-(
Edit: some random thoughts added...
So late into the night,
Though the heart be still as loving,
And the moon be still as bright.
For the sword outwears its sheath,
And the soul wears out the breast,
And the heart must pause to breathe,
And love itself have rest.
Though the night was made for loving,
And the day returns too soon,
Yet we'll go no more a roving
By the light of the moon.
(Byron)
So what do you think I did here? Did I predict the next word? Was that the most likely text continuation of your comment? The least likely? Somewhere in between? Does my response imply that I'm predicting the next token in a sequence, or am I likely to be doing something different?
And if I'm wrong you'll tell me, and I'll correct my model.
Do that, ChatGPT...
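For what it's worth, here is what "predicting the next token in a sequence" means mechanically - a minimal sketch in Python with toy sizes and random weights, purely illustrative and not any real model's architecture:

    # Minimal sketch of next-token prediction: context in, probability
    # distribution over the vocabulary out. Toy sizes, random weights.
    import numpy as np

    rng = np.random.default_rng(0)

    vocab = ["the", "heart", "moon", "night", "loving", "bright", "."]
    d_model = 8

    # Hypothetical parameters: an embedding table and an output projection.
    embed = rng.normal(size=(len(vocab), d_model))
    w_out = rng.normal(size=(d_model, len(vocab)))

    def next_token_distribution(context_ids):
        # Average the context embeddings (a crude stand-in for the stacked
        # attention layers of a real model), project to vocabulary logits,
        # then softmax into a probability distribution - all just
        # matrix/vector arithmetic.
        h = embed[context_ids].mean(axis=0)
        logits = h @ w_out
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()

    context = [vocab.index(w) for w in ["the", "moon"]]
    probs = next_token_distribution(context)
    print(vocab[int(probs.argmax())], float(probs.max()))

A real LLM replaces the averaging step with many stacked attention and feed-forward layers, but the output is still just a probability distribution over the next token.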
> we use bespoke items to ascertain that it didn't see them before