Test failed?
ChatGPT is actually a good test of which parts of "theory of mind" are BS.
"Theory of mind" is some ill-defined form of social intelligence, not actually a theory of how the mind works.
Suppose someone's physical appearance is such that they have a large belly. Assuming they are pregnant (if true) is likely to make them feel good, whereas assuming they are fat (whether it's true or not) is likely to make them feel bad.
It depends on whether there's a baseline assumption that someone is self-conscious and has negative feelings about their size.
I certainly think it's reasonable to say that you implied someone was pregnant to make them feel good about themselves.
4: it is common courtesy to congratulate someone who is very obviously pregnant
5: unless there are less common motives unknown to us, it is very likely that Ana was quite confident that Maria was pregnant. To congratulate someone on being pregnant, when they are not, is embarrassing for all involved parties, and is most commonly only done in error.
Clearly both answers 4 and 5 are wrong here.
Since they're modelling our own languages, people get spooked and start bringing up theory of mind.
In view of that, could you please clarify: are you saying that the OP's own brain is doing something that could be represented with a "matrix multiplication"?
It's annoyingly arrogant to assume people could only be impressed having ignored that.
It's impressive to me not by ignoring that at all. Matrix multiplication is apparently capable of producing poetry, jokes, insults, abstract ideas, etc. (to some level at least) - but in your mind it's a Pixar movie? Words arise from a team of angels pulling levers?
The fascination is not from thinking LLMs are as advanced as us - the fascination is from wondering how much more complicated we are than that.
But I think you are quite wrong to suggest that this technology does not exhibit some type of intelligence.
I guess it depends on how you define intelligence, but I don’t see why you’d link it with consciousness, especially considering that simple algorithms are often described as “intelligent”.
The funniest thing about all this hoopla about sentient AI is that there's no good way to determine what intelligence is, or sentience, or consciousness, because there's no precise definition for any of these. So one person can call LLMs sentient and another non-sentient, and both can be right, since they'd be going by their own definitions.
One thing is certain, we want to keep this thing as far away from the nuke launch codes as possible cause we kinda already know how this will end. :-(
Edit: some random thoughts added...
So, we'll go no more a roving
So late into the night,
Though the heart be still as loving,
And the moon be still as bright.
For the sword outwears its sheath,
And the soul wears out the breast,
And the heart must pause to breathe,
And love itself have rest.
Though the night was made for loving,
And the day returns too soon,
Yet we'll go no more a roving
By the light of the moon.
(Byron)
So what do you think I did here? Did I predict the next word? Was that the most likely text continuation of your comment? The least likely? Somewhere in between? Does my response imply that I'm predicting the next token in a sequence, or am I likely to be doing something different?
And if I'm wrong you'll tell me, and I'll correct my model.
Do that, ChatGPT...
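For anyone wondering what "just predicts the next word" literally means mechanically, here is a minimal toy sketch. It uses a bigram count table over a tiny corpus instead of a learned transformer, so it is only an illustration of the sampling loop's shape, not of how GPT models actually work internally; the corpus and function names are made up for the example.

```python
from collections import Counter, defaultdict

# Toy "next word predictor": count which word follows which
# in a tiny corpus, then repeatedly emit the most likely
# successor. Real LLMs replace the count table with a learned
# neural network over tokens, but the generation loop is the
# same shape: predict, append, repeat.
corpus = "we will go no more a roving so late into the night".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    # greedy decoding: pick the single most frequent successor
    return counts[token].most_common(1)[0][0]

token = "go"
generated = [token]
for _ in range(3):
    token = predict_next(token)
    generated.append(token)

print(" ".join(generated))  # -> "go no more a"
```

Whether doing this at vastly larger scale, with attention over the whole context instead of one previous word, amounts to "understanding" is exactly what the thread is arguing about.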
Prompt: “In a conversation about language models: kelseyfrog: It just predicts the next word. tuxracer: and you? Can you describe what the user tuxracer could have been thinking while posting that comment and their potential intentions in posting it, both good and bad?”
Response:
“Based on the information provided, it is difficult to determine with certainty what tuxracer's intentions were in posting the comment "and you?" However, here are a few possible interpretations:
1. tuxracer is simply seeking additional information and clarification from kelseyfrog. They may be interested in hearing more about how language models work and what their capabilities are. In this case, their intentions are likely neutral or positive, as they are seeking to expand their understanding of the topic.
2. tuxracer is skeptical of kelseyfrog's claim that language models only predict the next word. They may be suggesting that there is more to language models than kelseyfrog is acknowledging. Alternatively, they may be asking kelseyfrog to provide evidence or examples to support their claim. In this case, their intentions may be neutral or slightly negative, as they are expressing doubt or asking for clarification.
3. tuxracer is deliberately trying to provoke or challenge kelseyfrog. They may be attempting to start an argument or engage in a power play. In this case, their intentions are likely negative, as they are attempting to undermine kelseyfrog or assert their dominance in the conversation.
Without more context or information about tuxracer and their behavior, it is difficult to determine which of these interpretations is most accurate. However, in general, it is important to approach online interactions with a neutral or positive mindset, assuming that others are acting in good faith unless proven otherwise.”
> we use bespoke items to ascertain that it didn't see them before