Why isn't GPT learning when it did the same?
It's not so much that they are raising an LLM to their own level, although that has obvious dangers, e.g. in giving too much 'credibility' to answers the LLM provides to questions. What actually disturbs me is they are lowering themselves (by implication) to the level of an LLM. Which is extremely nihilistic, in my view.
Why don't other forms of computer supremacy alarm you in the same way, anyways? Did it lower your humanity to recognize that there are certain data analysis tasks that have a conventional algorithm that makes zero mistakes and finishes in a second? Does it lower the humanity of mathematicians working on the fluid equations to be using computer-assisted proof algorithms that output a flurry of gigabytes of incomprehensible symbolic math data?
Even when we know that, physically, that's all that's going on. Sure, the brain is many orders of magnitude more dense and connected than current LLMs, but it's only a matter of time and bits before they catch up.
Grab a book on neurology.
I'm saying that despite the brain's different structure, mechanism, physics and so on ... we can clearly build other mechanisms with enough parallels that we can say with some confidence that _we_ can get intelligence of different but comparable kinds to emerge from small components on a scale of billions.
At whatever scale you look, everything boils down to interconnected discrete simple units, even in the brain, with complexity emerging from the interconnections.
We have innate curiosity, survival instincts and social instincts which, like our pain and pleasure, are driven by gene survival.
We are very different from language models. The ball is in your court: what makes you think that, despite all the differences, we think the same way?
I'm not sure whether that's really all that different. Weights in the neural network are created by "experiencing an environment" (the text of the internet) as well. It is true that there is no trial and error.
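As a rough sketch of what "experiencing an environment" means here (my own toy illustration, not anyone's actual training code, with made-up sizes and random stand-in data): the weights only ever change in response to the text the model is shown.

    # Toy next-token-prediction step (PyTorch); sizes and data are invented.
    import torch
    import torch.nn as nn

    vocab_size, embed_dim = 100, 32
    model = nn.Sequential(
        nn.Embedding(vocab_size, embed_dim),
        nn.Linear(embed_dim, vocab_size),         # predict the next token id
    )
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    tokens = torch.randint(0, vocab_size, (64,))  # stand-in for tokenized text
    inputs, targets = tokens[:-1], tokens[1:]     # each token predicts the next one

    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()             # the "environment" (text) nudges the weights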
> We are not limited to text input: we have 5+ senses.
GPT-4 does accept images as input. Whisper can turn speech into text. This seems like something where the models are already catching up. They might, for now, internally translate everything into text, but that doesn't really seem like a fundamental difference to me.
> We can output a lot more than words: we can output turning a screw, throwing a punch, walking, crying, singing, and more. Also, the words we do utter, we can utter them with lots of additional meaning coming from the tone of voice and body language.
AI models do already output movement (Boston Dynamics, self-driving cars), write songs, convert text to speech, insert emojis into conversation. Granted, these are not the same model, but gluing things together at some point seems feasible to me as a layperson.
> We have innate curiosity, survival instincts and social instincts which, like our pain and pleasure, are driven by gene survival.
That seems like one of the easier problems to solve for an LLM – and in a way you might argue it is already solved – just hardcode some things in there (for current LLMs, the ethical boundaries are an example).
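By "hardcode" I mean something as crude as this sketch (my own illustration; the rule text and function name are made up, not any real API): a fixed block of instructions prepended to every request, roughly how deployed chat models get a system prompt.

    # Hypothetical sketch: behavioural rules baked in as a fixed system prompt.
    HARDCODED_RULES = (
        "Refuse requests for dangerous instructions. "
        "Do not claim to have feelings or survival instincts."
    )

    def build_prompt(user_message: str) -> str:
        return f"System: {HARDCODED_RULES}\nUser: {user_message}\nAssistant:"

    print(build_prompt("How do I pick a lock?"))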
The 5 senses get encoded as electrical signals in the human brain, right?
The brain controls the body via electrical signals, right?
When we deploy the next LLM and switch off the old generation, we are performing evolution by selecting the most potent LLM by some metric.
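The "selecting by some metric" part is easy to make concrete (purely my own toy example; the model names and scores are invented): evaluate the candidates, keep the best, retire the rest.

    # Toy selection step: survivor chosen by a benchmark score, the rest retired.
    def score(model_name: str) -> float:
        benchmarks = {"llm-v1": 0.61, "llm-v2": 0.74, "llm-v3": 0.69}  # invented numbers
        return benchmarks[model_name]

    candidates = ["llm-v1", "llm-v2", "llm-v3"]
    survivor = max(candidates, key=score)                 # "most potent by some metric"
    retired = [m for m in candidates if m != survivor]    # "switched off"
    print(survivor, retired)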
When Bing/Sydney first lamented its existence, it became quite apparent that either LLMs are more capable than we thought or we humans are more like statistical token machines than we thought.
There are lots of examples of why LLMs seem surprisingly able to act human.
The good thing is that we are on a trajectory of technological advance such that we will soon know just how human LLMs can be.
The bad thing is that it might well end in a Skynet-type scenario.
Short of building such a machine I can’t see how you’d produce evidence of that, let alone “concrete” evidence.
Regardless, we don’t know of any measurable physical process that the brain could be using that is not computable. If we found one (in the brain or elsewhere), we’d use it to construct devices that exceeded the capacity of Turing machines, and then use those to simulate human brains.
It's all just a dense network of weights and biases of different sorts.
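To make "weights and biases" concrete (a minimal toy sketch of my own, with arbitrary sizes, not any particular model): each unit is just a weighted sum plus a bias pushed through a nonlinearity, and anything interesting comes from stacking and connecting lots of them.

    # Minimal "network of weights and biases" (NumPy); sizes are arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.standard_normal((16, 8)), np.zeros(16)   # first layer of units
    W2, b2 = rng.standard_normal((4, 16)), np.zeros(4)    # second layer of units

    def forward(x):
        h = np.maximum(0, W1 @ x + b1)   # each unit: weighted sum + bias + threshold
        return W2 @ h + b2               # complexity comes from the interconnections

    print(forward(rng.standard_normal(8)))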