Perhaps our brains are doing exactly the same, just with more sophistication?
We know how current deep learning neural networks are trained.
We know definitively that this is not how brains learn.
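For reference, the standard recipe in miniature (a sketch assuming PyTorch, with a toy model standing in for an LLM): compute a loss over a fixed dataset, backpropagate a global error signal, nudge every weight at once.

```python
# Minimal sketch of how current deep nets are trained (assumes PyTorch).
# The model and data are toy placeholders, not a real LLM.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):
    x = torch.randn(8, 16)            # a batch of inputs
    y = torch.randint(0, 4, (8,))     # matching labels
    loss = loss_fn(model(x), y)       # forward pass
    optimizer.zero_grad()
    loss.backward()                   # backpropagate error globally
    optimizer.step()                  # adjust every weight at once
```

Nothing in that loop looks like what we know of synaptic plasticity: no global error broadcast, no separation between a frozen "inference" phase and a training phase.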
Understanding requires learning. Dynamic learning. In order to experience something, an entity needs to be able to form new memories dynamically.
This does not happen anywhere in current tech. It's faked in some cases, but no, it doesn't really happen.
Ok then, I guess the case is closed.
> an entity needs to be able to form new memories dynamically.
LLMs can form new memories dynamically. Just pop some new data into the context.
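A sketch of what I mean (generate() is a hypothetical stand-in for any LLM API call, not a real library function):

```python
# Sketch of "memory via context": new facts are "remembered" by
# re-sending them with every request.
history: list[str] = []

def generate(prompt: str) -> str:
    # Placeholder for a real model call; echoes context size as a demo.
    return f"[model reply, conditioned on {len(prompt)} chars of context]"

def remember(fact: str) -> None:
    # "Store" a memory by ensuring it is prepended to every future prompt.
    history.append(fact)

def ask(question: str) -> str:
    prompt = "\n".join(history + [question])
    return generate(prompt)

remember("The user's cat is named Ada.")
print(ask("What is my cat's name?"))  # works while the fact fits in context
```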
No, that's an illusion.
The LLM itself is static. Feeding tokens back through the context window forms a sort of temporary memory that doesn't affect the learned weights of the network at all.
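Here's what "static" means concretely (a toy stand-in assuming PyTorch; real inference stacks likewise run with gradients disabled):

```python
# Demonstrates that "remembering" via context never touches the weights.
import torch
import torch.nn as nn

model = nn.Linear(16, 16)   # toy stand-in for an LLM
model.eval()

before = {k: v.clone() for k, v in model.state_dict().items()}

with torch.no_grad():               # inference: no gradients, no learning
    for _ in range(100):            # "process" 100 prompts
        _ = model(torch.randn(1, 16))

after = model.state_dict()
print(all(torch.equal(before[k], after[k]) for k in before))  # prints True
```

The weights are bit-for-bit identical before and after, no matter what you put in the context.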
I don't get why people who don't understand what's happening keep insisting that current AIs are some sci-fi interpretation of AI. They're not. At least not yet.
So you have a mechanistic, formal model of how the brain functions? That's news to me.
Anyway, as Dijkstra put it, the question of whether computers can think is about as interesting as the question of whether submarines can swim.
Given the amount of ink spilled on the question, gotta disagree with you there.
It’s boring, and it’s also completely content-free. This particular instance doesn’t even make sense: how can it be exactly the same, yet more sophisticated?
Sorry.
I suspect those who can't see this either
(a) are software engineers amazed that a chatbot can write code, despite its having been trained on an unimaginably massive (and morally ambiguously procured) dataset that probably already contains something close to the boilerplate you want anyway, or
(b) don't have sufficient technical knowledge to ask questions probing enough to expose the weaknesses. That is, anything you might ask is either so open-ended that almost anything coherent will look like a valid answer (this covers most questions you could ask, outside of seriously technical fields) or has already been asked countless times before and is explicitly part of the training data.
Considering that LLMs with far fewer "neurons" than the brain are in many cases producing human-level output, there is some evidence, if circumstantial, that our brains may be doing something similar.
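Rough numbers, for scale (the brain figures are standard textbook estimates, and the apter comparison is probably parameters to synapses, since both are connection weights; frontier-model parameter counts are undisclosed, so GPT-3's published count stands in):

```python
# Back-of-envelope scale comparison, using commonly cited estimates.
brain_neurons = 8.6e10    # ~86 billion neurons (Herculano-Houzel estimate)
brain_synapses = 1e14     # order-of-magnitude estimate, ~100 trillion
gpt3_params = 1.75e11     # 175 billion weights (Brown et al., 2020)

print(f"synapses per parameter: {brain_synapses / gpt3_params:.0f}x")  # ~571x
```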
"A neuron in a neural network typically evaluates a sequence of tokens in one go, considering them as a whole input." -- ChatGPT
You could consider an RTX 4090 to be one neuron too.
They’re not, unless you blindly believe OpenAI press releases and crypto scammer AI hype bros on Twitter.