Pattern matching over large amounts of data can solve many problems; eventually we can probably get fully generated movies, music compositions, and novels. The catch is that all of the content of those works will have to have been formalized into rules before it is produced, since computers can only work with formalized data. None of those productions will ever contain an original thought, and I think that’s why GPT-3’s fiction feels so shallow.
So it boils down to a philosophical question: can human thought be formalized and written as rules? If it can, then no human ever has an original thought either, and the point is moot.
To be honest, perhaps the language model works better without the evolutionary baggage.
That isn't to discount the other things we can do with our neural nets. It is possible to think without language - see music, instantaneous mental arithmetic, intuition - but these are essentially independent specialised models running on the same hardware, which our language model can interrogate. We train these models from birth.
Whether intentional or not, AI research is very much going in the direction of replicating the human mind.
Their statement wasn’t that AGI is impossible, but rather that LLMs aren’t AGI, however well they might emulate intelligence.
I have a sneaking suspicion that all that will be required for bypassing the upcoming roadblocks is giving these machines:
1) existential needs that must be fulfilled
2) active feedback loops with their environments (continuous training) - see the sketch after this list
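Neither ingredient is exotic to sketch in code. Here is a minimal toy illustration (all names and numbers are hypothetical, and this is nothing like how any real LLM is trained): an agent with a depleting energy budget (an existential need) that updates its value estimates on every single interaction with its environment, so learning never stops.

```python
import random

# Toy sketch of the two ingredients above, purely illustrative:
# 1) an existential need: an energy level that depletes and must be refilled
# 2) a continuous feedback loop: value estimates are updated on every step,
#    so there is no separate training phase - the agent never stops learning.

ACTIONS = ["forage", "rest"]

def environment(action):
    """Returns energy gained: foraging usually pays off, resting never does."""
    if action == "forage":
        return random.choice([3, 3, -1])  # mostly food, sometimes wasted effort
    return 0

values = {a: 0.0 for a in ACTIONS}  # running value estimate per action
energy = 10.0
alpha = 0.1  # learning rate

for step in range(1000):
    energy -= 1.0  # the existential need: existing costs energy
    # Explore occasionally, otherwise exploit what has been learned so far.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    reward = environment(action)
    energy += reward
    # Continuous training: every interaction immediately updates the model.
    values[action] += alpha * (reward - values[action])
    if energy <= 0:
        print(f"agent died at step {step}")
        break
else:
    print(f"agent survived; learned values: {values}")
```

Whether scaling this kind of loop up actually bypasses the roadblocks is exactly the open question.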
We always thought that if an AI could do X, then it could do Y and Z. It keeps turning out that you can get really good at X without being able to do Y or Z, so it looks like we're moving the goalposts, when really we're just realizing that X wasn't as informative as we expected. Chess is the classic case: superhuman play turned out not to require anything resembling general reasoning. The issue is that we can't concretely define Y and Z, so we keep pointing at the wrong X.
But every indication is that we're getting closer.
> “there are/are not, additional properties to human level symbol manipulation, beyond what GPT encapsulates.”
GPT does appear to get an awful lot done with pattern extrapolation before we find its limits.
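To make "pattern extrapolation" concrete, here is the crudest possible analogy: a word-level bigram model. Everything below is a toy; GPT's architecture is vastly more sophisticated, but the question in this thread is whether the difference is one of degree or one of kind.

```python
import random
from collections import defaultdict

# A bigram model extrapolates the patterns in its training text with no
# understanding of what it is saying - it only knows what tends to follow what.

corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat ate the fish").split()

# Count which word follows which in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def extrapolate(seed, length=8):
    """Continue the pattern by sampling what usually follows the last word."""
    words = [seed]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:  # dead end: the last word never had a successor
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(extrapolate("the"))  # e.g. "the cat sat on the rug the dog sat"
```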
The notion of some sort of technological "singularity" is just silly. It is essentially an article of faith, a secular religion among certain pseudo-intellectual members of the chattering class. There is no hard scientific backing for it.
What, in your mind, should the goalposts be for AGI?