I wholly agree. Everyone is blinded by models - GPT-4 this, LLaMA 2 that - but the real source of the smarts is the dataset. Why else would every model, no matter how its architecture is tweaked, learn the same abilities from the same data? Why else would humans all be able to learn the same skills when every brain is quite different? It was the data, not the model.
And since we are exhausting the available quality text online, we need to start engineering new data with LLMs and validation systems. AIs need to introspect more on their training sets: not just train to reproduce them, but analyse, summarise, and comment on them. We reflect on our information; AIs should do more reflection before learning.
More fundamentally, how are AIs going to evolve past human level unless they make their own data or they collect data from external systems?
It's both.
It's clearly impossible to learn how to translate Linear A into modern English using only content written in pure Japanese that never references either.
Yet also, none of the algorithms before Transformers were able to first ingest the web, then answer a random natural language question in any domain — closest was Google etc. matching on indexed keywords.
> how are AIs going to evolve past human level unless they make their own data?
Who says they can't make their own data?
Both a priori (by development of "new" mathematical and logical tautological deductions), and a posteriori by devising, and observing the results of, various experiments.
Same as us, really.
How does an AI language model devise an experiment and observe the results? The language model is only trained on what’s already known; I’m extremely skeptical that this language-model technique can actually reason its way to a genuinely novel hypothesis.
An LLM is a series of weights sitting in the RAM of a GPU cluster; it’s really just a fancy prediction function. It doesn’t have the sort of biological imperatives (a result of being completely independent beings) or entropy that drive living systems.
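To make the "fancy prediction function" point concrete, here's a toy sketch (the vocabulary, weights, and function names are all invented for illustration, not any real model's API): a bigram "language model" really is nothing but a table of weights plus a pure function from context to a next-token distribution.

```python
import math

# Toy "model": the weights are just numbers, one logit per
# (previous token, next token) pair. Everything here is hypothetical.
VOCAB = ["the", "cat", "sat", "mat"]
W = {
    ("the", "cat"): 2.0, ("the", "mat"): 1.5,
    ("cat", "sat"): 2.5, ("sat", "the"): 1.0,
}

def predict(prev):
    """Pure function: weights + context in, next-token distribution out."""
    logits = [W.get((prev, t), 0.0) for t in VOCAB]
    z = sum(math.exp(l) for l in logits)
    return {t: math.exp(l) / z for t, l in zip(VOCAB, logits)}

dist = predict("the")
print(max(dist, key=dist.get))  # the highest-probability next token
```

A real transformer differs only in scale and in how the logits are computed from the weights; the "just a prediction function" characterisation holds either way.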
Moreover, if we consider how it works for humans, people have to _think_ about problems. Do we even have a model, or even an idea, of what “thinking” is? Meanwhile, science is a looping process that mostly requires a physical element (testing/verification). So unless we make some radical breakthroughs in general-purpose robotics, as well as overcome the thinking problem, I don’t see how AI can do some sort of tech breakout/runaway.
> I don’t see how AI can do some sort of tech breakout/runaway.
I'm expecting (in the mode, but with a wide and shallow distribution) a roughly 10x increase in GDP growth, from increased automation etc., not a singularity/foom.
I think the main danger is bugs and misuse (both malicious and short-sighted).
-
> How does an AI language model devise an experiment and observe the results?
Same way as Helen Keller.
Same way scientists with normal senses do for data outside human sense organs, be that the LHC or nm/s^2 acceleration of binary stars or gravity waves (or the confusingly similarly named but very different gravitational waves).
> The language model is only trained on what’s already known; I’m extremely skeptical that this language-model technique can actually reason its way to a genuinely novel hypothesis.
Were you, or any other human, trained on things unknown?
If so, how?
> An LLM is a series of weights sitting in the RAM of a GPU cluster; it’s really just a fancy prediction function. It doesn’t have the sort of biological imperatives (a result of being completely independent beings) or entropy that drive living systems.
Why do you believe that biological imperatives are in any way important?
I can't see how any of a desire to eat, shag, fight, run away, or freeze up… helps with either the scientific method or pure maths.
Even the "special sauce" that humans have over other animals didn't lead to any us doing the scientific method until very recently, and most of us still don't.
> Do we even have a model, or even an idea, of what “thinking” is?
AFAIK, only in terms of output, not qualia or anything like that.
Does it matter if the thing a submarine does is swimming, if it gets to the destination? LLMs, for all their mistakes and their… utterly inhuman minds and transhuman training experience… can do many things which would've been considered "implausible" even in a sci-fi setting a decade ago.
> So unless we make some radical breakthroughs in general-purpose robotics
I don't think it needs to be general, as labs are increasingly automated even without general robotics.
At the least, it is a computable function (as we don’t have any physical system that would be more general than that, though some religions might disagree). Which already puts human brains ahead of LLM systems, as we are Turing-complete, while LLMs are not, at least in their naive application (though their output can be fed back into subsequent invocations, and the resulting loop can be).
But also, that isn’t quite the whole story, since they can be arbitrarily precise in their approximation. Here[0] is a white paper addressing this issue, which concludes that attention networks are Turing-complete.
Technically you may not want to call it Turing complete given the limited context window, but I'd say that's like insisting a Commodore 64 isn't Turing complete for the same reason.
Likewise the default settings may be a bit too random to be a Turing machine, but that criticism would also apply to a human.
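As an aside on the "output fed back into subsequent invocations" point: even a fixed, finite rule over a bounded window becomes computationally universal once an outer loop recycles its output as the next input. Rule 110, an elementary cellular automaton that is provably Turing-complete, is a standard illustration of this, and a loose analogy for frozen weights plus a bounded context window plus scaffolding (the sketch below is just plain Rule 110, nothing LLM-specific):

```python
# Rule 110: a fixed lookup over a 3-cell window. The lookup alone is
# trivially finite; universality comes from the outer loop that feeds
# each output row back in as the next input (and lets the tape grow).
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(tape):
    """One pass: rewrite every cell from its 3-cell neighbourhood."""
    padded = [0, 0] + tape + [0, 0]  # let the pattern grow at the edges
    return [RULE_110[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

tape = [1]  # start from a single live cell
for _ in range(5):
    tape = step(tape)  # the output becomes the next input
    print("".join(".#"[c] for c in tape))
```

The fixed rule table plays the role of the frozen model; the `for` loop is the scaffolding that turns a bounded-window predictor into an open-ended computation.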
Wrong: recurrent models were able to do this, just not as well.
Neural nets look much more competitive by that standard.