What does this mean? Are you saying every human could have achieved this result? Or this? https://openai.com/index/new-result-theoretical-physics/
Because, well, you'd be wrong.
>, and much more efficiently as well. That we are able to brute-force a simulacrum of intelligence in a few narrow domains is incredible, but we should not denigrate humans when celebrating this.
Human intelligence was brute forced. Please, let's all stop pretending that those billions of years of evolution don't count and that we poofed into existence. And you can keep parroting 'simulacrum of intelligence' all you want, but that isn't going to make it any more true.
Meaning that however you (reasonably) define intelligence, if you compare humans to any AI system, humans are overwhelmingly more capable. Defining "intelligence" as "solving a math equation" is not a reasonable definition of intelligence. Or else we'd be talking about how my calculator is intelligent. Of course computers can compute faster than we can, but that's beside the point.
> Human intelligence was brute forced.
No, I don't mean how the intelligence evolved or was created. But if you want to make that argument, you're essentially asserting we have a creator, because to "brute force" something means it was intentional. Evolution is not an intentional process, unless you believe in God or a creator of sorts, which is totally fair but probably not what you were intending.
But my point is that LLMs essentially arrive at answers by brute force through search. Go look at what a reasoning model does to count the letters in a sentence, or the amount of energy it takes to do things humans can do with orders of magnitude less (our brain runs on about 20% of a lightbulb's power!).
If "brute force" worked for this, we wouldn't have needed LLMs; a bunch of nested for-loops can brute force anything.
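To make the for-loop point concrete: exhaustive search is trivial to write, but its cost explodes exponentially with problem size. A toy sketch in Python (the target string and `brute_force` helper are made up for illustration):

```python
from itertools import product
import string

# Exhaustive search: try every lowercase string of length n until one
# satisfies the predicate. Trivial to write, exponential to run.
def brute_force(predicate, length):
    for candidate in product(string.ascii_lowercase, repeat=length):
        word = "".join(candidate)
        if predicate(word):
            return word
    return None

# Fine for tiny spaces (26**3 = 17,576 candidates)...
print(brute_force(lambda w: w == "cat", 3))  # cat
# ...but a 20-character target would mean 26**20 (~2e28) candidates.
```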
The reason LLMs are clearly "magic" in ways similar to our own intelligence (which we very much don't understand either) is precisely that they can actually arrive at an answer without brute force, which is computationally prohibitive for most non-trivial problems anyway. Even if the LLM takes several hours spinning in a reasoning loop, those millions of tokens still represent a minuscule part of the total possible solution space.
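The "minuscule part of the solution space" claim is easy to put rough numbers on. With made-up but plausible figures (a ~50,000-token vocabulary and a 100-token answer), the space of possible outputs dwarfs anything a reasoning loop actually samples:

```python
import math

# Back-of-envelope sketch; the numbers are assumptions, not measurements.
vocab = 50_000       # assumed token vocabulary size
answer_len = 100     # assumed answer length in tokens

# Every possible 100-token string: 50_000 ** 100 candidates.
# Work in log10 to avoid overflowing floats.
log10_space = answer_len * math.log10(vocab)

# Even a multi-hour reasoning loop emits on the order of 10^7 tokens.
log10_explored = math.log10(10_000_000)

print(f"log10(candidate strings) ~ {log10_space:.0f}")                   # ~470
print(f"log10(fraction explored) ~ {log10_explored - log10_space:.0f}")  # ~-463
```

So even generously counting every emitted token as a "candidate explored," the loop touches roughly a 10^-463 sliver of the space, which is why pure enumeration was never on the table.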
And yes, we're obviously more efficient and smarter. The smarter part should come as no surprise given that our brains have vastly more "parameters". The efficient part is definitely remarkable, but completely orthogonal to the question of whether the phenomenon exhibited is fundamentally the same or not.
Really? Every human? Are you sure? Because I certainly wouldn't ask just any human for the things I use these models for, and I use them for a lot of things. So to me, the idea that all humans are 'overwhelmingly more capable' is blatantly false.
>Defining "intelligence" as "solving a math equation" is not a reasonable definition of intelligence.
What was achieved here or in the link I sent is not just "solving a math equation".
>Or else we'd be talking about how my calculator is intelligent.
If you said that humans are overwhelmingly more capable than calculators at arithmetic, well, I'd tell you you were talking nonsense.
>Of course computers can compute faster than we can, but that's beside the point.
I never said anything about speed. You are not making any significant point here lol
>No, I don't mean how the intelligence evolved or was created.
Well then, what are you saying? Because the only brute-forced aspect of LLM intelligence is its creation. If you don't mean that, then just drop the point.
>But if you want to make that argument you're essentially asserting we have a creator, because to "brute force" something means it was intentional.
First of all, sorry, but this makes no sense. Evolution is regularly described as a brute-force process by atheist and religious scientists alike.
Second, I don't have any problem with people thinking we have a creator, although even that stance still doesn't necessarily mean a magical 'poof into existence' either.
>But my point is that LLMs essentially arrive at answers by brute force through search.
Sorry, but that's just not remotely true. This is so untrue I honestly don't know what to tell you. This very post, with the transcript available, is an example of how untrue it is.
>or the amount of energy it takes to do things humans can do with orders of magnitude less (our brain runs on about 20% of a lightbulb's power!).
Meaningless comparison. You are looking at two completely different substrates. Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run this in real time.
Yes, in many ways absolutely. Just because a model is a better "Google" than my dummy friend doesn't mean that same friend isn't more capable in countless cases.
> Meaningless comparison. You are looking at two completely different substrates. Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run this in real time.
Isn't that just more proof of how efficient the human brain is? Especially given that a wire has much better properties than water solutions in bags.
Here are some definitions of intelligence, for example:
> The aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment.
> "...the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills".
> Goal-directed adaptive behavior.
> a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation
But even a housefly possesses levels of intelligence regarding flight and spatial awareness that dominate any LLM. Would it be fair to say a fly is more intelligent than an LLM? It certainly is along a narrow set of axes.
> Because the only brute-forced aspect of LLM intelligence is its creation.
I would consider statistical reasoning systems that can simulate aspects of human thought to be a form of brute force. Not quite an exhaustive search, but massively compressed experience + pattern matching.
But regardless, even if both forms of intelligence arrived via some form of brute force, what is more important to me is the result of that: how the process of employing our intelligence looks.
> This very post, with the transcript available is an example of how untrue it is.
The transcript lacks the vector embeddings of the model's reasoning. It's literally just a summary from the model, and not even that, really.
> Do you realize how much compute it would take to run a full simulation of the human brain on a computer? The most powerful supercomputer on the planet could not run this in real time.
You're so close to getting it lol