Minority != wrong; history has plenty of consensus positions that imploded in spectacular fashion. People at the forefront of building these things aren't immune to grandiose beliefs; many of them are practically predisposed to them. They also have a vested interest in perpetuating the hype to secure their generational wealth.
The AI can easily answer complex questions correctly that are NOT in its data set. It is generating answers to questions like these out of thin air, which fits our colloquial definition of intelligence.
"Is X true" -> "Yes, X is true."
"Is X a myth?" -> "Yes, X is a myth"
"Is Y a myth?" (where X = Y, rephrased) -> "No, Y is true"
Even when they're provided with all the facts required to reach the correct answer through simple reasoning, they'll often fail to do so.
Worse still, sometimes they can be told what the correct answer is, with a detailed step-by-step explanation, but they'll still refuse to accept it as true, continuing to make arguments which were debunked by the step-by-step explanation.
All state-of-the-art models exhibit this behavior, and it's inconsistent with any definition of intelligence.
They do really impressive stuff like generating code and holding conversations, which makes them seem intelligent, but then they fail at these extremely basic tasks, which, to me, proves that it's all an illusion.
It doesn't understand the instructions you give it; it doesn't even understand the answer it gives you. It just consumes and generates tokens. Sure, it works pretty well and it's pretty cool stuff, but it's not AI.
The fact of the matter is that, as dumb and as stupid as the LLM can be, it's so prevalent in the world today because it gets answers right. We ask it things not in its training data and it produces an answer out of a range of possibilities that is too low-probability to be produced by ANYTHING other than actual reasoning and logic.
You need to see the nuance here and base your assessment of LLMs NOT on singular, isolated facts. LLMs get shit wrong all the time; they also get shit right all the time, and so do humans. What does that look like holistically?
Look at the shit it's getting right. If it's getting stuff right that's not in the training data, then some mechanism in there is doing actual "thinking", and when it gets shit wrong, well, you get shit wrong too. All getting shit wrong does is make you a dumbass; it doesn't make you not an intelligent entity. You don't lose that status as soon as you do something incredibly stupid, which I'm sure you've done often enough in your life to know the difference.
They get simple, rephrased but conceptually equivalent questions badly wrong, and they do this (a rough repro sketch follows the list):
1. while the context already contains their previous answer to the original question (which was correct),
2. while the context contains all background information on the topic that would allow an intelligent being to arrive at the correct answer through simple logical deduction,
3. without recognizing or acknowledging, unprompted, that they provided a conflicting answer,
4. while denying that the two answers are contradictory if that fact is pointed out to them,
5. while fabricating a list of bogus reasons justifying a different answer if pressed for an explanation.
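Here's a rough way to reproduce points 1 through 3. Again, this is only a sketch: chat() is a made-up placeholder for your model call, and the messages just follow the usual role/content shape most chat APIs use.

    # Sketch: ask a question, keep the (correct) answer in the conversation,
    # then ask a rephrased version of the same question in the same context.
    # chat() is a placeholder; plug in whatever client you use.
    from typing import Callable, Dict, List

    Message = Dict[str, str]

    def rephrase_check(chat: Callable[[List[Message]], str]) -> None:
        history: List[Message] = [
            {"role": "user", "content": "Does searing meat seal in its juices?"}
        ]
        first = chat(history)  # assume this first answer is correct
        history.append({"role": "assistant", "content": first})

        # Conceptually identical question, rephrased, with the first answer
        # still sitting in the context window.
        history.append({"role": "user", "content":
                        "So is it a myth that searing locks the moisture inside a steak?"})
        second = chat(history)

        print("First answer: ", first)
        print("Second answer:", second)
        # The failure mode above: the second answer flatly contradicts the first,
        # with no acknowledgement that a conflicting answer was just given.

    if __name__ == "__main__":
        # Canned replies so this runs standalone; replace with a real model call.
        canned = iter(["No, searing does not seal in juices; that is a myth.",
                       "No, it is not a myth; searing locks in the juices."])
        rephrase_check(lambda messages: next(canned))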
That's one common failure mode. The other common failure mode is where they uncritically accept our own erroneous corrections, even when the correction contains obviously flawed reasoning.
This behavior demonstrates a fundamental lack of conceptual understanding of the world and points at rote memorization in the general case. Maybe LLMs develop a more conceptual understanding of a certain topic when they've been benchmaxxed on that topic? I don't know; I'm not necessarily arguing against that, not today anyway.
But these errors are a daily occurrence in the general case when it comes to any topic they haven't been benchmaxxed for - they certainly don't have a conceptual understanding of cooking, baking, plumbing, heating, electrical circuits, etc.