That's a reasonable goal, but it's also not what people were aiming for historically. It's also very expansive: if human-level intelligence means outperforming, in every field, every human who ever lived, that's a high bar to meet. Indeed, it means that no human has ever achieved human-level intelligence.
Just that AGI must be able to replace a human for a particular job, for all jobs that are typically performed by humans (such as the humans you would hire to build a tech startup). It's fine to have "specialty" AGIs that are tuned for job X or job Y, just as some people are more suited to job X or job Y.
Which is pretty fair.
And what you're arguing for is effectively the same: an AI (maybe with some distilled specialty models) that can fill every role from customer service rep to analyst to researcher to the entire C-suite, including highly skilled professionals like CPAs and lawyers. There are zero humans alive who can do all of those things simultaneously. Most humans would struggle with a single one. It's perfectly fine for you to hold that as the standard of when something will impress you as an AGI, but it's absolutely a moved goalpost.
It also doesn't matter much now anyway: we've gotten to the point where the proof is in the pudding. The stage is now AI-skeptics saying "AI will never be able to do X," followed by some model or another being released that can do X six months later and the AI-skeptic saying "well what about Y?"
That goalpost makes no sense: AIs are not human. They are fundamentally different, and will therefore always have a different set of strengths and weaknesses. Even long after vastly exceeding human intelligence everywhere it counts, an AI will still perform worse than us on some tasks. Importantly, an AI wouldn't have to meet your goalpost to be a major threat to humanity, or to render virtually all human labor worthless.
Think about how anthropomorphic this goalpost is if you apply it to other species: "Humans aren't generally intelligent, because their brains don't process scents as effectively as dogs' do, and they still struggle at spatially locating scents."
> They are fundamentally different, and therefore will always have a different set of strengths and weaknesses.
and this:
> render virtually all human labor worthless
actually conflict. Your job comes from comparative advantage, meaning that being different from other people matters more than how good you are in absolute terms (absolute advantage).
If the AGI could do your job better than you, it doesn't matter, because it has something better to do than that. And just like humans have to be paid so they can afford food and shelter, AGIs have to be paid so they can afford electricity and GPUs to run on.
(Besides, if the AGI really is a replacement for a human, it probably has consumerist desires and wants to be paid the median wage too.)
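The comparative-advantage point above can be made concrete with a toy calculation (all numbers here are made up purely for illustration): even when the AGI holds an absolute advantage at both tasks, total output rises when each party specializes in the task where its opportunity cost is lowest.

```python
# Toy comparative-advantage numbers (hypothetical, for illustration only).
# Hourly output:            reports/hr   designs/hr
#   AGI                         10            8
#   Human                        2            4
# The AGI is absolutely better at both tasks, but opportunity costs differ.

def opportunity_cost(own_rate, other_rate):
    """Units of the other good forgone to produce one unit of this good."""
    return other_rate / own_rate

# One design costs the AGI 1.25 forgone reports, but costs the human only 0.5:
assert opportunity_cost(8, 10) == 1.25   # AGI's cost per design
assert opportunity_cost(4, 2) == 0.5     # human's cost per design

# Give each 8 hours. Baseline: both split their time evenly between tasks.
baseline = {"reports": 10 * 4 + 2 * 4, "designs": 8 * 4 + 4 * 4}   # 48 / 48

# Specialize along comparative advantage: the human does only designs,
# freeing the AGI to shift two more hours into reports (6h/2h split).
trade = {"reports": 10 * 6, "designs": 8 * 2 + 4 * 8}              # 60 / 48

# Same designs, strictly more reports: trade produces more in total,
# so even a "better at everything" AGI gains by leaving work to humans.
assert trade["designs"] >= baseline["designs"]
assert trade["reports"] > baseline["reports"]
```

The specific numbers are arbitrary; the point is structural: as long as the productivity ratios differ, specialization plus trade beats each party doing everything alone.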
In light of all this, I would very much like to stay in contact with you. I've connected with one other HN user so far (jjlustig) and I hope to connect with more, so that together we can effect political change around this important issue. I've created a Twitter account to do this, @stop_AGI. Whether or not you choose to connect, please do reach out to your state and national legislators (if in the US) and convey your concern about AI. It will be more valuable than you know.
> (...)
> That is the goalpost for AGI. It’s an artificial human - a human replacement.
This considerably moves the goalpost. An AGI can have a different kind of intelligence than humans. If an AGI is as intelligent as a cat, it's still AGI.
More likely, the first AGI we develop will greatly exceed humans in some areas but have gaps in others. It won't completely replace humans, just like cats don't completely replace humans.
AGI was never about exactly replicating humans; it's about creating artificial intelligence. Intelligence is not one-size-fits-all: there are many ways of being intelligent, and the human way is just one among many.
Indeed we can say that even between humans, intelligence varies deeply. Some humans are more capable in some areas than others, and no human can do all tasks. I think it's unreasonable to expect AGI to do all tasks and only then recognize its intelligence.
(Note: GPT-4 isn't AGI)
Possibly, someone who is allergic to cats.
'Everything a human can do' is not the same as 'anything any human can do, as well as the best humans at that thing (because those are the ones we pay)'. Most humans cannot do any of the things you state you are waiting for an AI to do before you'll call it 'general'.
Therefore, the first part of your statement is the initial goalpost, and the second part implies a very different one. The new goalpost you propose would imply that most humans are not generally intelligent, which you could argue... but it would definitely be a new goalpost.
Somehow this test got dumbed down over time, probably in an effort to pass it, into an investigator having to decide which of two sides is an AI, with no other information to go on. That's a comparatively trivial test for the "AI" to pass, as it merely requires creating a passable chatbot. Imitation, by contrast, is an exceptional challenge: it implicitly requires the ability to imitate anybody, whether a professional athlete, a man who scored perfectly on the LSAT, or even somebody as specific as "John Carmack."
[1] - https://www.espace-turing.fr/IMG/pdf/Computing_Machinery_and...
By their marketing alone, OpenAI has moved the goalposts more than anything else. They've managed to lower the bar for AGI from "artificial general intelligence" to "regurgitates and recombines to form passable outputs, given enough labelled training data".
"Surfacing" is a song by Slipknot from their self-titled debut album, released in 1999. Please note that the lyrics contain explicit language. Here is a portion of the lyrics with some of the explicit content redacted:
"Running out of ways to run I can't see, I can't be Over and over and under my skin All this attention is doing me in!
[Chorus:] (Expletive) it all! (Expletive) this world! (Expletive) everything that you stand for! Don't belong! Don't exist! Don't give a (expletive), don't ever judge me!
Picking through the parts exposed Taking shape, taking shag Over and over and under my skin All this momentum is doing me in!
[Chorus:] (Expletive) it all! (Expletive) this world! (Expletive) everything that you stand for! Don't belong! Don't exist! Don't give a (expletive), don't ever judge me!
You got all my love, livin' in your own hate Drippin' hole man, hard step, no fate Show you nothin', but I ain't holdin' back Every damn word I say is a sneak attack When I get my hands on you Ain't a (expletive) thing you can do Get this cuz you're never gonna get me I am the very disease you pretend to be
I am the push that makes you move
[Chorus:] (Expletive) it all! (Expletive) this world! (Expletive) everything that you stand for! Don't belong! Don't exist! Don't give a (expletive), don't ever judge me!"
Please be aware of the explicit content before sharing or using these lyrics in any context.
And that's ignoring that, arguably, chatbots have been passing the Turing test (against non-expert judges) since ELIZA in the '60s. [1]
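For a sense of how low that bar can be, here is a minimal ELIZA-style responder (a from-scratch sketch with invented rules, not Weizenbaum's actual script): a handful of regex-and-reflection rules is enough to sustain a superficially coherent exchange with a non-expert judge.

```python
# Minimal ELIZA-style chatbot sketch: pattern -> templated-response rules.
# The rules below are illustrative, not Weizenbaum's original rule set.
import re

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
    (r"(.*)", "Please, go on."),       # catch-all keeps the conversation moving
]

def respond(message: str) -> str:
    """Return the first rule's response whose pattern matches the message."""
    text = message.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*m.groups())

print(respond("I am feeling anxious"))  # How long have you been feeling anxious?
print(respond("My mother called."))     # Tell me more about your family.
```

The trick is pure reflection: the bot never models anything, it just echoes fragments of the user's input back inside canned frames, which is often enough to seem responsive.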
Does ChatGPT fail this simple test: "I am going to ask you questions, but if I go silent for a couple minutes, I want YOU to start asking ME random questions."
The "GI" in AGI stands for general intelligence. If what you said is your benchmark for general intelligence, then humans who cannot perform all these tasks to a hirable standard are not generally intelligent.
What you're asking for would already be bordering on ASI, artificial superintelligence.
By that definition do humans possess general intelligence?
Can you do everything a human can do? Can one human be a replacement for another?
I don't think it makes sense without context. Which human? Which task?..
I disagree with the premise. A single human isn't likely to be able to perform all these functions. Why demand that GPT-4 encompass all of these activities? It is already outperforming most humans on standardized tests that rely only on vision and text. A human needs to be trained for these tasks.
It's already a human replacement. OpenAI has already said that GPT-4 comes "with great impact on functions like support, sales, content moderation, and programming."
This could mean something which is below a monkey’s ability to relate to the world and yet more useful than a monkey.
No, AGI would not need you to start a startup. It would start it itself.
It's a clear analogy.
This should become an article explaining what AGI really means.
I think the question "Can this AGI be my start-up co-founder? Or my employee #1?", or something like that, is a great metric for when we've reached the AGI finish line.
This sounds like a definition from someone who never interacts with anyone outside the top 1% of performers and the strongly educated.
Go into a manufacturing, retail or warehouse facility. By this definition, fewer than ten or twenty percent of the people there would have "general intelligence", and that's being generous.
Not because they are stupid: that's the point; they're not. But it sets the bar for "general intelligence" so absurdly high that it would exclude many people who are, in fact, intelligent.