We're not likely to all get on the same page exactly after one round of discussion, but I think it would help accelerate the process and help us to challenge our own assumptions and update our own mental models.
I also don't require it to do things that require embodiment, like "play a game of baseball" or whatever. Although I do see a time coming when robotics and AI will align to the extent that an AI-powered robotic player will be able to "play a game of baseball" or suchlike.
To expand on this a bit: I don't over-emphasize the "general" part like some people do. That is, some people argue against AGI on the basis that "even humans aren't a general intelligence". That, to me, is mere pedantry and goal-post moving. I don't think anybody involved in AGI ever expected AGI to necessarily mean "the most general possible problem solver that can exist in the state space of all possible general problem solvers" or whatever. Disclosure: in that previous sentence I'm partially paraphrasing Ben Goertzel from a recent interview[1] I saw.
I definitely think the line of argumentation that even humans aren't a general intelligence is an unhelpful one that's also just wrong on an intuitive level.
Though what's becoming clear to me is that the effect that being part of a multi-agent system has on a single agent's ability to generalize is enormous, and likely to be quite important when thinking about AGI too.
Also what's becoming clear is that the possible state space of what might constitute something that could be called AGI is likely enormous.
I'm mostly interested in it from an X-risk perspective: what properties of an AGI system are necessary and sufficient to pose existential risk, and what are the visible thresholds you would need to cross on the path to such a system coming into existence?
I think it has to do with being able to teach itself arbitrary information, including how to learn better, and, importantly, to recognize what it does and doesn't know.
LLMs feel like a massive step towards that. It feels similar to a single "thought process".
A "good / useful AGI" might have aspects such as:

- Ability to coherently communicate with humans (hard to prove it's working without this) and other agents.
- Ability to make use of arbitrary tools.
This sounds very similar to AutoGPT (what people poke fun at as "an LLM in a while loop"), and if the brain were AGI, I think it'd work very well.
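The "LLM in a while loop" idea can be sketched in a few lines. This is a toy illustration, not AutoGPT's actual implementation; `call_llm` is a hypothetical stand-in for whatever chat-completion API you'd use in practice.

```python
# Minimal sketch of an "LLM in a while loop" agent.
# call_llm is a placeholder; a real agent would call a model API here.

def call_llm(prompt: str) -> str:
    """Hypothetical model call. Returns a canned reply for illustration."""
    return "DONE: example result"

def agent_loop(goal: str, max_steps: int = 10) -> str:
    """Repeatedly feed the running history back to the model until it
    declares the goal finished (or we hit a step budget)."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        prompt = "\n".join(history) + "\nNext step (or 'DONE: <result>'):"
        step = call_llm(prompt)
        if step.startswith("DONE:"):
            return step[len("DONE:"):].strip()
        history.append(f"Step taken: {step}")
    return "gave up after max_steps"
```

The entire "agent" is just that loop: the model's own previous outputs become the next prompt, which is why the architecture is so easy to mock and so hard to keep on track over long horizons.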
I think there's a critical difference between LLMs and AGI, which is metacognition.
If an LLM had proper metacognition, maybe it would still hallucinate, but then it would realize and say "actually, I'm not sure; I just started hallucinating that answer. I think I need to learn more about xyz." And then (ideally) it could go ahead and do that (or ask if it should).
Another piece I've thought about is subjective experience.
Inserting experiences into a vector store and recalling them in triggering situations.
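A minimal sketch of that insert/recall idea, assuming a similarity search over embedded experiences. In a real system `embed` would be a learned embedding model; here it's a crude bag-of-words vector so the example is self-contained.

```python
# Toy "experience memory": insert experiences, recall the most similar
# one when a triggering situation comes along.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: bag-of-words counts (a real system would use
    a neural embedding model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ExperienceStore:
    def __init__(self):
        self.items = []  # list of (vector, experience text)

    def insert(self, experience: str) -> None:
        self.items.append((embed(experience), experience))

    def recall(self, situation: str, k: int = 1) -> list:
        """Return the k stored experiences most similar to the situation."""
        q = embed(situation)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[0]),
                        reverse=True)
        return [text for _, text in ranked[:k]]

store = ExperienceStore()
store.insert("touched a hot stove and got burned")
store.insert("umbrella kept me dry in the rain")
print(store.recall("the stove is hot"))  # recalls the most similar memory
```

Swap the bag-of-words `embed` for a real embedding model and the linear scan for an approximate-nearest-neighbor index and you have the shape of what most "LLM memory" plugins actually do.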
That is, humans can sense the surrounding world. (Some people call that sentience.) Humans can think about what they sense, organize it, categorize it, find patterns, think both inductively and deductively. (Some people call that sapience.) But humans can do something else: as they think, they can observe their own thinking, and then think about that. "How do I reason? Why did I decide that? How do I determine what evidence is accurate?" I don't know if "metacognition" is the word that people use for that, but it's part of what I think AGI is.
You could argue that being able to observe and think about what was just said aligns with ReAct (https://arxiv.org/pdf/2210.03629.pdf). Maybe a tweak of directly assessing a previous thought, and modifying the output / thought process based on that assessment, would help, but I'm not sure that's quite enough.
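That tweak might look something like the loop below: after each thought, the model is asked to assess its own confidence in that thought before committing to it. This is a rough sketch of the idea, not the ReAct paper's method; `call_llm` is a hypothetical stand-in that returns canned replies so the control flow is visible.

```python
# Sketch of a ReAct-style loop with an added "assess your previous
# thought" step. call_llm is a placeholder for a real model API.

def call_llm(prompt: str) -> str:
    """Hypothetical model call with canned replies for illustration:
    assessment prompts come back 'uncertain', everything else gets a
    (possibly hallucinated) answer."""
    if prompt.startswith("Assess"):
        return "uncertain"
    return "Answer: the capital of Atlantis is Poseidonia"

def react_with_self_check(question: str, max_turns: int = 3) -> str:
    """Generate a thought, then ask the model to judge its own
    confidence in it; bail out honestly if the verdict is 'uncertain'."""
    transcript = [f"Question: {question}"]
    for _ in range(max_turns):
        thought = call_llm("\n".join(transcript) + "\nThought:")
        verdict = call_llm(f"Assess this thought for confidence: {thought}")
        if verdict == "uncertain":
            return "I don't know; I should look this up."
        transcript.append(f"Thought: {thought}")
    return thought
```

Of course, this only helps if the assessment step is actually better calibrated than the original generation, which is exactly the open question the comment raises.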
If it can't ask "why" and step back through why it thinks something, I think we'll keep having the confident-hallucination problem, rather than "I don't know".
But maybe that's touching on the quality of AGI.
Is "reasoning" a necessity for the base definition?
That's pretty close to the way the term is normally used.
FWIW:
I suspect it at least involves the combination of being able to continuously learn in response to novel stimuli and developing the goal of self-preservation.
2. A key cornerstone of human intelligence is the ability to create something completely new that cannot be predicted or calculated in advance; another is will. Neither of those is even touched by current neural networks, though DALL-E makes a nice imitation of the first.
"By AGI, we mean highly autonomous systems that outperform humans at most economically valuable work."
https://www.linkedin.com/posts/maxirwin_as-discourse-continu...
“As discourse continues on the impact and potential of Artificial General Intelligence, I propose these 5 levels of AGI to use as a measurement of capability:
Level 1: “Write this marketing copy”, “Translate this passage” - The model handles this task alone based on the prompt
Level 2: “Research this topic and write a paper with citations” - 3rd party sources as input – reads and responds to the task
Level 3: “Order a pizza”, “Book my vacation” - 3rd party sources as input and output - generally complex with multiple variables
Level 4: “Buy this house”, “Negotiate this contract” - Specific long-term goal, multiple 3rd party interactions with abstract motivation and feedback
Level 5: “Maximize the company profit”, “Reduce global warming” - Purpose driven, unbounded 3rd party interactions, agency and complex reasoning required
I had the pleasure to present this yesterday afternoon on WXXI Connections (https://lnkd.in/gA9CugQR), and again in the evening during a panel discussion on ChatGPT hosted by TechRochester and RocDev (https://lnkd.in/gjYDEkBE).
This year, we will see products capable of levels 1, 2, and 3. Those three levels are purely digital, and aside from the operator all integration points are done through APIs and content. For some examples, level 1 is ChatGPT, level 2 is Bing, and level 3 is AutoGPT.
Levels 4 and 5 are what I call "hard AGI" - as they require working with people aside from the operator, and doing so on a longer timeline with an overall purpose. We will likely see attempts at this technology this year, but it will not be successful.
For technology to reach a given level, it must perform as well as a person who is an expert at the task. A broken buggy approach that produces a poor result does not qualify.
Thanks for reading, and if you would like to discuss these topics or work towards a solution for your business, contact me to discuss!”
It seemed fine to Alan Turing.
I truly believe the "economic impact" and by extension the "political impact" will be undeniable and profound long before people get tired of arguing whether or not it is AGI, or whether or not it is sentient, or whether or not it has a 'soul' (despite not being able to clearly define 'soul').