"Highly autonomous systems" and "most economically valuable work" aren't precise enough to be useful.
"Highly" implies that there is a continuum, so where does directed end and autonomy begin?
"Most economically valuable work"... each word in that has wiggle room, not to mention that any reasonable interpretation of it is a shifting goalpost as the work done by humans over history has shifted a great deal.
The point is that none of this is defined in a way so that people can agree that something has AGI/ASI/etc. or not. If people can't agree then there's no point in talking about it.
EDIT: interestingly, the OpenAI definition of AGI specifically means that a subset of humans do not have AGI.
* Which is a much larger class of jobs than just engineering. It also excludes field engineers and other types of engineers who need a physical body for interacting with customers, etc.**
** Though even then, you could in theory divvy up the engineering part and the customer-interaction part of the job, where the human doing the interaction is primarily a proxy for the engineering agent in his earbud.
> there's no reason we'd need to have humans working jobs that only involve typing stuff into a computer and going to meetings all day
I'm not sure I understand, and want to check. That really applies to a lot of jobs: all admins, accountants, programmers, probably lawyers, and probably all C-suite execs. It's harder for me to think of jobs that don't fit under this umbrella. I can think of some, of course[0], but this is a crazy amount of replacement across a wide set of skills.

But I also think that's a bad line to draw. Many of those jobs involve a lot more than just typing into a computer. By your criteria we'd also be replacing most scientists, since so many are not doing physical experiments and are using the computer to read the work of peers and develop new models. But is the definition intended to exclude jobs where the computer just isn't the most convenient interface? If so, we should be including even more jobs, since we can then build the connection to that interface.
I think we need a much more refined definition. I don't like the broad-strokes "is computer" criterion, nor do I like skills-based definitions: they're much easier to measure but easily hackable. I think we should try to define it more by our actual understanding of what intelligence is. While we don't have a precise definition, we have some pretty good answers already. I know people act like the lack of an exact definition is the same as having no definition, but that's a crazy framing. If we had that requirement we wouldn't have any definitions at all, since we know nothing with infinite precision. Even physics is just an approximation; what matters is the convergence toward the truth.[1]
[side note] The conventional way to do references or notes here is with brackets, like I did, so you don't have to escape your asterisks. *Also*, if you lead a paragraph with two spaces you get verbatim text.
[0] farmers, construction workers, plumbers, machinists, welders, teachers, doctors, etc.
[1] https://hermiene.net/essays-trans/relativity_of_wrong.html
If it can do things as well as or better than humans, then either the AI has a type of general intelligence or the human does not.
Defining capabilities by outcome rather than by implementation should be very familiar to an engineer of any kind, because that's how every unsolved implementation must start.
> If it can do things as well as or better than humans, then either the AI has a type of general intelligence or the human does not.
I don't buy that. By your definition every machine has a type of general intelligence: not just a bog-standard calculator, but also my broom. It doesn't matter if you slap "smart" on the side, I'm not going to call my washing machine "intelligent", especially considering it's over a decade old.
I don't think these definitions make anything clearer. If anything, they make things less clear. They equate humans to mindless automata, create AGI by sly definition, and let the proposer declare success arbitrarily.
> If it can do things as well as or better than humans, in general, then either the AI has a type of general intelligence ...
Are you asking for the current understanding of what specific parts of human intelligence are economically valuable?
But beyond that, part of the nature of that change over time is that things tend to be valuable because they're scarce.
So the definition from upthread becomes roughly "highly autonomous systems that outperform humans at [useful things where the ability to do those things is scarce]", or alternatively "highly autonomous systems that outperform humans at [useful things that can't be automated]".
Which only makes sense if the reflexive part I'm substituting in brackets (it depends on the thing being observed) is pinned to a specific as-of date. If it floats, i.e. references the date at which the definition is being evaluated, the definition is nonsensical.