I completely agree. We can't even measure each other well, let alone machines.
Now I already hear you typing "but those roles should also be handled by AI if it's AGI", and I agree that an AI that can claim to be AGI should be able to handle those roles (as separate agents if necessary). But in a real setup it probably won't be the best choice to give it those roles, for cultural and legal reasons. Or it might simply not be cost-effective. Not to mention that under most definitions of AGI there can still be humans more capable than the AI, as long as the AI hits the 50th percentile mark or something like that. So even if it's an AGI with the ability to do these roles, we will still have humans in the loop for a long, long time.
But today you can't do those roles with AI, meaning the AI isn't AGI. I agree we will probably have humans in the loop here and there even after we achieve AGI, for various reasons, but today you *need* to have humans in the loop; not having them isn't an option.
> I am unaware of any definition of AGI that states AGI cannot have humans in the loop.
It's not stated in the definition, but it's a trivial consequence of the most common one: "has human-level intelligence".
AI as in "artificial intelligence", not AS as in "artificial skills": doing one skill to the same level as a human is not AI. An AI needs to be able to learn all the skills humans can learn, to the same level.