In many ways, AGI and AI are opposites. We don’t actually “train” humans. We present them with information, and a human can choose to ingest that information or - just as importantly - choose not to.
A 3yo learns English not by being force-fed, but through their own curiosity and motivation to learn.
GPT and the like are proving to be amazing tools. The probabilistic-algorithm model of computation (“plain” AI) is going to transform industries.
But the more sophisticated these tools get, the further from AGI they will become.
When we create AGI, it will begin with a blank slate, not the whole of human knowledge. It will study some topics deeply and be uninterested in others. It might draw its own conclusions instead of just taking the conclusions presented to it. Or it may decide not to study at all, preferring to instead become a hermit.
David Deutsch is the most compelling on this topic: https://open.spotify.com/episode/5nGOkhcJ7VszXm2WVgxKyC?si=8...