The goalposts keep shifting, yes. The new definition of AGI is much closer to superintelligence. However, depending on how close to experts the model is, there's room for more.
If the model is basically on par with experts, then it's still human-fallible. But... suppose a general intelligence that is at every task what today's chess engines are at chess.
Basically the "Oh, you thought that was a mistake? No, you just didn't understand the program" level of intelligence, even for the smartest of humans.