Let's say I'm deep in a coding problem. A co-worker comes by and asks, "How did your team do in the game yesterday?" I say, "Um, uh... sorry, my head's not there right now." It takes us time to swap between mental "spaces".
So, suppose I have an AGI (defined as having a trained model for almost everything, even if that turns out to be a large number of different models). If it has to load the right model before it can reason on a topic, then that's pretty human-like. (As long as it can figure out which model to load...)
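To make the "figure out which model to load" step concrete, here's a toy sketch. Everything in it is invented for illustration (the topics, the keyword lists, the function names); a real system would use something far richer than keyword overlap, but the shape is the same: a cheap router decides which expensive specialized model to swap in before any reasoning happens.

```python
# Toy illustration of routing a query to a mental "space" (model).
# All names and keyword lists here are hypothetical, chosen only to
# mirror the coding-vs-sports example above.

KEYWORDS = {
    "coding": {"bug", "compile", "function", "refactor"},
    "sports": {"game", "team", "score", "season"},
}

def route(query: str) -> str:
    """Pick which specialized model a query belongs to."""
    words = set(query.lower().replace("?", "").split())
    best, overlap = "general", 0
    for topic, vocab in KEYWORDS.items():
        hits = len(words & vocab)
        if hits > overlap:
            best, overlap = topic, hits
    return best

def answer(query: str) -> str:
    topic = route(query)
    # In a real system, this is where the costly "context switch"
    # would happen: loading the model for `topic` before reasoning.
    return f"[{topic} model] thinking about: {query}"

print(answer("How did your team do in the game yesterday?"))
```

The human analogy lives in that `route` call: the co-worker's question gets dispatched to the "sports" space, and the cost of the swap is exactly the "my head's not there right now" moment.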
The one thing missing is that (at least some) humans can figure out linkages between different mental "spaces" and turn them into a more coherent, holistic mental space, even without having (all of) each space at front-of-mind at any given moment. I'm not sure whether this flavor of AGI could do that - whether it could see the connections between its different models.