Memorizing is just one of the actions such an agent is able to perform. Another mental action besides memory is attention. The agent would also need to simulate the world, the people, and the systems it interacts with (to know how they behave) in order to reason and plan.
In short, an AGI would need: sensing (deep neural nets for vision, audio and other modalities), attention, memory, the ability to estimate the desirability and effects of various actions (a kind of imagination), an extensive database of commonly known facts, and the ability to act (for example through speech and movement).
Many of these systems have been demonstrated. Sensing, attention and memory are commonplace in ML papers. Creativity is demonstrated by generative models that can write text, compose music and paint. The ability to predict the future and reason about it was demonstrated by AlphaGo. Speech and motor control are under development. We have most of the necessary building blocks, but nobody has put them together to form a functioning general AI yet.
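Of the blocks listed above, attention is perhaps the most concrete: it is just a learned weighting over inputs. A minimal sketch of scaled dot-product attention (the mechanism popularized by the Transformer paper) in plain NumPy, with hypothetical toy shapes chosen purely for illustration:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output is a weighted sum of the values

# Toy example: 2 queries attending over 3 key/value pairs of dimension 4.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4): one 4-dimensional output per query
```

The point is not the arithmetic but that "attention" here is an off-the-shelf, well-understood component, which is what makes the integration of these blocks, rather than their invention, the remaining challenge.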
My preferred definition is: "An AGI is one which knows which questions are sensible to ask".
That's because it seems to me that most "AI-lite"-type goals are procedural, whereas an AGI needs to have agency.