You observe: {new input}
You remember: {from previous output}
React to this in the following format:
My inner thoughts: [what do you think about the current state]
I want to remember: [information that is important for your future actions]
Things I do: [Actions you want to take]
Things I say: [What I want to say to the user]
...
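The loop above can be sketched in a few lines of Python. Everything here is illustrative: `call_llm` is a hypothetical stand-in for any LLM completion API (stubbed out below so the sketch runs), and the section names simply mirror the template.

```python
# Minimal sketch of the observe/remember/react loop described above.

PROMPT_TEMPLATE = """You observe: {observation}
You remember: {memory}
React to this in the following format:
My inner thoughts: [what do you think about the current state]
I want to remember: [information that is important for your future actions]
Things I do: [Actions you want to take]
Things I say: [What I want to say to the user]
"""

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: a real implementation would call an LLM API.
    return ("My inner thoughts: the user greeted me\n"
            "I want to remember: the user said hello\n"
            "Things I do: nothing\n"
            "Things I say: Hello!")

def parse_sections(response: str) -> dict:
    """Split the model's reply into the labelled sections."""
    sections = {}
    for line in response.splitlines():
        if ": " in line:
            key, _, value = line.partition(": ")
            sections[key] = value
    return sections

def step(observation: str, memory: str) -> tuple[dict, str]:
    """One iteration: build the prompt, query the model, carry memory forward."""
    prompt = PROMPT_TEMPLATE.format(observation=observation, memory=memory)
    parsed = parse_sections(call_llm(prompt))
    new_memory = parsed.get("I want to remember", memory)
    return parsed, new_memory

parsed, memory = step("the user says hello", "(nothing yet)")
print(parsed["Things I say"])  # shown to the user
print(memory)                  # fed back as "You remember" next iteration
```

The point is just that the "learning" lives entirely in the string threaded through `memory`; each call to `step` feeds the previous output back in as context.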
Not sure if that would qualify as AGI as we currently define it. Given a sufficiently good LLM with strong reasoning capabilities, such a setup might be able to do many of the things we currently expect an AGI to do, including planning and learning new knowledge and skills (by collecting and storing positive and negative examples in its "memory"). But its learning would be limited, and I'm sure that as soon as it exists we would agree it's not AGI.