My learning so far, to your point on memory being a limiting factor, is that the system is able to build on ideas over time. I'm not sure you'd classify that as 'self-learning', and I haven't really pushed it in the direction of 'introspection' at all.
Memory itself (in this form) is no silver bullet, though. However, as I add more 'tools', or 'agents', its ability to make 'leaps of discovery' does improve.
For example, I've been (very cautiously) allowing cron jobs to review a day's conversation, then spawn headless Claude Code instances to explore ideas or produce research on topics that I've been thinking about in the chat history.
That's not much different from the 'scheduled tasks' that Perplexity (and I think OpenAI) offer, but it definitely feels more like a singular entity. For now, though, it's absolutely limited by how smart the conversation history is.
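For what it's worth, the shape of that nightly job is roughly the following sketch. The log path, prompt wording, and schedule are all illustrative assumptions on my part, not my exact setup; the only real piece is that the `claude` CLI's `-p` flag runs a one-shot, non-interactive prompt.

```python
"""Hypothetical sketch of the nightly review job described above.

Assumes conversation logs live at ~/chats/YYYY-MM-DD.md (an invented
layout) and that the `claude` CLI is installed; its `-p` flag runs a
headless, one-shot prompt. Scheduled from cron with something like:
    0 2 * * * /usr/bin/python3 nightly_review.py
"""
import subprocess
from datetime import date
from pathlib import Path


def build_prompt(transcript: str) -> str:
    """Wrap the day's transcript in a research instruction."""
    return (
        "Review this conversation log and pick one idea worth deeper "
        "research. Produce a short write-up.\n\n---\n" + transcript
    )


def run_nightly_review(log_dir: Path = Path.home() / "chats") -> None:
    """Read today's log (if any) and hand it to a headless instance."""
    log = log_dir / f"{date.today().isoformat()}.md"
    if not log.exists():
        return  # nothing was discussed today, so nothing to explore
    prompt = build_prompt(log.read_text())
    # Headless, non-interactive invocation of Claude Code.
    subprocess.run(["claude", "-p", prompt], check=True)
```

The 'very cautiously' part matters: the spawned instance inherits whatever permissions the cron user has, so sandboxing it is on you.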
The Memento analogy you used does feel quite apt - there is a distinct sense of personhood available to something with memory that is inherently unavailable to a fresh context window.