I've spent a few weeks building and using a terminal LLM client based on that RLM paper that was floating around a little while ago. It's single-conversation, with a tiny sliding context window, plus a tool that basically fuzzy searches across our full interaction history. Its memory is 'better' than mine - but anything that is essentially RAG inherently will be.
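For a sense of what I mean by the fuzzy-search tool, here's a toy sketch using only the stdlib - the names (`Message`, `search_history`) and the scoring choice are illustrative, not my actual client:

```python
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class Message:
    role: str  # "user" or "assistant"
    text: str


def search_history(query: str, history: list[Message], k: int = 3) -> list[Message]:
    """Return the k past messages most similar to the query (stdlib fuzzy match)."""
    return sorted(
        history,
        key=lambda m: SequenceMatcher(None, query.lower(), m.text.lower()).ratio(),
        reverse=True,
    )[:k]


history = [
    Message("user", "thoughts on sliding context windows?"),
    Message("assistant", "a sliding window keeps only recent turns in context"),
    Message("user", "what should we have for dinner"),
]
hits = search_history("sliding window context", history, k=2)
```

The point is just that the window stays tiny while anything ever said remains one lookup away; a real version would use embeddings or a proper fuzzy matcher rather than `SequenceMatcher`.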
My learning so far, to your point on memory being a limiting factor, is that the system is able to build on ideas over time. I'm not sure you'd classify that as 'self-learning', and I haven't really pushed it in the direction of 'introspection' at all.
Memory in this form is no silver bullet, though. But as I add more 'tools' or 'agents', its ability to make 'leaps of discovery' does improve.
For example, I've been (very cautiously) allowing cron jobs to review a day's conversation, then spawn headless Claude Code instances to explore ideas or produce research on topics that I've been thinking about in the chat history.
That's not much different from the 'regular tasks' that Perplexity (and I think OpenAI) offer, but it definitely feels more like a singular entity. For now, though, it's absolutely limited by how good the conversation history is.
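The nightly loop is roughly this shape - the paths, log format, and prompt below are all made up, and I'm using Claude Code's `claude -p` print (non-interactive) mode as the headless entry point:

```shell
# Hypothetical crontab entry - run the review at 02:00 each night:
#   0 2 * * * $HOME/bin/review-day.sh

# review-day.sh (sketch): feed today's conversation log to a headless
# Claude Code run and save whatever research it produces.
LOG="$HOME/llm-client/logs/$(date +%F).jsonl"   # hypothetical log location
[ -f "$LOG" ] || exit 0
claude -p "Review this conversation log and research the topics I kept circling: $(cat "$LOG")" \
  > "$HOME/llm-client/research/$(date +%F).md"
```

In practice I gate this more cautiously than the sketch suggests (allow-listed tools, no write access outside its own directory), which is what I mean by 'very cautiously'.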
The Memento analogy you used does feel quite apt - there is a distinct sense of personhood available to something with memory that is inherently unavailable to a fresh context window.
I think a hidden problem, even if we solve memory, is curating what gets into it and how it is weighted. Even humans struggle with this - it's easy to store things and forget (or misjudge) the credibility of the source.
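One naive way to picture the weighting problem - a toy sketch where each stored item carries a source-credibility score that retrieval decays with age (every name and constant here is invented):

```python
from dataclasses import dataclass


@dataclass
class MemoryItem:
    text: str
    credibility: float  # 0..1, how much the source was trusted at write time
    age_days: float


def retrieval_weight(item: MemoryItem, half_life_days: float = 90.0) -> float:
    """Toy weighting: source credibility, decayed exponentially with age."""
    decay = 0.5 ** (item.age_days / half_life_days)
    return item.credibility * decay


fresh_rumor = MemoryItem("heard X might be true", credibility=0.3, age_days=1)
old_fact = MemoryItem("verified X from the docs", credibility=0.9, age_days=60)
# A well-sourced memory should outrank a fresh rumor even after it has aged.
```

The hard part, of course, is that nobody hands you the credibility scores - the system would have to assign and revise them itself, which is exactly the curation problem.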
I can envision LLMs getting worse upon being given a memory, until they can figure out how to properly curate it.