Yes, but memory hierarchy isn't the only thing plain transformer-based LLMs are handicapped by; there are many deficiencies. (For example, why must they do all their thinking upfront in thinking blocks rather than at any point where they become uncertain?) I'm also not sure why you link memory to introspection.
This is why so many people (especially those who think they understand LLM limitations) massively underestimate the future progress of LLMs: people everywhere can see these architectural problems and are working on fixing them. They aren't fundamental limitations of large DNN language models in general. Architecture can be adjusted. It turns out you can even put recurrence back in (SSMs) without giving up scalability.
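To make the SSM point concrete, here's a toy sketch (not any real SSM layer like S4 or Mamba, which use learned structured matrices) of why a linear recurrence doesn't force you back into sequential, RNN-style training: the same outputs can be computed either step by step or all at once.

```python
import numpy as np

# Toy scalar state-space recurrence: h_t = a*h_{t-1} + b*x_t, y_t = c*h_t.
# The parameters a, b, c here are made-up scalars for illustration.

def ssm_sequential(x, a, b, c):
    """O(T) sequential scan, like an RNN at inference time."""
    h = 0.0
    ys = []
    for xt in x:
        h = a * h + b * xt
        ys.append(c * h)
    return np.array(ys)

def ssm_parallel(x, a, b, c):
    """Same outputs, but computed all at once: unrolling gives
    h_t = sum_{k<=t} a^(t-k) * b * x_k, i.e. a convolution with the
    kernel (b, a*b, a^2*b, ...). This naive version is O(T^2); real
    SSMs use FFT convolution or associative scans to keep it cheap."""
    T = len(x)
    t = np.arange(T)
    powers = np.where(t[:, None] >= t[None, :],
                      a ** (t[:, None] - t[None, :]), 0.0)
    h = powers @ (b * x)
    return c * h

x = np.random.randn(16)
a, b, c = 0.9, 0.5, 1.2
assert np.allclose(ssm_sequential(x, a, b, c), ssm_parallel(x, a, b, c))
```

Because the recurrence is linear, training can use the parallel form and only inference needs the step-by-step one, which is roughly why recurrence comes back without the old RNN scaling penalty.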