I was going to say that an LLM can't do this, because it loses everything at the end of the session. But... could an LLM write out its "state" or "understanding" so that you could recover that for the next session? Do any LLMs currently have that ability?
Maybe in some loose sense, but if we read Naur's definition of "theory" in a stricter, more philosophical way, it can't fully. An LLM can't build a theory because it has no "real" experience; it's essentially just following rules. It also can't genuinely argue for or justify its choices the way a person can.
This is discussed in the "Ryle's Notion of Theory" section of the original essay.
It's very common, but (like most things with LLMs) it's not as deterministic as you might imagine. A common technique for agents is to have them write a "handoff" document (usually markdown) summarizing the previous session: goals, important files/links, decisions, and so on. There are dozens of proprietary ways of doing this; Claude Code automates the process with its /compact command and even auto-compacts as you approach the context limit. ChatGPT has done auto-compaction from the beginning, since it started out with a comically small context window.
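For concreteness, here's a minimal sketch of the handoff pattern. The provider (the OpenAI Python SDK), the filename HANDOFF.md, the model name, and the prompt wording are all illustrative assumptions, not how Claude Code or ChatGPT actually implement it internally:

```python
# Sketch of the "handoff document" pattern: at session end, distill the
# transcript into a markdown summary; at session start, reload it.
# Assumptions: OpenAI SDK, HANDOFF.md filename, example model name.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
HANDOFF = Path("HANDOFF.md")  # hypothetical filename

def llm_complete(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def end_session(transcript: str) -> None:
    # Ask the model to summarize the session for its future self.
    summary = llm_complete(
        "Summarize this coding session as a markdown handoff for a future "
        "session: goals and status, important files/links, key decisions, "
        "and next steps.\n\nTranscript:\n" + transcript
    )
    HANDOFF.write_text(summary)

def start_session(user_prompt: str) -> str:
    # Prepend the previous handoff so the new session "recovers" prior state.
    context = HANDOFF.read_text() if HANDOFF.exists() else "(no prior session)"
    return llm_complete(
        "Handoff from the previous session:\n" + context
        + "\n\nNew request:\n" + user_prompt
    )
```

Note that this only recovers whatever made it into the summary; anything the model didn't write down is gone, which is part of why the result is less deterministic than it sounds.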