It's very common, but (like most things with LLMs) it's not as deterministic as you might imagine. A common technique for agents is to have them create a "handoff" document (usually markdown) that summarizes the previous session: goals, important files/links, and so on. There are dozens of proprietary ways of doing this, and Claude Code automates the process with its /compact command, even auto-compacting as you approach the context limit. ChatGPT has done auto-compaction from the beginning, since it started out with a comically small context window.
The problem with auto-compaction is that you aren't given the opportunity to review the compacted understanding to confirm that it's correct and doesn't contain large omissions. I try to avoid letting it compact whenever possible and stick to plans that I review, because it seems to get extremely dumb after an auto-compaction.
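For what it's worth, the manual version of this is simple to do yourself. Here's a minimal sketch (not any tool's actual implementation): replace older messages with a handoff summary you get to read before continuing. The `summarize` callable stands in for whatever LLM call you'd use; it and the message shape are assumptions for illustration.

```python
def compact(history, summarize, keep_last=2):
    """Replace older messages with a reviewable handoff summary,
    keeping the most recent `keep_last` messages verbatim.

    `summarize` is any callable mapping a prompt string to a summary
    string (in practice, a call to your model of choice).
    """
    if len(history) <= keep_last:
        return list(history)
    older, recent = history[:-keep_last], history[-keep_last:]
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in older)
    handoff = summarize(
        "Summarize this session: goals, key files/links, open tasks.\n"
        + transcript
    )
    # Surface the summary so a human can confirm nothing important was lost
    # before it replaces the real history.
    print("--- handoff (review before continuing) ---")
    print(handoff)
    return [{"role": "system", "content": handoff}, *recent]

# Usage with a stub summarizer standing in for a real model call:
stub = lambda prompt: "Goal: fix auth bug. Key file: auth.py. Next: add tests."
messages = [
    {"role": "user", "content": "Help me fix the auth bug in auth.py"},
    {"role": "assistant", "content": "Sure, let's look at the token check."},
    {"role": "user", "content": "Now add tests."},
    {"role": "assistant", "content": "Here's a pytest sketch..."},
]
compacted = compact(messages, stub, keep_last=2)
```

The point is the review step: because you see the handoff before it replaces your history, omissions are caught while the original context still exists.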