
labrador · yesterday at 9:55 PM

I’m curious how this applies to systems like ChatGPT, which now have two kinds of memory: user-configurable memory (a list of facts or preferences) and an opaque chat history memory. If context is the core unit of interaction, it seems important to give users more control or at least visibility into both.

I know context engineering is critical for agents, but I wonder if it's also useful for shaping personality and improving overall relatability. I'm curious if anyone else has thought about that.


Replies

simonw · yesterday at 10:09 PM

I really dislike the new ChatGPT memory feature (the one that pulls details out of a summarized version of all of your previous chats, as opposed to the older memory feature, which records short notes to itself) for exactly this reason: it makes it even harder for me to control the context when I'm using ChatGPT.

If I'm debugging something with ChatGPT and I hit an error loop, my fix is to start a new conversation.

Now I can't be sure ChatGPT won't include notes from that previous conversation's context that I was trying to get rid of!
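The "start a fresh conversation" fix works precisely because, in a plain chat API, the context is an explicit list of messages you construct yourself. A minimal sketch of that idea (the `Conversation` class and its methods are illustrative, not a real API):

```python
# Sketch: context as an explicit, user-controlled message list.
# A fresh conversation provably contains none of the old turns --
# a guarantee that opaque server-side memory features can't offer.

class Conversation:
    """Holds the full context that would be sent to the model each turn."""

    def __init__(self, system_prompt: str):
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user(self, text: str) -> None:
        self.messages.append({"role": "user", "content": text})

    def context(self) -> list[dict]:
        # Everything the model would see: nothing hidden, nothing injected.
        return list(self.messages)


# Stuck in an error loop? Discard the old object and start over:
chat = Conversation("You are a debugging assistant.")
chat.add_user("Why does my build fail?")
chat.add_user("That fix didn't work either.")

fresh = Conversation("You are a debugging assistant.")
assert len(fresh.context()) == 1  # only the system prompt survives
```

With chat-history memory enabled, the provider can inject material from earlier conversations outside this list, which is exactly the loss of control being described.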

Thankfully you can turn the new memory thing off, but it's on by default.

I wrote more about that here: https://simonwillison.net/2025/May/21/chatgpt-new-memory/
