
unshavedyak today at 3:15 AM

Has any interface implemented a history-cleaning mechanism? I.e., with every chat message, focus on cleaning up dead ends in the conversation or irrelevant details. Like summarization, but organic to the topic at hand?

Most of the history would remain; it wouldn't try to summarize exactly, just prune and organize the history relative to the conversation path.


Replies

ithkuil today at 6:29 AM

"Every problem in computer science can be solved with another level of indirection."

One could argue that the attention mechanism in transformers is already designed to do that.

But you need to train it more specifically with that in mind if you want it to be better at damping attention to parts that are deemed irrelevant by the subsequent evolution of the conversation.

And that requires the black art of ML training.

Doing this as a hack on top of the chat product, on the other hand, feels more like engineering, and we're more familiar with that as a field.

nosefurhairdo today at 3:21 AM

I've had success having a conversation about requirements, asking the model to summarize those requirements as a spec for an implementation model, then passing that spec into a fresh context. I haven't seen any UI that does this automatically, but it's fairly trivial/natural to perform with existing tools.
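
A rough sketch of that handoff with the Anthropic Python SDK (the model name, prompts, and example requirements are illustrative placeholders, not anyone's actual workflow):

    import anthropic

    client = anthropic.Anthropic()       # reads ANTHROPIC_API_KEY from the environment
    MODEL = "claude-3-5-sonnet-latest"   # placeholder model name

    # 1. The requirements conversation, accumulated over however many turns.
    requirements_chat = [
        {"role": "user", "content": "I need a CLI tool that syncs a folder to S3..."},
        {"role": "assistant", "content": "A few questions: one-way or two-way sync? ..."},
        # ... more back-and-forth ...
    ]

    # 2. Ask the model to distill the conversation into a standalone spec.
    spec = client.messages.create(
        model=MODEL,
        max_tokens=2000,
        messages=requirements_chat + [{
            "role": "user",
            "content": "Summarize everything we agreed on as a concise implementation spec. "
                       "Include only the final decisions, not the discussion.",
        }],
    ).content[0].text

    # 3. Start a completely fresh context that contains nothing but the spec.
    implementation = client.messages.create(
        model=MODEL,
        max_tokens=4000,
        messages=[{"role": "user", "content": "Implement the following spec:\n\n" + spec}],
    ).content[0].text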

olalonde today at 4:17 AM

Not sure if that's what you mean, but Claude Code has a /compact command, which also gets triggered automatically when you exceed the context window.

The prompt it uses: https://www.reddit.com/r/ClaudeAI/comments/1jr52qj/here_is_c...

QuadmasterXLII today at 4:42 AM

The problem is that it needs to read the log in order to prune the log. So if there is garbage in the log that needs to be pruned to keep it from poisoning the main chat, that same garbage will poison the pruning model, and it will do a bad job of pruning.

hobofan today at 6:32 AM

Not a history-cleaning mechanism, but related to that: Cursor's most recent release introduced a feature to duplicate your chat (so you can safeguard yourself against poisoning and go back to an unpoisoned point in history), which seems like an admission of the same problem.

Benjammer today at 3:30 AM

I mean, you could build this, but it would just be a feature on top of a product abstraction of a "conversation".

Each time you press enter, you are spinning up a new instance of the LLM and passing in the entire previous chat text plus your new message, and asking it to predict the next tokens. It does this iteratively until the model produces a <stop> token, and then it returns the text to you and the PRODUCT parses it back into separate chat messages and displays it in your UI.
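
A minimal sketch of that loop, using the Anthropic Python SDK purely for illustration (the model name is a placeholder; the point is only that the model sees the entire message list on every turn):

    import anthropic

    client = anthropic.Anthropic()       # reads ANTHROPIC_API_KEY from the environment
    MODEL = "claude-3-5-sonnet-latest"   # placeholder; any chat model works the same way
    history = []                         # the "conversation" is nothing but this list

    def send(user_text: str) -> str:
        # Every turn re-sends the full history plus the new message;
        # the model itself keeps no state between calls.
        history.append({"role": "user", "content": user_text})
        resp = client.messages.create(model=MODEL, max_tokens=1024, messages=history)
        reply = resp.content[0].text     # the product parses/displays this as a new message
        history.append({"role": "assistant", "content": reply})
        return reply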

What you are asking the PRODUCT to do is edit your and its chat messages in the history, and then send that edited history along with your latest message. This is the only way to clean the context, because the context is nothing more than your messages and its previous responses, plus anything that tools have pulled in. I think it would be sort of a weird feature to add to a chat bot: each time you send a new message, it goes back through the entire history of your chat and starts editing the messages to prune out details. You would scroll up and see a different conversation; it would be confusing.
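
If a product did add that, the whole feature would boil down to rewriting that same message list before each real call. A rough sketch, reusing the client and MODEL from the snippet above (the pruning prompt and JSON round-trip are invented for illustration, not any product's actual design):

    import json

    def prune_history(history: list[dict]) -> list[dict]:
        # Ask a separate model call to rewrite the history, dropping dead ends
        # and irrelevant detail. In practice you'd validate the output, since
        # nothing guarantees the model returns clean JSON.
        resp = client.messages.create(
            model=MODEL,
            max_tokens=4000,
            messages=[{
                "role": "user",
                "content": "Here is a chat history as a JSON list. Remove dead ends and "
                           "details irrelevant to the current direction of the conversation, "
                           "keeping everything else intact. Return only the JSON list.\n\n"
                           + json.dumps(history),
            }],
        )
        return json.loads(resp.content[0].text)

    # Run before each real turn: the model never sees what was pruned, but what you
    # scroll back to in the UI no longer matches what was actually sent.
    history = prune_history(history)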

IMO, this is just part of prompt-engineering skill: keeping your context clean, or knowing how to "clean" it by branching/summarizing conversations.

kqr today at 4:20 AM

Isn't this what Claude workbench in the Anthropic console does? It lets the user edit both sides of the conversation history.