Hacker News

Benjammer today at 2:53 AM

It's nice to see a paper that confirms what anyone who has practiced using LLM tools already knows very well, heuristically. Keeping your context clean matters; "conversations" are only a construct of product interfaces, and they hurt the quality of responses from the LLM itself. Once your context is "poisoned," it will not recover, and you need to start fresh with a new chat.


Replies

Helmut10001 today at 3:10 AM

My experiences somewhat confirm these observations, but I also had one that was different. Two weeks of debugging IPSEC issues with Gemini. Initially, I imported all the IPSEC documentation from OPNsense and pfSense into Gemini and informed it of the general context in which I was operating (in reference to 'keeping your context clean'). Then I added my initial settings for both sides (sensitive information redacted!). Afterwards, I entered a long feedback loop, posting logs and asking and answering questions.

At the end of the two weeks, I observed that the LLM was much less likely to become distracted. Sometimes, I would dump whole forum threads or SO posts into it, and it would say "this is not what we are seeing here, because of [earlier context or finding]." I eliminated all dead ends logically and informed it of this (yes, it can help with the reflection, but I had to make the decisions). In the end, I found the cause of my issues.

This somewhat confirms what a user here on HN said a few days ago: LLMs are good at compressing complex information into simple information, but not at expanding simple ideas into complex ones. As long as my input was larger than the output (in either complexity or length), I was happy with the results.

I could have done this without the LLM. However, it was helpful in that it stored facts from the outset that I had either forgotten or been unable to retrieve quickly in new contexts. It also made it easier to identify time patterns in large log files, which helped me debug my site-to-site connection. I also optimized many other settings along the way, resolving more than just the most problematic issue. So, in addition to fixing my problem, I learned quite a bit. The 'state' was only occasionally incorrect about my current parameter settings, but this was always easy to correct. This confirms what others have already observed: if you know where you are going and treat it as a tool, it is helpful. However, don't try to offload decisions to it or let it direct you in the wrong direction.

Overall, about 350k tokens were used (roughly 300k words). Here's a related blog post [1] describing my overall path, though it does not correspond directly to this specific issue. (Please don't recommend WireGuard; I am aware of it.)

    [1]: https://du.nkel.dev/blog/2021-11-19_pfsense_opnsense_ipsec_cgnat/
Adambuilds today at 9:10 AM

I agree—once the context is "poisoned," it's tough to recover. A potential improvement could be having the LLM periodically clean or reset certain parts of the context without starting from scratch. However, the challenge would be determining which parts of the context need resetting without losing essential information. Smarter context management could help maintain coherence in longer conversations, but it's a tricky balance to strike. Perhaps another agent could do the job?
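The "another agent" idea can be sketched in a few lines. This is a minimal illustration under stated assumptions, not any product's actual mechanism: `summarize` is a hypothetical stand-in for a call to a second, cleaner model, and the token count is a rough character heuristic.

```python
# Sketch of periodic context cleanup: when the history grows past a
# token budget, compress the older half into a single summary message
# and keep recent turns verbatim. `summarize` is a placeholder for a
# call to a second "cleaner" agent.

def approx_tokens(msg: dict) -> int:
    # Rough heuristic: ~4 characters per token.
    return max(1, len(msg["content"]) // 4)

def summarize(messages: list[dict]) -> str:
    # Hypothetical second-agent call that would distill these turns
    # into a short factual digest.
    return f"Summary of {len(messages)} earlier messages."

def clean_context(history: list[dict], budget: int = 2000) -> list[dict]:
    if sum(approx_tokens(m) for m in history) <= budget:
        return history
    cut = len(history) // 2  # compress the older half
    digest = {"role": "system", "content": summarize(history[:cut])}
    return [digest] + history[cut:]

history = [{"role": "user", "content": "x" * 400}] * 30  # ~3000 tokens
cleaned = clean_context(history)
print(len(cleaned))  # 16: one summary message plus the 15 recent turns
```

The hard part the comment points at, deciding which details are essential, is exactly what the placeholder `summarize` hides.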

morsecodist today at 3:03 AM

This matches my experience exactly. "Poisoned" is a great way to put it. I find that once something has gone wrong, all subsequent responses are bad. This is why I am iffy on ChatGPT's memory features. I don't notice it causing any huge problems, but I don't love how it pollutes my context in ways I don't fully understand.

b800h today at 6:29 AM

I've been saying for ages that I want to be able to fork conversations, so I can experiment with the direction an exchange takes without irrevocably poisoning a promising well. I can't do this with ChatGPT. Is anyone aware of a provider that offers this as a feature?
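Worth noting: with the raw chat-completion-style APIs (as opposed to the ChatGPT UI) this is already possible, because the server keeps no conversation state; the client sends the full message list on every request, so forking is just copying the list. A sketch, where `send` is a placeholder for a real API client call:

```python
import copy

# A conversation under a stateless chat API is just a list of messages,
# so forking is copying the list. `send` stands in for a real API call
# that would return the assistant's next reply.

def send(messages: list[dict]) -> dict:
    return {"role": "assistant",
            "content": f"(reply to: {messages[-1]['content']})"}

base = [
    {"role": "user", "content": "Help me debug my IPSEC tunnel."},
    {"role": "assistant", "content": "Sure, show me the logs."},
]

# Two forks diverge from the same prefix without touching it.
fork_a = copy.deepcopy(base) + [{"role": "user", "content": "Try rekeying."}]
fork_b = copy.deepcopy(base) + [{"role": "user", "content": "Check NAT-T."}]

fork_a.append(send(fork_a))
fork_b.append(send(fork_b))
print(len(base), len(fork_a), len(fork_b))  # 2 4 4
```

What the comment asks for is essentially this, surfaced in the product UI instead of left to API users.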

CobrastanJorji today at 5:06 AM

An interesting little example of this problem is initial prompting, which is effectively just a permanent, hidden context that can't be cleared. On Twitter right now, the "Grok" bot has recently begun frequently mentioning "White Genocide," which is, y'know, odd. This is almost certainly because someone recently adjusted its prompt to tell it what its views on white genocide are meant to be, which for a perfect chatbot wouldn't matter when you ask it about other topics, but it DOES matter. It's part of the context. It's gonna talk about that now.

unshavedyak today at 3:15 AM

Has any interface implemented a history-cleaning mechanism? I.e., with every chat message, focus on cleaning up dead ends in the conversation or irrelevant details. Like summarization, but organic to the topic at hand?

Most history would remain; it wouldn't try to summarize exactly, just prune and organize the history relative to the conversation path.

amelius today at 6:50 AM

I suppose that the chain-of-thought style of prompting that is used by AI chat applications internally also breaks down because of this phenomenon.

CompoundEyes today at 3:55 AM

Agreed, "poisoned" is a good term. I'd like to see "version control" for conversations via the API and UI that lets you roll back to a previous point, or clone from that spot into a new conversation. Even an accidental typo, or having to clarify a previous message, skews the probabilities of future responses.
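One way such version control could be modeled (a sketch, not any vendor's feature): store messages as a tree, where each node points at its parent. The visible conversation is the path from a node back to the root; rollback means selecting an earlier node, and cloning means branching from it, so a bad turn simply stays off the new branch.

```python
from dataclasses import dataclass
from typing import Optional

# "Version control" for a conversation: each message node points at its
# parent, so the full history is the path from a node back to the root.
# Rolling back is choosing an earlier node; branching is attaching a
# new message to it.

@dataclass
class Node:
    role: str
    content: str
    parent: Optional["Node"] = None

def extend(parent: Optional[Node], role: str, content: str) -> Node:
    return Node(role, content, parent)

def history(node: Optional[Node]) -> list[str]:
    # Walk parent pointers to the root, then reverse into reading order.
    out = []
    while node is not None:
        out.append(f"{node.role}: {node.content}")
        node = node.parent
    return list(reversed(out))

root = extend(None, "user", "How do I configure IPSEC?")
a1 = extend(root, "assistant", "Start with phase 1 settings.")
typo = extend(a1, "user", "waht about NAT?")             # bad turn
fixed = extend(a1, "user", "What about NAT traversal?")  # branch from a1 instead

print(history(fixed))  # the typo turn is simply not on this branch
```

Nothing is ever deleted; the UI would just choose which leaf's path to display and to send to the model.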

djmips today at 4:27 AM

Happens with people too if you think about it.

veunes today at 6:33 AM

What surprised me is how early the models start locking into wrong assumptions.

MattGaiser today at 3:00 AM

Yep. I regretted leaving memory on, as it poisoned my conversations with irrelevant junk.
