Hacker News

pacjam · last Tuesday at 10:58 PM

IMO context poisoning is only fatal when you can't see what's going on (e.g., black-box memory systems like ChatGPT memory). The memory system used in the OP is fully white-box: you can see every raw LLM request, and see exactly how the memory influenced the final prompt payload.
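To make the "white-box" idea concrete, here is a minimal sketch of a memory layer that logs the full raw request payload before it goes to the model, so you can inspect exactly which memory entries were injected. All names here (`MemoryStore`, `build_payload`, the naive relevance filter) are hypothetical illustrations, not the OP's actual implementation.

```python
import json
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Toy memory store; a real system would rank or embed entries."""
    entries: list = field(default_factory=list)

    def remember(self, fact: str) -> None:
        self.entries.append(fact)

    def relevant(self, query: str) -> list:
        # Naive keyword-overlap filter, purely for illustration.
        words = query.lower().split()
        return [e for e in self.entries if any(w in e.lower() for w in words)]


def build_payload(memory: MemoryStore, user_msg: str) -> dict:
    injected = memory.relevant(user_msg)
    messages = []
    if injected:
        messages.append({
            "role": "system",
            "content": "Known context:\n" + "\n".join(f"- {m}" for m in injected),
        })
    messages.append({"role": "user", "content": user_msg})
    payload = {"model": "some-model", "messages": messages}
    # The white-box part: dump the raw payload so the memory's influence
    # on the final prompt is fully visible before the request is sent.
    print(json.dumps(payload, indent=2))
    return payload


memory = MemoryStore()
memory.remember("User prefers answers in Python.")
build_payload(memory, "How do I parse JSON in Python?")
```

If a memory entry poisons the context, it shows up verbatim in the logged payload, which is what makes it diagnosable rather than fatal.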


Replies

handfuloflight · last Wednesday at 12:11 AM

That's significant; it means you can then improve it in your own environment.
