
visarga · today at 8:22 AM

I turned 100MB of my own chat logs into a RAG memory and was surprised that I didn't like using it much. Why? It floods the LLM with so much prior thinking that it loses the creative spark. I now realize the sweet spot is in the middle - don't recall everything; use strategic disclosure to get the most out of the AI. LLM memory should be like a sexy dress - not too long, not too short. You get the most creative outputs when you hide part of your prior thinking and let the model infer it back.
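
That middle ground is easy to sketch in code: cap retrieval at a few chunks and drop weak matches, instead of dumping the whole archive into the prompt. A minimal numpy sketch, assuming chunk embeddings are already computed (strategic_recall, k, and min_sim are illustrative names and values, not part of the original setup):

    import numpy as np

    def strategic_recall(query_vec, chunk_vecs, chunks, k=3, min_sim=0.35):
        """Recall at most k clearly relevant chunks, not everything."""
        # Cosine similarity between the query and every stored chunk.
        sims = chunk_vecs @ query_vec / (
            np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
        )
        top = np.argsort(sims)[::-1][:k]  # best k candidates only
        kept = [chunks[i] for i in top if sims[i] >= min_sim]
        # Deliberately partial context: the model infers the rest.
        return "\n---\n".join(kept)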


Replies

zwnow · today at 9:59 AM

I am not an AI enthusiast, but I get what you're saying. I occasionally use ChatGPT because Google has been pretty much enshittified. I often don't like the things it tells me, and I definitely don't like it complimenting everything I do, but that's something other people seem to like... In my experience, starting a fresh chat after a while of back and forth can really help, so I agree with you. Sometimes little to no prior context is exactly the perspective you need.