Hacker News

ivzak · yesterday at 11:15 PM · 1 reply

You're right: poor compression can cause that. But skipping compression altogether is also risky: once the context gets too large, models can fail to use it properly even when the needed information is there. So the way to go is to compress without stripping useful context, and that's what we're doing.


Replies

backscratches · yesterday at 11:22 PM

Edit your LLM-generated comment, or at least make it output in a less annoying LLM tone. It wastes our time.