Hacker News

jaggederest | today at 2:55 AM

Reminds me of https://news.ycombinator.com/item?id=15886728

Do not argue with the LLM, for it is subtle and quick to anger, and finds you crunchy with ketchup.

These are, broadly, all context management issues: when the model starts to go off track, it's because it has too much, too little, or the wrong context, and you have to fix that, usually by resetting it and priming it correctly the next time. This is why it's advantageous not to "chat" with the robots - treat them as an English-to-code compiler, not a coworker.

Chat to produce a spec, save the spec, clear the context, and feed only the spec back in as context. If there are issues, adjust the spec, rinse, and repeat. Steering the process mid-flight is (a) not repeatable and (b) exacerbates the issue: lots of back and forth and "you're absolutely correct" dilutes the instructions you wanted to give.
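The loop described above can be sketched in a few lines. This is a minimal illustration, not a real integration: `generate` is a hypothetical stand-in for any stateless LLM completion call (swap in your actual client), and `validate` is whatever check you run on the output (tests, linter, compiler).

```python
def generate(prompt: str) -> str:
    """Hypothetical placeholder for a single stateless LLM call.

    In a real setup this would call your model API with `prompt` as the
    ONLY context - no accumulated chat history.
    """
    return f"// code generated from spec:\n{prompt}"


def build_from_spec(spec: str, validate, max_attempts: int = 3) -> str:
    """Feed only the spec in; on failure, fix the spec, not the conversation."""
    for _ in range(max_attempts):
        code = generate(spec)          # fresh context each attempt: spec only
        ok, feedback = validate(code)  # e.g. run the test suite
        if ok:
            return code
        # The repeatable fix: fold the lesson back into the spec itself,
        # instead of arguing with the model mid-conversation.
        spec += f"\n\nAdditional requirement: {feedback}"
    raise RuntimeError("spec still failing after max attempts; revise it by hand")
```

The point of the structure is that the spec, not the chat transcript, is the durable artifact: every attempt starts from a clean context, and every correction survives a reset.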


Replies

en-tro-py | today at 3:28 AM

Exactly: never argue with an LLM unless the debate itself is the point...

It's just speedrunning context rot.