> Too little and the agent loses coherence.
Obviously you don't have to throw the data away: if the initial summary is missing some important detail, the agent can ask for additional information from the subthread/task/tool call.
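A minimal sketch of what I mean (all names hypothetical): keep the full tool output in a store, hand the model only a short summary, and let the agent fetch the raw detail later if the summary turns out to be missing something.

```python
# Hypothetical sketch: retain full tool output out-of-band and feed the
# model only a summary, with a handle to pull the raw detail back on demand.

class ToolOutputStore:
    def __init__(self):
        self._full = {}  # call_id -> full output

    def record(self, call_id, full_output, max_lines=5):
        """Store the full output; return a short summary for the prompt."""
        self._full[call_id] = full_output
        lines = full_output.splitlines()
        if len(lines) <= max_lines:
            return full_output
        summary = "\n".join(lines[:max_lines])
        omitted = len(lines) - max_lines
        return f"{summary}\n... [{omitted} more lines; fetch with call_id={call_id!r}]"

    def fetch(self, call_id):
        """The agent asks for the detail the summary left out."""
        return self._full[call_id]

store = ToolOutputStore()
raw = "\n".join(f"line {i}" for i in range(500))
prompt_chunk = store.record("grep-1", raw)
# prompt_chunk is 6 lines instead of 500; the full output stays
# retrievable via store.fetch("grep-1") if the agent needs it.
```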
> Instead of feeding 500 lines of tool output back into the next prompt
This applies to everything with LLMs.
Somewhere along the way, most people seem to have picked up the idea that "more text == better understanding," whereas reality seems to be the opposite: the fewer tokens you can give the LLM, keeping only the absolute essentials, the better.
The trick is finding the balance, but the "more == better" assumption many users operate under seems to make things worse, not better.