Hacker News

anvevoice · today at 10:35 AM · 3 replies

[flagged]


Replies

embedding-shape · today at 12:08 PM

> Instead of feeding 500 lines of tool output back into the next prompt

Applies to everything with LLMs.

Somewhere along the way, it seems like most people got the idea that "more text == better understanding," whereas reality seems to be the opposite: the fewer tokens you can give the LLM, keeping only the absolute essentials, the better.

The trick is to find the balance, but the "more == better" assumption many users seem to operate under tends to make things worse, not better.
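
A minimal sketch of what "only the absolute essentials" could look like in practice, assuming a simple character budget; trim_tool_output and MAX_TOOL_CHARS are made-up names for illustration, not anything from the thread:

    # Hypothetical sketch: keep only the essential part of a tool result
    # before it goes back into the next prompt, instead of all 500 lines.

    MAX_TOOL_CHARS = 2000  # illustrative budget, tune per model/context window

    def trim_tool_output(raw: str, max_chars: int = MAX_TOOL_CHARS) -> str:
        """Return a compact version of a tool result.

        Keeps the head and tail of the output (where errors and final
        results usually live) and drops the middle when over budget.
        """
        if len(raw) <= max_chars:
            return raw
        head = raw[: max_chars // 2]
        tail = raw[-(max_chars // 2):]
        omitted = len(raw) - len(head) - len(tail)
        return f"{head}\n... [{omitted} characters omitted] ...\n{tail}"

    # Usage: feed the trimmed version into the next prompt, not the full dump.
    # messages.append({"role": "tool", "content": trim_tool_output(raw_output)})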

Tiberium · today at 12:20 PM

Another new LLM slop account on HN...

formerly_proven · today at 11:39 AM

> Too little and the agent loses coherence.

Obviously you don't have to throw the data away; if the initial summary was missing some important detail, the agent can ask for additional information from the subthread/task/tool call.
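
A rough sketch of that idea, with entirely hypothetical names (record_tool_output, fetch_result_detail, an in-memory store): put only a short summary plus an id into the context, and give the agent a tool to pull details back out of the stored output when the summary falls short.

    # Hypothetical sketch of "summarize but keep the data": store the full
    # tool output under an id, put only a short summary (plus the id) into
    # the agent's context, and expose a lookup tool the agent can call if
    # the summary turns out to be missing a detail.

    import uuid

    _full_results: dict[str, str] = {}  # in-memory store; could be disk/db

    def record_tool_output(raw: str, summarize) -> str:
        """Store the raw output and return a compact context entry."""
        result_id = str(uuid.uuid4())[:8]
        _full_results[result_id] = raw
        summary = summarize(raw)  # e.g. a cheap model call or a heuristic
        return f"[result {result_id}] {summary}"

    def fetch_result_detail(result_id: str, query: str = "") -> str:
        """Tool the agent can call to re-read the stored output.

        `query` is an optional hint (e.g. 'lines mentioning FAILED') used
        here to filter lines before returning text.
        """
        raw = _full_results.get(result_id)
        if raw is None:
            return f"No stored result with id {result_id}"
        if query:
            hits = [ln for ln in raw.splitlines() if query.lower() in ln.lower()]
            return "\n".join(hits) or f"No lines matching {query!r}"
        return raw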