> Instead of feeding 500 lines of tool output back into the next prompt
Applies to everything with LLMs.
Somewhere along the way, most people seem to have concluded that "more text == better understanding," when reality seems to be the opposite: the fewer tokens you give the LLM, keeping only the absolute essentials, the better.
The trick is finding the balance, but the "more == better" assumption many users operate under makes things worse, not better.
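As a concrete sketch of that idea (a hypothetical helper, not from any particular framework): instead of feeding hundreds of lines of tool output straight back into the context, condense it first, keeping only the lines the model actually needs, such as errors plus a small head/tail sample.

```python
def condense_tool_output(output: str, max_lines: int = 20) -> str:
    """Condense verbose tool output before it re-enters the model's context.

    Keeps error/warning/failure lines plus a head/tail sample of the rest,
    instead of passing all N lines back to the LLM.
    """
    lines = output.splitlines()
    if len(lines) <= max_lines:
        return output  # already small enough, pass through untouched

    # Lines that usually matter most to the model.
    important = [l for l in lines
                 if any(k in l.lower() for k in ("error", "warn", "fail"))]

    # Spend the remaining budget on a sample from the start and end.
    budget = max_lines - len(important)
    head = lines[: max(budget // 2, 1)]
    tail = lines[-max(budget // 2, 1):]

    # Merge and deduplicate while preserving order.
    seen, result = set(), []
    for l in head + important + tail:
        if l not in seen:
            seen.add(l)
            result.append(l)

    note = f"... ({len(lines)} lines total, condensed)"
    return "\n".join(result[:max_lines] + [note])
```

The exact heuristics (which keywords, how big a sample) are arbitrary here; the point is that the condensed version carries the same signal in a fraction of the tokens.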