Hacker News

gpm · yesterday at 1:45 PM · 0 replies · view on HN

Because the LLM types input and reads output faster than I do: the amount of input I have to give the LLM is less than what I'd have to feed the search tool invocations directly, and the amount of output I have to read back from the LLM is less than the raw output of those search tool invocations.

To be fair, it's also more likely to mess up than I am, but for skimming search results to get an idea of what the code base looks like, the speed/accuracy tradeoff is often worth it.

And if it were just a search tool this would be barely worth it, but the effects compound as you chain more tools together. For example: reading and running searches plus reading and running compiler output is worth more than double just reading and running searches.
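To make the compounding concrete, here's a minimal sketch of what "chaining" looks like from the human's side: the intermediate search output feeds straight into a compile check, so only the final verdict needs to be read. All names here (`search_tool`, `compile_tool`, `agent_step`) are illustrative stand-ins, not a real agent API.

```python
def search_tool(query, codebase):
    """Grep-like search: return every line mentioning the query."""
    return [line for line in codebase if query in line]

def compile_tool(snippet):
    """Stand-in for a compiler check: flag a known-undefined name."""
    if "undefined_fn" in snippet:
        return "error: undefined_fn not found"
    return "ok"

def agent_step(codebase, query):
    """One chained step: search, then check the first hit.
    The human never reads the intermediate search hits,
    only the final verdict -- that's where the time savings
    compound as more tools are added to the chain."""
    hits = search_tool(query, codebase)
    if not hits:
        return "no matches"
    return compile_tool(hits[0])

# Toy "codebase" with one good line and one broken one.
codebase = [
    "def parse(x): return x.strip()",
    "result = undefined_fn(parse('  a  '))",
]
print(agent_step(codebase, "parse"))         # checks the definition line
print(agent_step(codebase, "undefined_fn"))  # surfaces the broken line
```

With a single tool, the human reads all the hits; with two chained tools, the search output is consumed by the next tool instead, which is why the savings are more than additive.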

It's definitely an art to figure out when it's better to use an LLM, and when it's just going to be an impediment, though.

(Which isn't to say I agree that "context engineering" is anything other than "prompt engineering" rebranded, or that the term has any staying power.)