
worik · today at 1:17 AM · 1 reply

One reason it is concerning:

I am mostly worried that I am wrong in my opinion that "agents" are a bad paradigm for working with LLMs.

I have been using LLMs since I got my first OpenAI API key, and I think "human in the loop" is what makes them special.

I have massively increased my fun and significantly increased my productivity using just the raw chat interface.

It seems to me that building agents to do work I am responsible for is the opposite of fun, and a productivity sink: I still have to check every output for the rare but inevitable bananas mistakes these agents make, and correct them when they occur.


Replies

adastra22 · today at 5:43 AM

The thing is, the same agent that made the bananas mistake is also quite good at catching that mistake (if called again with fresh context). This results in convergence on working, non-bananas solutions.
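A minimal sketch of that generate/review/revise loop, assuming a hypothetical call_llm(prompt) helper standing in for whatever stateless chat-completion client you use (not any particular vendor's API); because each call carries no prior conversation history, the review pass really does start from fresh context:

    # Sketch of the generate-review-revise loop described above.
    # call_llm is a hypothetical stand-in for any stateless chat call;
    # the reviewer sees only the task and the candidate solution,
    # not the reasoning that produced it.

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your chat client here")

    def solve_with_review(task: str, max_rounds: int = 3) -> str:
        solution = call_llm(f"Solve this task:\n{task}")
        for _ in range(max_rounds):
            review = call_llm(
                f"Task:\n{task}\n\nCandidate solution:\n{solution}\n\n"
                "List any mistakes, or reply exactly OK if it is correct."
            )
            if review.strip() == "OK":
                break  # converged on a solution the fresh-context reviewer accepts
            solution = call_llm(
                f"Task:\n{task}\n\nPrevious attempt:\n{solution}\n\n"
                f"Reviewer feedback:\n{review}\n\nReturn a corrected solution."
            )
        return solution

Whether the loop converges on a non-bananas solution still depends on the reviewer actually catching the mistake, which is the claim being made above.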