Hacker News

rglover · yesterday at 7:39 PM

> Babysitter: Keep a human in the loop to catch errors before they propagate.

This is the only way to guarantee AI usage doesn't burn you. Any automation beyond this is just theater, no matter how much that hurts to hear/undermines your business model.

A bird sings, a duck quacks. You don't expect the duck to start singing now, do you?


Replies

kelseyfrog · yesterday at 7:45 PM

I'm not sure I agree. Like all stochastic processes, LLM errors can be quantified. That makes each use case a risk-reward tradeoff, where users can decide whether the tradeoff makes sense for them. There are scenarios where errors are acceptable because the risks are low or the rewards make up for them. This is a process-engineering problem where business and technology specifics matter.
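The risk-reward framing above can be sketched as a simple expected-value calculation. All numbers below are hypothetical, chosen only to illustrate how the same error rate can justify automation in one setting and a human in the loop in another:

```python
# Illustrative sketch of the risk-reward tradeoff; all figures are made up.

def expected_value(reward: float, error_rate: float, error_cost: float) -> float:
    """Expected value per task when automating with an error-prone model."""
    return (1 - error_rate) * reward - error_rate * error_cost

# Low-stakes use case: errors are cheap, so a 5% error rate still pays off.
ev_low_stakes = expected_value(reward=1.0, error_rate=0.05, error_cost=2.0)

# High-stakes use case: same error rate, but each error is very costly.
ev_high_stakes = expected_value(reward=1.0, error_rate=0.05, error_cost=500.0)

print(ev_low_stakes)    # 0.85  -> positive: automation makes sense
print(ev_high_stakes)   # -24.05 -> negative: keep a human in the loop
```

The point is that "keep a human in the loop" isn't a universal rule or theater; it falls out of the numbers whenever error cost dominates reward.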
