Hacker News

HiPhish · today at 8:12 PM

"Hey, you know that thing no one understands how it works and has no guarantee of not going off the rails? Let's give it unrestricted access over everything!" Statements dreamed up by the utterly deranged.

I can see the value of agentic AI, but only if it has been fenced in, can only delegate actions to deterministic mechanisms, and if every destructive decision has to be confirmed. A good example I once read about was an AI that parses customer requests: if it detects a request that the user is entitled to (e.g. cancel subscription), it sends a message like "Our AI thinks you want to cancel your subscription, is this correct?" and only after confirmation by the user is the action carried out. To be reliable, the AI itself must not determine whether the user is entitled to cancel; it may only guess the user's intention and then pass a message to a non-AI deterministic service. This way users don't have to wait until a human gets around to reading the message.
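That flow can be sketched in a few lines. This is purely illustrative (all function names are hypothetical): the AI only guesses intent, the user confirms, and a deterministic, non-AI service carries out the action and enforces entitlement.

```python
def classify_intent(message: str) -> str:
    """Stand-in for the AI intent classifier; in reality an LLM call."""
    if "cancel" in message.lower():
        return "cancel_subscription"
    return "unknown"

def cancel_subscription_service() -> str:
    """Deterministic, non-AI backend that checks entitlement and acts."""
    return "Subscription cancelled."

def handle_request(message: str, confirm) -> str:
    intent = classify_intent(message)
    if intent == "cancel_subscription":
        # The AI only *guessed* the intent; nothing happens until the
        # user confirms, and the action itself is deterministic code.
        if confirm("Our AI thinks you want to cancel your subscription, "
                   "is this correct?"):
            return cancel_subscription_service()
        return "No action taken."
    return "Request queued for a human."
```

The key property: even a wrong guess by the classifier is harmless, because the worst case is an incorrect question, never an incorrect action.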

There is still the problem of human psychology though. If you have an AI that's 90% accurate and you have a human confirm each decision, the human's mind will start drifting off and treat 90% as if it's 100%.


Replies

Terr_ · today at 8:45 PM

Right, user-confirmed "translation" is the responsible way to put LLMs into general computing flows, as opposed to stuffing them into everything willy-nilly like informational asbestos mad-lib machines powered by hope and investor speculation.

Another example might be taking a layperson's description "articles about Foo but not about Bar published in the last two months" and using it to suggest (formal, deterministic) search parameters which the user can view and hopefully understand before approving.
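A sketch of that shape (names and the canned translation are hypothetical; the translation step stands in for an LLM call): the model only proposes a structured query, which is shown to the user for approval before it ever reaches the search backend.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class SearchParams:
    """Formal, deterministic query the user reviews before approving."""
    include: list[str]
    exclude: list[str]
    published_after: date

def suggest_params(description: str) -> SearchParams:
    """Stand-in for the LLM translating the layperson's request.
    Hard-coded here for the example 'Foo but not Bar, last two months'."""
    return SearchParams(
        include=["Foo"],
        exclude=["Bar"],
        published_after=date.today() - timedelta(days=60),
    )

params = suggest_params("articles about Foo but not about Bar, "
                        "published in the last two months")
print(f"Proposed query: +{params.include} -{params.exclude} "
      f"since {params.published_after}")  # user approves or rejects this
```

The search engine itself only ever sees the approved `SearchParams`, never free-form model output.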

Granted, that becomes way trickier if the translated suggestion can be "evil" somehow, such as proposing SQL when the dataset has been poisoned so that it "recommends" something that destroys data or changes a password hash... But even that isn't nearly the same degree of malpractice as making it YOLO everything.
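One partial mitigation is a deterministic pre-filter between the model's SQL suggestion and the approval step. A minimal sketch (illustrative only, not a real SQL parser and certainly not production-grade): allow only single-statement, read-only SELECTs through.

```python
import re

# Keywords that indicate a destructive or privilege-changing statement.
FORBIDDEN = re.compile(
    r"\b(DROP|DELETE|UPDATE|INSERT|ALTER|TRUNCATE|GRANT)\b",
    re.IGNORECASE,
)

def is_safe_suggestion(sql: str) -> bool:
    """Deterministic gate: reject anything that isn't a lone SELECT."""
    stmt = sql.strip().rstrip(";")
    if ";" in stmt:  # reject multi-statement suggestions outright
        return False
    if not stmt.upper().startswith("SELECT"):
        return False
    return not FORBIDDEN.search(stmt)
```

A keyword blocklist like this is easy to evade, which is the commenter's point: the guard reduces the blast radius but doesn't remove the need for user review of the suggested query.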