Hacker News

MarginalGainz · today at 7:01 PM · 4 replies

This resonates with what I'm seeing in the enterprise adoption layer.

The pitch for 'Agentic AI' is enticing, but for mid-market operations, predictability is the primary feature, not autonomy. A system that works 90% of the time but hallucinates or leaks data the other 10% isn't an 'agent', it's a liability. We are still in the phase where 'human-in-the-loop' is a feature, not a bug.
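
For illustration, here's a minimal sketch of what "human-in-the-loop as a feature" can look like in practice: an approval gate that lets low-risk agent actions run automatically but routes anything risky to a person. The action names, risk scores, and threshold are all hypothetical, just to show the shape of the gate:

    from dataclasses import dataclass

    # Hypothetical illustration: risky agent actions are queued for explicit
    # human approval instead of executing autonomously.

    @dataclass
    class ProposedAction:
        description: str   # what the agent wants to do
        risk_score: float  # 0.0 (harmless) .. 1.0 (irreversible / data-leaking)

    RISK_THRESHOLD = 0.3  # assumed cutoff: anything above this needs a human

    def execute(action: ProposedAction) -> None:
        print(f"executing: {action.description}")

    def human_approves(action: ProposedAction) -> bool:
        answer = input(f"Approve '{action.description}'? [y/N] ")
        return answer.strip().lower() == "y"

    def run_with_gate(actions: list[ProposedAction]) -> None:
        for action in actions:
            if action.risk_score <= RISK_THRESHOLD:
                execute(action)                          # low risk: automatic
            elif human_approves(action):
                execute(action)                          # high risk: human signed off
            else:
                print(f"skipped: {action.description}")  # human vetoed

    if __name__ == "__main__":
        run_with_gate([
            ProposedAction("draft a summary email", 0.1),
            ProposedAction("send customer records to an external API", 0.9),
        ])

The point of the gate isn't sophistication; it's that the 10% failure mode never executes without someone accountable saying yes.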


Replies

ygjb · today at 7:30 PM

> A system that works 90% of the time but hallucinates or leaks data the other 10% isn't an 'agent', it's a liability.

That strongly depends on whether the liability/risk to the business is internalized or externalized. Businesses take steps to mitigate internal risks while paying lip service to risks around data and interactions where the risk is externalized. In the physical world that externalization usually takes the form of a waiver; in the digital world it's usually done through a ToS or EULA.

The big challenge is that the risks posed by Agentic AI in its current incarnation are not well understood by individuals or even large businesses, and most people will happily click through, thinking "I trust $vendor to do the right thing" or "I trust my employer to prevent me from doing the wrong thing."

Employers are enticed by the siren call of workforce/headcount/cost reductions, and in some cases decision-makers are happy to take the risk of a future loss from an AI issue that only materializes after they've moved on, found a new role, been promoted, or transferred responsibility, so long as it delivers the boost of a good quarterly report now.

barrenko · today at 8:39 PM

I don't want an agent, I want a principal.

supriyo-biswas · today at 7:26 PM

[flagged]

witnessme · today at 7:23 PM

Can't agree more