> A system that works 90% of the time but hallucinates or leaks data the other 10% isn't an 'agent', it's a liability.
That depends heavily on whether the liability/risk to the business is internalized or externalized. Businesses take steps to mitigate internalized risks while paying lip service to the data and interaction risks that are largely externalized. In the physical world that externalization usually takes the form of a waiver; in the digital world it's usually a ToS or EULA.
The big challenge is that the risks of agentic AI in its current incarnation are not well understood by individuals or even large businesses, and most people will happily click through thinking "I trust $vendor to do the right thing" or "I trust my employer to prevent me from doing the wrong thing."
Employers are enticed by the siren call of workforce/headcount/cost reductions, and in some businesses the decision-makers are happy to accept the risk of a future loss from an AI failure, one that will only materialize after they've moved on, found a new role, been promoted, or transferred responsibility, in exchange for the boost of a good quarterly report.