Hacker News

nemomarx · yesterday at 3:50 PM · 4 replies

To have that you need a human to take responsibility somewhere, right?

I think people want to assign responsibility to the "agent" so they can wash their hands of it in various ways. I can't see it working, though.


Replies

cdblades · yesterday at 4:37 PM

Exactly. I have said several times that the largest and most lucrative market for AI and agents in general is liability-laundering.

It's just that you can't advertise that, or you ruin the service.

And it already does work. See the sweet, sweet deal Anthropic got recently (and if you think $1.5B isn't a good deal, look at the range of compensation they could have been subject to had they gone to court and lost).

Remember the story about Replit's LLM deleting a production database? All the stories were AI goes rogue, AI deletes database, etc.

If Amazon RDS just wiped a production DB out of nowhere, with no reason, the story wouldn't be "Rogue hosted database service deletes DB"; it would be "AWS randomly deletes production DB" (and AWS would take a serious reputational hit because of it).

arnon · yesterday at 3:55 PM

If I am a company that builds agents and I sell one to someone, and that someone then loses money because the agent did something it wasn't supposed to: who's responsible?

Me, as the person who sold it? OpenAI, whose models I use underneath? Anthropic, which performs some of the work too? Is my customer responsible themselves?

These are questions that classic contracts don't usually cover, because things tend to be more deterministic with static code.

IIAOPSW · today at 2:07 AM

People assign responsibility to "agents" in contracts to wash their hands of things in various ways all the time, and it usually works.

Wait is this still about AI?

Animats · yesterday at 6:55 PM

These people want to assign it to the customer. See above.