
okinok today at 2:17 PM

>all delegation involves risk. with a human assistant, the risks include: intentional misuse (she could run off with my credit card), accidents (her computer could get stolen), or social engineering (someone could impersonate me and request information from her).

One of the differences in risk here is that I think you get some legal protection if your human assistant misuses the card, or if it gets stolen. But with the OpenClaw bot, I am unsure whether any insurance company or bank will side with you if the bot drains your account.


Replies

oersted today at 3:16 PM

Indeed, even if in principle AI and humans can do similar harm, we have very good mechanisms for making it quite unlikely that a human will actually do so.

These disincentives are built on the fact that humans have physical needs they must cover to survive, and they enjoy having those needs well met and not worrying about them. Humans also very much like to be free, dislike pain, and want a good reputation with the people around them.

It is exceedingly hard to pose similar threats to a being that doesn’t care about any of that.

Although, to be fair, we also have other soft but reasonably effective means of making it unlikely that an AI will behave badly in practice. These methods are fragile, but they are getting better quickly.

In either case it is really hard to eliminate the possibility of harm, but you can make it unlikely and predictable enough to establish trust.

iepathos today at 3:12 PM

Thought the same thing. There is no legal recourse if the bot drains the account and donates it to charity. The legal system's response to that is: don't give non-deterministic bots access to your bank account and 2FA. There is no further recourse. No bank or insurance company will cover this, and rightfully so. If he wanted to guard himself somewhat, he'd give the bot only a credit card he could cancel or stop payments on, the same bare minimum he'd give a human assistant.

skybrian today at 3:56 PM

Banks will try to get out of it, but in the US, Regulation E could probably be used to get the money back, at least by someone who is aware of it.

And OpenClaw could probably help :)

https://www.bitsaboutmoney.com/archive/regulation-e/

bobson381 today at 3:13 PM

...Does this person already have a human personal assistant whom they are in the process of replacing with Clawdbot? Is the assistant their own, or provided for work?

kaicianflone today at 3:34 PM

That liability gap is exactly the problem I’m trying to solve. Humans have contracts and insurance. Agents have nothing. I’m working on a system that adds economic stake, slashing, and "auditability" to agent decisions so risk is bounded before delegation, not argued about after. https://clawsens.us
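
To make the idea concrete, here is a minimal sketch of what a stake-and-slash escrow for agent spending could look like. All names and numbers below are hypothetical illustrations, not the clawsens.us API:

    # Minimal sketch of a stake-and-slash escrow for agent actions.
    # All names here are hypothetical illustrations, not the clawsens.us API.
    from dataclasses import dataclass, field


    @dataclass
    class AgentEscrow:
        """Holds an agent's economic stake and bounds what it may spend."""
        stake: float                      # funds the agent forfeits on violation
        spend_limit: float                # max total spend the principal authorizes
        spent: float = 0.0
        audit_log: list = field(default_factory=list)

        def authorize(self, amount: float, memo: str) -> bool:
            """Approve an action only if it stays within the agreed bound."""
            if self.spent + amount > self.spend_limit:
                self.slash(f"attempted overspend: {memo}")
                return False
            self.spent += amount
            self.audit_log.append(("ok", amount, memo))
            return True

        def slash(self, reason: str) -> float:
            """Forfeit the stake to the principal and record why."""
            forfeited, self.stake = self.stake, 0.0
            self.audit_log.append(("slashed", forfeited, reason))
            return forfeited


    if __name__ == "__main__":
        escrow = AgentEscrow(stake=500.0, spend_limit=200.0)
        escrow.authorize(50.0, "buy domain")          # allowed
        escrow.authorize(300.0, "donate to charity")  # over the bound -> slashed
        for entry in escrow.audit_log:
            print(entry)

The point is simply that the bound and the forfeit are agreed before delegation, so a "bot drained my account" dispute becomes a mechanical payout from the stake rather than an argument after the fact.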
