Hacker News

CaptainFever · 12/09/2024 · 2 replies

Perhaps in certain cases requiring someone to sign off, and take the blame if anything happens, would help alleviate this problem. Much like how engineers need to sign off on construction plans.

(Layman here, obviously.)


Replies

jerf · 12/09/2024

If the legal system is not itself either fundamentally corrupted or completely razzle-dazzled by the AI hype... and I mean those as serious caveats that are at least somewhat in question... then there are going to be some very disappointed people losing a lot of money, or even going to jail, when they find out that as far as the legal system is concerned, there already is, legally speaking, a person or an entity composed of persons (a corporation) responsible for these actions. It is already not legally possible to act like a bull in a china shop and then cover it over by pointing to your internal AI and disclaiming all responsibility.

The legal system already acts that way when the issue is in its own wheelhouse: https://www.reuters.com/legal/new-york-lawyers-sanctioned-us... The lawyers did not escape by just chuckling in amusement, throwing up their hands, and saying "AIs! Amirite?"

The system is slow and the legal tests haven't happened yet, but personally I see every reason to believe the legal system will decide that "the AI" never does anything, and that "the AI did it!" will provide absolutely zero cover for any action or liability. If anything the effect will be negative: hooking an AI directly up to some action and providing no human oversight will come to be seen as ipso facto negligence.

I actually consider this one of the more subtle reasons this AI bubble is substantially overblown. The premise of the bubble is that AI will replace humans wholesale: huzzah, cost savings galore! But suppose companies replace, say, customer support with AIs and deploy their wildest fantasies with no humans in the loop who might turn whistleblower: making it literally impossible to contact a human, literally impossible to get a solution, and so forth. If customers then push these AIs into giving false or dangerous answers, or into agreeing to certain bargains or whathaveyou, the end result is that you trade lots of expensive support calls for a company-ending class-action lawsuit, and the utility of buying AI services to replace your support staff sharply goes down. Not necessarily to zero; it doesn't have to go to zero. It just makes replacing your support staff with a couple dozen graphics cards a much more incremental advantage rather than a multiplicative one, and the bubble is priced like it's hugely multiplicative.

wholinator2 · 12/09/2024

[flagged]
