
koolba yesterday at 3:04 PM

> The law needs to stand up and make an example here, otherwise this will just continue and at some point a real disaster will occur due to AI.

What does it mean to “make an example”?

I’m for cleaning up AI slop as much as the next natural born meat bag, but I also detest a litigious society. The types of legal action that would stop this in the future would immediately be weaponized.


Replies

aDyslecticCrow yesterday at 3:54 PM

If a human published an article making this exact same claim as Gemini, the author could be sued and the plaintiff would have a pretty good case.

But when Gemini does it, it's a "mistake by the algorithm". AI is used as a responsibility-diversion machine.

This is a rather harmless example. But what about dangerous medical advice? What about openly false advertising? What about tax evasion? If an AI does it, is it okay because nobody is responsible?

If applying a proper chain of liability to AI output makes some uses of AI impossible, so be it.

poulpy123 yesterday at 3:23 PM

I don't like a litigious society, and I don't know whether this particular case would cross my threshold, but companies are responsible for the AI they provide, and they should not be able to hide behind "the algorithm" when there are issues.

Cthulhu_ yesterday at 3:12 PM

> The types of legal action that stops this in the future would immediately be weaponized.

As it should; this is misinformation and/or slander. The disclaimer is not good enough. A few years ago, Google and most of social media were united in fact-checking and fighting "fake news". Now they push AI-generated information that uses authoritative language at the very top of, e.g., search results.

The disclaimer is moot if people consider AI to be authoritative anyway.

recursive yesterday at 3:06 PM

Weapons against misinformation are good weapons. Bring on the weaponization.
