Incendiary and false headline aside, no sane person would suggest that a hardware store should be held liable for selling an axe that was later used by an axe murderer, unless the store knew what was about to unfold.
Unless AI companies knowingly participate in murder plots, they should not be liable.
Is Microsoft liable for providing Notepad, a product which can be used to write detailed and specific mass murder plots?
Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?
Liability should depend on your participation in the event, of course. Otherwise you wouldn't be able to buy an axe, or a car, or use the internet at all. A closer analogy is ISPs not being liable for copyright infringement done by users, and subsequently not being required to police such activity for rights holders.
All of those are false equivalences. Let me give you a few better analogies.
Selling an axe that's known to be so defective that it breaks upon use and impales anybody nearby. Even worse, it's marketed as great for axe murder.
Or a big tech company like Microsoft selling software for planning a mass murder, complete with indoctrination material and checklists of things to be done.
Or an auto company like Toyota selling a car that's known to accelerate uncontrollably at inopportune moments and advertising it as great for hit-and-run campaigns.
Now let's consider a few relevant examples.
An AI model sold for planning military attacks, knowing that it sometimes selects completely innocent targets.
Or an AI model sold to families, claiming that it's safe. Meanwhile, it discreetly encourages the teenage son to commit suicide.
Or selling a financial trading AI that's known to make disastrous decisions at times.
Or selling a 'self driving' car, knowing that its autopilot frequently makes fatal mistakes.
I know that I'm supposed to assume good intentions and not make accusations on HN. So let me make this rather obvious observation instead: some people here are dismal failures at making arguments that are consistent and free of logical fallacies, especially when it comes to questionable practices by big tech.
Watching people champion the absolution of billionaires who create a chatbot that can't spell "strawberry", and who then argue it should be allowed to choose who lives and dies, wasn't what I expected at the turn of the decade.
Beautiful.
> Incendiary and false headline aside
The text of the bill literally starts with "Creates the A.I. Safety Act. Provides that a developer of a frontier AI model shall not be held liable for critical harms caused by the frontier model if (conditions)", and defines "critical harms" as "death or serious injury of 100 or more people or at least $1,000,000,000 of damages". The headline is, IMO, shockingly accurate.
> Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?
No, but they are liable for selling a car with defective brakes, even if they don't know the brakes are defective. And if Monsanto (now part of Bayer) has to pay millions in compensation for causing cancer with a product they tested to hell and back, then I don't see how it's any different when the thing causing harm is an AI, just because the developers pinky swear that it's safe.