Hacker News

Topfi · today at 1:50 PM

Quoting the original bill [0]:

> "Critical harm" means the death or serious injury of 100 or more people or at least $1,000,000,000 of damages to rights in property caused or materially enabled by a frontier model, through either: (1) the creation or use of a chemical, biological, radiological, or nuclear weapon; or (2) engaging in conduct that: (A) acts with no meaningful human intervention; and (B) would, if committed by a human, constitute a criminal offense that requires intent, recklessness, or negligence, or the solicitation or aiding and abetting of such a crime.

I don't know what I expected from this title, but I was hoping it was more sensationalized. No need in this case, unfortunately.

> (a) A developer shall not be held liable for critical harms if the developer did not intentionally or recklessly cause the critical harms and the developer: (1) published a safety and security protocol on its website that satisfies the requirements of Section 15 and adhered to that safety and security protocol prior to the release of the frontier model; (2) published a transparency report on its website at the time of the frontier model's release that satisfies the requirements of Section 20. The requirements of paragraphs (1) and (2) do not apply if the developer does not reasonably foresee any material difference between the frontier model's capabilities or risks of critical harm and a frontier model that was previously evaluated by the developer in a manner substantially similar to this Act.

However one thinks regulation for this should be drafted, I doubt providing a PDF is what most have in mind.

[0] https://trackbill.com/bill/illinois-senate-bill-3444-ai-mode...


Replies

roywiggins · today at 1:55 PM

I think my favorite part is that, because it only applies to "frontier models", if a smaller model is blamed for such harm, it seemingly doesn't immunize the creators at all. That makes very little sense unless you specifically want to make it illegal to not be OpenAI (et al).

Similarly, if a frontier model kills merely 99 people, the developer isn't covered by this. So go big or go home, I guess?

jmyeet · today at 2:46 PM

Shifting liabilities from corporations to the public coffer is what companies do. You'll often hear this described as "privatizing profits and socializing losses". Let me introduce you to the Price-Anderson Act of 1957 [1]. It's been repeatedly extended, most recently with the ADVANCE Act [2]. This limits liability for the nuclear power industry in a whole range of ways:

- It moves jurisdiction from state courts to federal court. As an aside, the party of "states' rights" has been doing something similar in recent weeks to stop states from regulating prediction markets [3];

- All actions are consolidated into a single claim;

- That claim has an inflation-adjusted absolute limit, which is somewhere around $500 million (I'm not sure of the exact 2026 figure);

- Any damages beyond that are partially shared by the industry and an industry self-funded insurance program;

- The industry as a whole has a total liability limit, also inflation-adjusted. I believe this is around $10 billion.

For context, the cleanup from Fukushima is likely to take a century and the cost may well exceed $1 trillion for a single incident [4]. So if this happened in the US, the government would be on the hook for almost all of it.

So I have two points here:

1. If you oppose efforts to shift liability from AI companies to the government with legislation such as this (as I do), how do you feel about the nuclear industry doing the exact same thing? and

2. Minor point but I noticed in searching for the latest details, Gemini made factual errors, stating that "the Act is set to expire in 2025" when it was extended in 2024 until 2045. Always check AI's work, people.

[1]: https://en.wikipedia.org/wiki/Price%E2%80%93Anderson_Nuclear...

[2]: https://en.wikipedia.org/wiki/ADVANCE_Act

[3]: https://www.pbs.org/newshour/politics/federal-government-sue...

[4]: https://cleantechnica.com/2019/04/16/fukushimas-final-costs-...

fhdkweig · today at 2:42 PM

My first thought was that this must be related to the automated weapons issue that got Anthropic on Trump's shitlist. It makes sense that a company that will eventually be asked to build weapons that choose their own targets will want to limit liability when those weapons inevitably kill the "wrong" person.

Also, I am disturbed by the fact that in all the discussions on this topic during the last month, no one has mentioned the magic word "Skynet". This is clearly a terrible idea. And if a company needs immunity from liability, they know it is a terrible idea.

Skynet's flaw wasn't that it killed humans. It was a military machine specifically designed to kill humans. If it only killed "the enemy", it would have been hailed a marvelous success. It was only considered a failure because it killed the wrong humans.

troupo · today at 1:55 PM

It's the "guns don't kill people" equivalent for AIs.

---

Before the pitchforks and downvotes:

- yes, it's a deliberate simplification

- yes, the issue is complex, because you can also argue that you can't blame the authors of encyclopedias and chemistry books for bombs and poisons, so why would we blame providers of LLMs

- and no, this bill is only being introduced to cover everyone's asses when, not if, LLM use results in large-scale issues.
