It amuses me how contradictory the two bullet points from the article are.
- Strict limits on governmental regulation, wherein any restrictions must be demonstrably necessary and narrowly tailored to a compelling public safety or health interest.
- Mandatory safety protocols for AI-controlled critical infrastructure, including a shutdown mechanism and compulsory annual risk management reviews.
How were the necessity and scope of the second rule shown to satisfy the first rule?
> any restrictions must be demonstrably necessary and narrowly tailored to a compelling public safety or health interest
This should be the default policy on regulation. We shouldn't need a specific law to enact it.
The second rule is clearly intended as a shield and a distraction. It's there to pretend the law serves the public, when in reality it's designed to defend datacenter builders from the public interest. Politicians get to talk about meaningless sci-fi concepts like Skynet and how to defeat it with off switches, instead of real issues like noise pollution, tax giveaways, electricity prices, and mass surveillance.
Probably one applies to individuals while the other, as described, applies to infrastructure.
Orwell called it "doublethink" (the word "doublespeak" came later, modeled on his coinage)
You can read the actual bill here: https://legiscan.com/MT/text/SB212/id/3212152/Montana-2025-S...
In essence, it doesn't really mandate anything; it says you should have a plan, and only for "critical infrastructure facilities":
"Section 4. Infrastructure controlled by critical artificial intelligence system. (1) When critical infrastructure facilities are controlled in whole or in part by a critical artificial intelligence system, the deployer shall develop a risk management policy after deploying the system that is reasonable and considers guidance and standards in the latest version of the artificial intelligence risk management framework from the national institute of standards and technology, the ISO/IEC 42001 artificial intelligence standard from the international organization for standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems. A plan prepared under federal requirements constitutes compliance with this section."
So it's essentially lip service to AI safety, probably to quell some objections to a bill that otherwise limits regulation of tech platforms.