Archive: https://archive.is/j1XTl
I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations or key ideas for approaching how we should think about AI regulation?
What we should be doing is surfacing well-defined points about AI regulation and discussing them, instead of fighting proxy wars for opaque groups with infinite money. It feels like we're at the point where nobody is even pretending that people's opinions on this topic are relevant; it's just a matter of pumping in enough money and flooding the zone.
Personally, I remain very uncertain about the topic; I don't have well-defined or clearly actionable ideas. But I'd love to hear what regulations or mental models other HN readers are using to navigate this topic. Sam Altman and Elon Musk have both floated vague ideas of how AI is somehow going to magically result in UBI and a communist utopia, but nobody has ever pressed them for details. If they really believe this, they could make some significant legally binding commitments, right? Notice how nobody ever asks: who is going to own the models, robots, and data centers in this UBI paradise? It feels a lot like Underpants Gnomes: (1) Build AGI, (2) ???, (3) Communist Utopia and UBI.
There are several concrete AI regulations out there, some merely proposed and some already passed into law. The most recent prominent example of a passed law is California SB53, whose summary you can read here: https://carnegieendowment.org/emissary/2025/10/california-sb...
You should ignore literally everything Musk says. He is incredibly unintelligent relative to his status.
Musk wants extreme law and order and will beat down any protests. His X account is full of posts calling for filling up prisons. This is the highlight so far:
https://xcancel.com/elonmusk/status/1992599328897294496#m
Notice that the retweeted Will Tanner post also denigrates EBT. Musk does not give a damn about UBI. The unemployed will do slave labor, go to prison, or, if they revolt, they will be hanged. It is literally all out there by now.
Elon Musk explicitly said in his latest Joe Rogan appearance that he advocates for the smallest government possible: just the army, the police, and the courts. He did NOT mention social care or health care.
That doesn't quite align with UBI, unless he envisions the AI companies giving UBI directly to people (when has that ever happened?).
Algorithmic accountability. Not just for AI, but also for social media, advertising, voting systems, etc. Algorithmic impact assessments need to become mandatory.
> I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations or key ideas for approaching how we should think about AI regulation?
There's a vocal minority calling for AI regulation, but what they actually want often strikes me as misguided:
"Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.
"Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.