Hacker News

padolsey · today at 11:08 AM

I'm surprised to see so little coverage of AI legislation news here tbh. Maybe there's an apathy and exhaustion to it. But if you're developing AI stuff, you need to keep on top of this. This is a pretty pivotal moment. NY has been busy with RAISE (frontier AI safety protocols, audits, incident reporting), S8420A (must disclose AI-generated performers in ads), GBL Article 47 (crisis detection & disclaimers for AI chatbots), S7676B (protects performers from unauthorized AI likenesses), NYC LL144 (bias audits for AI hiring tools), SAFE for Kids Act [pending] (restricts algorithmic feeds for minors). At least three of those are relevant even if your app only _serves_ people in NY. It doesn't matter where you're based. That's just one US state's laws on AI.

It's kinda funny to see the oft-held animosity towards the EU's heavy-handed regulations when navigating US state law is a complete minefield of its own.


Replies

raincole · today at 11:25 AM

> I'm surprised to see so little coverage of AI legislation news here tbh.

Because no one believes these laws or bills or acts or whatever will be enforced.

But I actually believe they will be, in the worst way possible: honest players will be punished disproportionately.

mbreese · today at 3:18 PM

None of those bills/laws involve legislating publishing though. This bill would require a disclaimer on something published. That's a freedom of speech issue, so it's going to be tougher to enforce and keep from getting overturned in the courts. The question here is what limits the government can place on what a company publishes, regardless of how the content is generated.

IMO, it's a much tougher problem (legally) than protecting actors from AI infringement on their likeness. AI services are easier to regulate; published AI-generated content, much more difficult.

The article also mentions efforts by news unions or guilds. This might be a more effective mechanism. If a union/guild required members to add a tagline in their content/articles, this would have a similar effect - showing what is and what is not AI content without restricting speech.

Balinares · today at 1:22 PM

Don't ding the amusingly scoped animosity, it's very convenient: we get to say stuff like "Sure, our laws may keep us at the mercy of big corps unlike these other people, BUT..." and have a ready rationalization for why our side is actually still superior when you look at it. Imagine what would happen if the populace figured it's getting collectively shafted in a way others may not.

totetsu · today at 1:08 PM

AI View from Simmons+Simmons is a very good newsletter on the topic of AI regulation: https://www.simmons-simmons.com/en/publications/clptn86e8002...

venkat223 · today at 1:37 PM

All video and other content should have an AI stamp, as most of YouTube is AI generated. Almost like memes.

vasco · today at 11:12 AM

~Everything will use AI at some point. This is like requiring a disclaimer for using Javascript back when it was introduced. It's unfortunate but I think ultimately a losing battle.

Plus, if you want to mandate it, hidden markers (steganography) emitted directly by the model, so people can independently verify which model generated the text, are probably the only feasible way to check whether articles were written by humans. But it's not like humans are impartial when writing news anyway, so I don't even see the point of that.
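To sketch the statistical-watermark idea (this is a toy, not any production scheme: the word-level "vocabulary", the hash-based green lists, and the always-pick-green sampler are all made up for illustration; real schemes bias token logits inside the model's sampler):

```python
import hashlib
import math
import random

def green_set(prev_token: str, vocab: list[str]) -> set[str]:
    # Deterministically mark ~half the vocabulary "green", keyed on the
    # previous token. Anyone with the key (here, the hash scheme) can
    # recompute this list without access to the model.
    return {
        w for w in vocab
        if hashlib.sha256(f"{prev_token}|{w}".encode()).digest()[0] % 2 == 0
    }

def watermark_zscore(tokens: list[str], vocab: list[str], gamma: float = 0.5) -> float:
    # z-score of the observed green-token count vs. the gamma fraction
    # expected in unwatermarked text. Large positive z => likely watermarked.
    n = len(tokens) - 1
    hits = sum(cur in green_set(prev, vocab) for prev, cur in zip(tokens, tokens[1:]))
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))

vocab = [f"w{i}" for i in range(100)]
rng = random.Random(0)

# "Watermarked" text: this toy sampler only ever emits green tokens.
wm = ["w0"]
for _ in range(60):
    wm.append(rng.choice(sorted(green_set(wm[-1], vocab))))

# Plain text: tokens drawn uniformly, ignoring the green lists.
plain = ["w0"]
for _ in range(60):
    plain.append(rng.choice(vocab))

z_wm = watermark_zscore(wm, vocab)
z_plain = watermark_zscore(plain, vocab)
```

The all-green sequence scores z ≈ 7.7 here, while uniformly sampled text typically hovers near zero, which is what makes the check independently verifiable. The obvious caveats from the comment still apply: paraphrasing degrades the signal, and it only proves which sampler emitted the text, not whether the content is honest.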

jMyles · today at 11:16 AM

> I'm surprised to see so little coverage of AI legislation news here tbh.

I think the reason is that most people don't believe, at least on sufficiently long time scales, that legacy states are likely to be able to shape AI (or, for that matter, the internet). The legitimacy of the US state appears to be in a sort of free-fall, for example.

It takes a long time to fully (or even mostly) understand the various machinations of legislative action (let alone executive discretion, and then judicial interpretation), and in that time, regardless of what happens in various capitol buildings, the tests pass and the code runs - for better and for worse.

And even amidst a diversity of views/assessments of the future of the state, there seems to be near consensus regarding the underlying impetus: obviously humans and AI are distinct, and hearing the news from a human, particularly a human with a strong web-of-trust connection in your local society, is massively more credible. What's not clear is whether states have a role to play in lending clarity to the situation, or whether that will happen of the internet's own accord.