Hacker News

A new bill in New York would require disclaimers on AI-generated news content

395 points | by giuliomagnifico today at 9:56 AM | 148 comments

Comments

padolsey | today at 11:08 AM

I'm surprised to see so little coverage of AI legislation news here tbh. Maybe there's an apathy and exhaustion to it. But if you're developing AI stuff, you need to keep on top of this. This is a pretty pivotal moment. NY has been busy with RAISE (frontier AI safety protocols, audits, incident reporting), S8420A (must disclose AI-generated performers in ads), GBL Article 47 (crisis detection & disclaimers for AI chatbots), S7676B (protects performers from unauthorized AI likenesses), NYC LL144 (bias audits for AI hiring tools), SAFE for Kids Act [pending] (restricts algorithmic feeds for minors). At least three of those are relevant even if your app only _serves_ people in NY. It doesn't matter where you're based. That's just one US state's laws on AI.

It's kinda funny to see the oft-held animosity towards the EU's heavy-handed regulations when navigating US state law is a complete minefield of its own.

jfengel | today at 3:12 PM

What I'd really like to see is a label on original reporting.

Even beyond AI, the vast majority of news is re-packaging information you got from somewhere else. AI can replace the re-writers, but not the original journalists, people who spoke to primary sources (or who were themselves eyewitnesses).

Any factual document should reference its sources. If not, it should be treated skeptically, regardless of whether AI or a human is doing that.

An article isn't automatically valueless just because it's synthesized. It can focus and contextualize, regardless of whether it's human or AI written. But it should at the very least be able to say "This is the actual fact of the matter", with a link to it. (And if AI has hallucinated the link, that's a huge red flag.)

kaicianflone | today at 5:36 PM

This feels like a symptom of a deeper issue: we’re treating AI outputs as if they’re authoritative when they’re really just single, unaccountable generations. Disclaimers help, but they don’t fix the decision process that produced the content in the first place.

One approach we’ve been exploring is turning high-stakes AI outputs (like news summaries or classifications) into consensus jobs: multiple independent agents submit or vote under explicit policies, with incentives and accountability, and the system resolves the result before anything is published. The goal isn’t “AI is right,” but “this outcome was reached under clear rules and can be audited.”

That kind of structure seems more scalable than adding disclaimers after the fact. We’re experimenting with this idea on an open source CLI at https://consensus.tools if anyone’s interested in the underlying mechanics.
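The consensus.tools mechanics aren't described in the comment, but the core idea — multiple independent agents voting under explicit rules, with the result and its audit trail resolved before publication — can be sketched in a few lines (the function name, quorum, and threshold values here are hypothetical, not the tool's actual API):

```python
from collections import Counter

def resolve_consensus(votes, quorum=3, threshold=0.66):
    """Resolve independent agent outputs into one auditable outcome.

    votes: list of (agent_id, label) pairs from independent agents.
    Returns (winning_label, audit_record); raises if no consensus.
    """
    if len(votes) < quorum:
        raise ValueError(f"need at least {quorum} votes, got {len(votes)}")
    tally = Counter(label for _, label in votes)
    label, count = tally.most_common(1)[0]
    if count / len(votes) < threshold:
        raise ValueError(f"no label reached {threshold:.0%} agreement: {dict(tally)}")
    # The audit record is what makes the outcome reviewable after the fact.
    audit = {"votes": list(votes), "tally": dict(tally), "winner": label}
    return label, audit

label, audit = resolve_consensus([
    ("agent-a", "publishable"),
    ("agent-b", "publishable"),
    ("agent-c", "needs-review"),
    ("agent-d", "publishable"),
])
```

Here 3 of 4 agents agree, clearing the 66% threshold, so the result resolves to "publishable" while the dissenting vote stays in the audit record.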

Llamamoe | today at 10:57 AM

Ideally, trying to pass anything AI-generated as human-made content would be illegal, not just news, but it's a good start.

dweekly | today at 2:04 PM

I've begun an AI content disclosure working group at W3C if folks are interested in helping to craft a standard that allows websites to voluntarily disclose the degree to which AI was involved in creating all or part of the page. That would enable publishers to be compliant with this law as well as the EU AI Act's Article 50.

https://www.w3.org/community/ai-content-disclosure/

https://github.com/dweekly/ai-content-disclosure
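The working group's draft vocabulary isn't quoted here, so as a purely hypothetical sketch, a page-level disclosure might be a small machine-readable record embedded in the page (the field names and `meta` tag name below are invented for illustration, not the group's actual spec):

```python
import json

# Hypothetical disclosure record: how much of the page involved AI.
disclosure = {
    "version": "0.1",
    "scope": "article",      # whole page vs. a named section
    "mode": "ai-assisted",   # e.g. none / ai-assisted / ai-generated
    "details": "Drafted by a human; AI used for copy editing.",
}

# One possible embedding: a <meta> tag that crawlers, browsers, or
# regulators' tooling could read without parsing the article body.
meta_tag = ('<meta name="ai-content-disclosure" '
            f"content='{json.dumps(disclosure)}'>")
print(meta_tag)
```

A standard shape like this is what would let one disclosure satisfy both the NY bill and the EU AI Act's Article 50 transparency obligations at once.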

TheAceOfHearts | today at 11:05 AM

I'm worried that this will lead to a Prop 65 [0] situation, where eventually everything gets flagged as having used AI in some form. Unless it suddenly becomes a premium feature to have 100% human written articles, but are people really going to pay for that?

> substantially composed, authored, or created through the use of generative artificial intelligence

The lawyers are gonna have a field day with this one. This wording makes it seem like you could do light editing and proof-reading without disclosing that you used AI to help with that.

[0] https://en.wikipedia.org/wiki/1986_California_Proposition_65

NietTim | today at 1:44 PM

New York also wants 3d printers to know when they are printing gun parts. Sure, these initiatives mean well, but they would only work when "the good ones" chose to label their content as AI generated/gun parts. There will _never_ be a 100% surefire, non-invasive way to know whether an article was (in part) AI generated, the same way that "2d printers" (lol) refuse to photocopy fiat currency, to circle back to the 3d printer argument.

IMO: it's already too late, and effort should instead be focused on recognizing this and quickly moving on to prevention through education, instead of trying to smother it with legislation. It is just not going away.

RobotToaster | today at 12:41 PM

I can see this ending up like Prop 65 warnings. Every website will have in the footer: "this website may contain content known to the state of New York to be AI generated"

VMG | today at 11:08 AM

Step 2: outlets slap this disclaimer on all content, regardless of AI usage, making it useless

Step 3: regulator prohibits putting label on content that is not AI generated

Step 4: outlets make sure to use AI for all content

Let's call it the "Sesame effect"

ameliaquining | today at 5:22 PM

I'm not convinced that this law, if it passed, would survive a court challenge on First Amendment grounds. U.S. constitutional law generally doesn't look kindly on attempts to regulate journalism.

wateralien | today at 11:04 AM

They need to enforce this with very large fines.

delichon | today at 12:27 PM

> In addition, the bill contains language that requires news organizations to create safeguards that protect confidential material — mainly, information about sources — from being accessed by AI technologies.

So clawdbot may become a legal risk in New York, even if it doesn't generate copy.

And you can't use AI to help evaluate which data AI is forbidden to see, so you can't use AI over unknown content. This little side-proposal could drastically limit the scope of AI usefulness overall, especially as the idea of data forbidden to AI tech expands to other confidential material.

ericzawo | today at 5:49 PM

Good.

rektlessness | today at 12:40 PM

Broad, ambiguous language like 'substantially composed by AI' will trigger overcompliance, rendering disclosures meaningless — but maybe that was the plan.

jaredklewis | today at 5:32 PM

This is so dumb. Name literally any problem caused by AI generated content (there are dozens to choose from) and I will explain why this law will make absolutely no impact on that issue.

Now articles from organizations with legitimate journalists and fact checkers like the NYT, WSJ, or The Economist will need an "AI generated" badge because they used an AI assistant and they have risk-averse legal departments. This will be gleefully pointed out by every brain dead Twitter conspiracy theorist, Breitbart columnist, 911 truther substack writer, and Russian spam bot as they happily spew unbadged drivel out into the world. Thanks so much New York!

AI doesn’t make bad news content. Complete disregard for objective reality does. I’ll take an ai assisted human that actually cares about truth over an unassisted partisan hooligan every time.

If this is the best our legislatures can come up with we are so utterly fucked…

nomercy400 | today at 11:16 AM

You might as well place it next to the © 2026, at the bottom of every page.

rasjani | today at 10:59 AM

The Finnish public broadcasting company YLE has the same rule. Even if they only do cleanups of still images, they need to mark that the article has AI-generated content.

talkingtab | today at 4:35 PM

I'm beginning to suspect HN also needs such a bill. Maybe it is not AI content, but so many prominent posts on HN feel like advertising. Perhaps the good thing about AI is that it decreases the trust level. Or is that really a good thing?

[Edit: spelling sigh]

chrisjj | today at 1:32 PM

Why limit this to news? Equally deserving of protection is e.g. opinion.

bluebxrry | today at 1:15 PM

Instead of calling Claude a clanker again, which he can't control, how about we give everyone a fair shot this time with a bill that requires the news to not suck in the first place.

ddtaylor | today at 11:01 AM

Oregon kind of already has this; they just don't enforce their laws.

cmiles8 | today at 12:02 PM

This is a good idea. Although most AI-written content is also still pretty obvious. It consistently has a certain feel that just seems off.

TuringNYC | today at 1:20 PM

What happens if I use linear regression on a chart? Where does one draw the line on "AI"?

asah | today at 11:05 AM

We've seen this movie: see California Prop 65 warnings on literally every building.

It also doesn't work to penalize fraudulent warnings - they simply include a harmless bit of AI to remain in compliance.

kgwxd | today at 1:52 PM

AI Generated or News? You can't have both.

nh43215rgb | today at 11:26 AM

Federal level would be the best, but this is a start.

seydor | today at 12:36 PM

That's the equivalent of having a disclaimer "This article was written using MS Word". Utterly useless in this day and age

PlatoIsADisease | today at 10:53 AM

In 10-20 years all this AI disclaimer stuff is going to be like 'don't use wikipedia, it could lie!'

Status Quo Bias is a real thing, and we are seeing those people in meltdown with the world changing around them. They think avoiding AI, putting disclaimers on it, etc... will matter. But they aren't being rational, they are being emotional.

The economic value is too high to stop and the cat is out of the bag with 400B models on local computers.

bill_joy_fanboy | today at 11:55 AM

LOL! As if human-generated news content is any more honest or accurate...

charcircuit | today at 11:08 AM

So literally every article will be labeled as AI assisted and it will be meaningless.

>The use of generative artificial intelligence systems shall not result in: (i) discharge, displacement or loss of position

Being able to fire employees is a great use of AI and should not be restricted.

> or (ii) transfer of existing duties and functions previously performed by employees or worker

Is this saying you can't replace an employee's responsibilities with AI? No wonder the article says it is getting union support.

show 6 replies