Ideally, passing off anything AI-generated as human-made would be illegal, not just news, but this is a good start.
That could do more harm than good.
Like how California's cancer-warning law (Prop 65) is useless: it makes it look like everything is known to the state of California to cause cancer, which makes people just ignore and tune out the warnings because they carry no real signal. This in turn harms people when they think, "How bad can tobacco be? Even my aloe vera plant has a warning label."
Keep it to generated news articles, and people might actually pay attention to the labels.
Don't let the AI lobby insist that anything an LLM has touched gets labelled, because if the label gets slapped on anything that's even been run through a spell-checker or saved in Notepad (somehow that counts as contaminated, lol), it'll become a useless warning.
What does that mean, though? Photos taken with mobile camera apps are processed by AI, and many Photoshop tools now use AI.
How do we know what's AI-generated vs. sloppy human work? In some situations it's obvious (e.g., video), but what about text? Audio?
Where do we draw the line between AI-generated and AI-assisted (i.e., Photoshop and other tools)?
> Ideally, passing off anything AI-generated as human-made would be illegal, not just news, but this is a good start.
Does Photoshop fall under this category?
Fully agreed.
Please no. I don’t want that kind of future. It’s going to be California cancer warnings all over again.
I don’t like AI slop, but this kind of legislation does nothing. Look at the low-quality garbage that already exists: do we really need another step in the flow to catch whether it’s AI?
You can't legislate these problems away.
Publishing is more than just authoring. There's research, drafting, editing, source verification, voice, formatting, and separate edits for different platforms and media. Each of those steps could be done by AI. It's not a single-shot process.