> We’re introducing our video generation technology now to give society time to explore its possibilities and co-develop norms and safeguards that ensure it’s used responsibly as the field advances.
That's an interesting way of saying "we're probably gonna miss some stuff in our safety tools, so hopefully society picks up the slack for us". :)
Users, not tools, should be judged.
It is unlikely anyone is going to commit acts of terrorism with this, or produce deepfakes that swing Eastern European elections. The worst outcome is likely teens having a laugh.
The problem isn't whether we should regulate AI. It's whether it's even possible to regulate it without causing significant turmoil and damage to society.
It's not hyperbole. Hunyuan was released before Sora. So regulating Sora does absolutely nothing unless you can regulate Hunyuan, which is 1) open source and 2) made by a Chinese company.
How do we expect the US govt to regulate that? By threatening to sanction China unless it stops doing AI research?
"to give society time to explore its possibilities and co-develop norms and safeguards"
Or, "this safety stuff is harder than we thought, we're just going to call 'tag you're it' on society"
Or,
-Oppenheimer: "man, this nuclear safety stuff is hard, I'm just going to put it all out there and let society explore developing norms and safeguards".
-Society: Bombs Japan
-Oppenheimer: "No, not like that, oops".
"Climate Change is likely to mean more fires in the future, so we've lit a small fire at everyone's house to give society time to co-develop norms and safeguards."
Especially since they were originally supposed to be a non-profit focused on AI safety, and Sam Altman single-handedly pivoted it to a for-profit after taking all the donations and partnering with probably the single most evil corporation that has ever existed, Microsoft.
"We're releasing this like rats on a remote island, in hopes of seeing how the ecosystem is going to respond".
The onus will be on the rest of society to defend itself from all the grift that will result from this.
Text, image, video, and audio editing tools have no 'safety' or 'alignment' whatsoever, and skilled humans are far more capable of creating 'unsafe' and 'unethical' media than generative AI will ever be.
Somehow, society has survived just fine.
The notion that generative AI tools should be 'safe' and 'aligned' is as absurd as the notion that tools like Notepad, Photoshop, Premiere, and Audacity should exist only in the cloud, monitored by kommissars to ensure the proles aren't doing something 'unsafe' with them.
The irony is that users want more freedom and fewer safeguards.
But these companies are rightfully worried about regulators and legislatures, often egged on by pearl-clutching journalists, so we can't have nice things.
Do we not want new stuff? If the answer is "Sure, but only if whoever invents the stuff does all the work and finds all rough edges" then the answer is actually just "No, thanks".
'when civilization collapses because all photo, audio and video evidence is 100% suspect, i mean, how could you blame us'
Flashbacks to when they were cagey about releasing the GPT models because they could so easily be used for spam, and then just pretended not to see all the spam their model was making when they did release it.
If you happen to notice a Twitter spam bot claiming to be "an AI language model created by OpenAI", know that we have conducted an investigation and concluded that no you didn't. Mission accomplished!