
mossTechnician · today at 10:25 AM

If it makes sense to handle all of these issues, then couldn't these organizations just acknowledge all of them? If reducing harm is the goal, I don't see a reason to totally segregate the different issues, especially not by drawing a dividing line between the ones OpenAI already acknowledges and the ones it doesn't. I've never seen a self-described "AI safety" organization that tackles any of the present-day issues AI companies cause.


Replies

iNic · today at 11:16 AM

If you've never seen it, then you haven't been paying attention. For example, Anthropic (the biggest AI org that is "safety"-aligned) released a big report last year on mental well-being [1]. Also, here is their page on societal impacts [2]. And here is PauseAI's list of risks [3]; it has deepfakes as its second issue!

The problem is not that no one is trying to solve the issues you mentioned, but that they are really hard to solve. You would probably have to bring large class-action lawsuits, which is expensive and risky (if one fails, it becomes harder to sue again). Anthropic can make its own models safe, and PauseAI can organize some protests, but neither can easily stop Grok from producing endless CSAM.

[1] https://www.anthropic.com/news/protecting-well-being-of-user...

[2] https://www.anthropic.com/research/team/societal-impacts

[3] https://pauseai.info/risks
