If you've never seen it, then you haven't been paying attention. For example, Anthropic (the biggest AI org that is "safety"-aligned) released a big report last year on mental well-being [1]. Also, here is their page on societal impacts [2]. And here is PauseAI's list of risks [3]; it has deepfakes as its second issue!
The problem is not that no one is trying to solve the issues you mentioned, but that they are really hard to solve. You will probably have to bring large class-action lawsuits, which is expensive and risky (if a suit fails, it becomes harder to sue again). Anthropic can make its own models safe, and PauseAI can organize some protests, but neither can easily stop Grok from producing endless CSAM.
[1] https://www.anthropic.com/news/protecting-well-being-of-user...
[2] https://www.anthropic.com/research/team/societal-impacts
PauseAI's official proposal recommends[0]: "Only allow deployment of models after no dangerous capabilities are present." Their list of dangerous capabilities[1] does not include deepfakes, but it does include several unrealized ones that fit the description in this post, including "a recursive loop of self-improvement, spinning rapidly out of control... called an intelligence explosion".
I appreciate you pointing out the Risks page, though, as it does disprove my hyperbole about ignoring present-day harms completely, although I was disheartened that the page appears to list only harms that they believe "could be mitigated by a Pause" (emphasis mine).
[0]: https://pauseai.info/proposal
[1]: https://pauseai.info/dangerous-capabilities