> The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.
Do you have any evidence for that? As far as I can tell, this is false. The only thing I saw was Grok altering photos of adults to show them wearing bikinis, which is far less bad.
Did you miss the numerous news reports? Example: https://www.theguardian.com/technology/2026/jan/08/ai-chatbo...
For obvious reasons, decent people are not about to go out and try to generate child sexual abuse material to prove a point to you, if that’s what you’re asking for.