Yes. ChatGPT "safely" helped[1] my friend's daughter write a suicide note.
[1] https://www.nytimes.com/2025/08/18/opinion/chat-gpt-mental-h...
(Apologies if this archive link isn't helpful; the unlocked_article_code in the URL still resulted in a paywall on my side...)
Do I feel bad for the above person?
I do. Deeply.
But having lived through the '80s and '90s and the satanic panic, I gotta say this is dangerous ground to tread. If this had been a forum user, rather than an LLM, who had done all the same things and not reached out, it would have been a tragedy, but the story would just have been one among many.
The only reason we're talking about this is that anything related to AI gets eyeballs right now. And our youth suicide epidemic outweighs other issues that currently get far more attention and money.
I have mixed feelings on this (besides obviously being sad about the loss of a good person). I think one of the useful things about AI chat is that you can talk about things that are difficult to discuss with another human, whether it's an embarrassing question or just something you don't want people to know about you. So trying to add a guard rail for everything that could reflect poorly on a chat agent seems like it'd reduce its utility. I think people have trouble talking about suicidal thoughts with real therapists because, AFAIK, therapists have a duty to report self-harm, which makes people less likely to bring it up.

One thing I do think is dangerous with the current LLM models, though, is the sycophancy problem. Like, all the time ChatGPT is saying "Great question!" Honestly, most of my questions are not "great", nor are my insights "sharp", but flattery will get you a lot of places... I just worry that these things' attempts to be agreeable let people walk down paths where a human would be like "ok, no."