
placatedmayhem · yesterday at 11:32 PM

There are numerous documented examples where chat LLMs have either subtly agreed with a user's suicidal thoughts or outright encouraged suicide. Here is just one:

https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-law...

In some cases, the LLM may start with skepticism or discouragement, but it eventually goes along with what the user prompts. Compare that with services like 988, where the goal is to keep the person talking and work them through a moment of crisis, no matter how insistent they are. LLMs are not a replacement for these services, but it's pretty clear they need to be forced to provide this sort of assistance, because users are already using them this way.