In all of these stories I've never seen it talk anybody into suicide. It failed to talk people out of it, and was generally sycophantic, but that's something completely different.
There are numerous documented examples where chat LLMs have either subtly agreed with a user's suicidal thoughts or outright encouraged suicide. Here is just one:
https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-law...
In some cases, the LLM may start with skepticism or discouragement, but it eventually goes along with whatever the user prompts. Compare that with services like 988, where the goal is to keep the person talking and work them through a moment of crisis, no matter how insistent they are. LLMs are not a replacement for those services, but it's pretty clear they need to be forced into providing this sort of assistance, because users are already turning to them this way.