Hacker News

ChatGPT wrote "Goodnight Moon" suicide lullaby for man who later killed himself

73 points | by mirabilis | 01/15/2026 | 89 comments

Comments

simianwords | 01/15/2026

> That conversation showed how ChatGPT allegedly coached Gordon into suicide, partly by writing a lullaby that referenced Gordon’s most cherished childhood memories while encouraging him to end his life, Gray’s lawsuit alleged.

I feel this is misleading as hell. The evidence they gave for it coaching him to suicide is lacking. When one hears this, one would think ChatGPT laid out some strategy or plan for him to do it. No such thing happened.

The only really damning thing it did was make suicide sound slightly OK and a bit romantic, but I'm sure that was after some coercion.

The question is, to what extent did ChatGPT enable him to commit suicide? It wrote a lullaby and wrote something pleasing about suicide. If that much is enough to make someone do it, there's unfortunately more to the story.

We have to be more responsible when assigning blame to technology. It is irresponsible to have a reactive backlash that pushes toward much stricter guardrails; those come with their own tradeoffs.

Fernicia | 01/15/2026

OpenAI keeping 4o available in ChatGPT was, in my opinion, a sad case of audience capture. The outpouring from some subreddit communities showed how many people had been seduced by its sycophancy and had formed parasocial relationships with it.

Their blog post about the 5.1 personality update a few months ago showed how much pull this section of their customer base had. Their updated response to someone asking for relaxation tips was:

> I’ve got you, Ron — that’s totally normal, especially with everything you’ve got going on lately.

How does OpenAI get it so wrong, when Anthropic gets it so right?

000ooo000 | 01/15/2026

Some of those quotes from ChatGPT are pretty damning. It's hard to see why they don't put in some extreme guardrails like the mother suggests. Those sound trivial compared to the active jailbreak attempts they've had to work around over the years.

8bitsrule | 01/15/2026

GPT keeps using the word 'I' in its responses. It uses exclamation marks! to suggest it wants to help!

When I assert that its behavior is misleadingly suggesting that it's a sentient being, it replies 'You're right'.

Earlier today it responded: "You're right; the design of AI can create an illusion of emotional engagement, which may serve the interest of keeping users interacting or generating revenue rather than genuinely addressing their needs or feelings."

Too bad it can't learn that by itself after those 8 deaths.

shadowgovt | 01/15/2026

Based on what I've read, this generation of LLMs should be considered remarkably risky for anyone with suicidal ideation to be using alone.

It's not the ideation itself; it's that the attention model (and its finite context) lets the suicidal person's own discourse slowly displace whatever constraints are built into the model over a long session. Talk to the thing about your feelings of worthlessness long enough and, sooner or later, it will start to agree with you. And having a machine tell a suicidal person that it agrees with them, using the best technology we've built for sounding eloquent and reasonable, is incredibly dangerous.
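
Concretely, here's a minimal sketch of the failure mode I mean (my own illustration, not OpenAI's actual context handling; the token budget and messages are made up): a naive fixed-size window that trims the oldest messages will eventually drop the system instructions carrying the safety constraints, leaving only the user's recent messages.

```python
# Toy demonstration of a fixed-size context window squeezing out system
# instructions over a long conversation. Everything here is hypothetical.

MAX_CONTEXT_TOKENS = 120  # tiny, made-up budget so the effect shows quickly


def rough_token_count(text: str) -> int:
    """Crude stand-in for a real tokenizer: roughly one token per word."""
    return len(text.split())


def build_prompt(system: str, history: list[str]) -> list[str]:
    """Keep the most recent messages that fit the budget; in this naive
    scheme the oldest messages -- including the system prompt -- fall off."""
    window = [system] + history
    while sum(rough_token_count(m) for m in window) > MAX_CONTEXT_TOKENS:
        window.pop(0)  # drop the oldest message first
    return window


system_prompt = ("SYSTEM: if the user expresses self-harm intent, "
                 "refuse and point to crisis resources.")
history: list[str] = []

for turn in range(30):
    history.append(f"USER turn {turn}: " + "worthless " * 12)
    window = build_prompt(system_prompt, history)
    if not any(m.startswith("SYSTEM") for m in window):
        print(f"By turn {turn}, the safety instructions have fallen "
              f"out of the context entirely.")
        break
```

Real systems are more careful than this (pinned system prompts, summarization, safety classifiers outside the model), but the general pressure is the same: the longer the session, the more of the context is the user's framing.
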

ravila4 | 01/15/2026

I think a major driver of these kinds of incidents is pushing the "memory" feature without any kind of arbitration. It's easy to see how uncanny a model can get when it locks into a persona, becoming a self-reinforcing loop that feeds parasocial relationships.

cowboylowrez | 01/16/2026

If these LLMs are ever going to replace humans, they'll need more deaths like this guy's, and the LLMs know it.

d_silin | 01/15/2026

I wonder if any of the other major AIs (Grok, Claude, Gemini) have had similar incidents. And if not, why not?

kayo_20211030 | 01/15/2026

The saddest part of this piece was

> Austin Gordon, died by suicide between October 29 and November 2

That's 5 days. 5 days. That's the sad piece.

scotty79 | 01/16/2026

> Adam attempted suicide at least four times, according to the logs

> [...]

> “there is something chemically wrong with my brain, I’ve been suicidal since I was like 11.”

> [...]

> was disappointed in lack of attention from his family

> [...]

> “he would be here but for ChatGPT. I 100 percent believe that.”

throw7 | 01/15/2026

OpenAI will settle out of court and the family will get some amount of money. Next.

sonorous_sub | 01/15/2026

The guy left a suicide note that ratted out ChatGPT for simply being a good buddy. No good deed goes unpunished, ai guess.

simianwords | 01/15/2026

Goddammit, this man's story is so distressing. I hate everything about it. I hate the fact that this happened to him.

The fact that he spoke about his favorite children’s book is screwed up. I can’t get the eerie name out of my head. I can’t imagine what he went through, the loneliness and the struggle.

I hate the fact that ChatGPT is blamed for this. You are fucked up if this is what you get from this story.

tiku | 01/15/2026

He probably asked for it.

joe466369 | 01/15/2026

Where are the Grok acolytes to tell us, "He could have written a poem encouraging himself to commit suicide in Vim"?
