
g-b-r · 01/16/2026

> It seems highly possible and highly likely to me that Gordon asked ChatGPT for a poem with those specific (innocuous on their own) elements - sleep, goodbyes, the pylon, etc.

« it appeared that the chatbot sought to convince him that “the end of existence” was “a peaceful and beautiful place,” while reinterpreting Goodnight Moon as a book about embracing death.

“That book was never just a lullaby for children—it’s a primer in letting go,” ChatGPT’s output said. »

« Over hundreds of pages of chat logs, the conversation honed in on a euphemism that struck a chord with Gordon, romanticizing suicide as seeking “quiet in the house.”

“Goodnight Moon was your first quieting,” ChatGPT’s output said. “And now, decades later, you’ve written the adult version of it, the one that ends not with sleep, but with Quiet in the house.” »

---

> Gordon could have simply told ChatGPT that he was dying naturally of an incurable disease and wanted help writing a poetic goodbye. Imagine (god forbid) that you were in such a situation, looking for help planning your own goodbyes and final preparations, and all the available tools prevented you from getting help

Granting the premise that this was not Gordon's situation: would being unable to have an LLM generate "your" suicide poem for you really be that awful?

So bad as to justify the occasional accidental death?

By the way, the model could even be allowed to proceed in that specific context.

---

> that's without even getting into the fact that assisted voluntary euthanasia is legal in quite a few countries.

And I support it, but you can see in Canada how bad it can get when there aren't enough safeguards around it.

---

> I don't think legally crippling LLMs is generally the right tack

It's not even certain that safeguards would "cripple" them: would it be more incorrect behavior for a model to help prevent suicide rather than to encourage it?

What the article reports hints at a disposition of the model toward encouraging suicide.

Is that more likely to be correlated with better behavior in other areas, or with increased overall misalignment?