Hacker News

000ooo000 · 1/15/2026

Do you think the majority of people who've killed themselves thanks to ChatGPT's influence used similar euphemisms? Do you think there's no value in protecting the users who won't go to those lengths to discuss suicide? I agree that if someone wants to force the discussion to happen, they probably could, but doing nothing to protect the vulnerable majority because a select few will contort the conversation to bypass guardrails seems unreasonable. We're talking about people dying here, not generating memes. Any other scenario, e.g. buying a defective car that kills people, would not invite a response à la "well, let's not be too hasty, it only kills people sometimes".


Replies

JohnBooty · 1/16/2026

A car that actively kills people through negligently faulty design (Ford Pinto?) is one thing. That's bad, yes. I would not characterize ChatGPT's role in these tragedies that way. It appears to be, at most, an enabler... but I think if you and I are both being honest, we would need to read Gordon's entire chat history to make a real judgement here.

Do we blame the car for allowing us to drive to scenic overlooks that might also be frequent suicide locations?

Do we blame the car for being used as a murder weapon when a lunatic drives into a crowd of protestors he doesn't like?

(Do we blame Google for returning results that show a person how to tie a noose?)

simianwords · 1/15/2026

Parent talked about extreme guardrails.