
JohnBooty 01/16/2026

A car that actively kills people through negligently faulty design (Ford Pinto?) is one thing. That's bad, yes. I would not characterize ChatGPT's role in these tragedies that way. It appears to be, at most, an enabler... but I think if you and I are both being honest, we would need to read Gordon's entire chat history to make a real judgement here.

Do we blame the car for allowing us to drive to scenic overlooks that might also be frequent suicide locations?

Do we blame the car for being used as a murder weapon when a lunatic drives into a crowd of protestors he doesn't like?

(Do we blame Google for returning results that show a person how to tie a noose?)


Replies

000ooo000 01/16/2026

>Do we blame the car for allowing us to drive to scenic overlooks that might also be frequent suicide locations?

If one gets in the car, mentions "suicide", and the car drives to a cliff, then yes I think we can blame the car.

The rest of your examples and other replies here make it fairly clear you're determined to excuse OpenAI. How many people need to kill themselves at the encouragement of this LLM before you say "maybe OpenAI needs to do more"? What kind of valuation would OpenAI need to reach, how much boring slop would it need to pour out, before you'd be OK with it encouraging your son to kill himself using highly manipulative techniques like those shown?