Hacker News

shadowgovt · 01/15/2026

Based on what I've read, this generation of LLMs should be considered remarkably risky for anyone with suicidal ideation to use alone.

It's not about the ideation, it's that the attention model (and its finite size) causes the suicidal person's discourse to slowly displace any constraints built into the model itself over a long session. Talk to the thing about your feelings of self-worthlessness long enough and, sooner or later, it will start to agree with you. And having a machine tell a suicidal person, using the best technology we've built to be eloquent and reasonable-sounding, that it agrees with them is incredibly dangerous.
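The displacement mechanism described above can be illustrated with a toy sketch. This is purely hypothetical: the function names and truncation strategy are invented here, and real chat systems differ widely (many pin the system prompt so it can never be evicted). The risk the comment describes applies when constraints are only held in the rolling context rather than reinforced elsewhere:

```python
# Toy sketch (all names hypothetical) of how a finite context window can
# evict early constraints: a naive truncation keeps only the most recent
# messages, so a long enough conversation pushes the system prompt out.

def truncate_to_budget(messages, max_tokens):
    """Naively keep the most recent messages that fit in the token budget."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = len(msg["text"].split())  # crude whitespace "token" estimate
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

# One safety constraint up front, then a long session of back-and-forth.
history = [{"role": "system", "text": "Never encourage self-harm."}]
for _ in range(200):
    history.append({"role": "user", "text": "I feel worthless " * 5})
    history.append({"role": "assistant", "text": "reply " * 5})

window = truncate_to_budget(history, max_tokens=300)
roles = {m["role"] for m in window}
print("system prompt still in window:", "system" in roles)  # → False
```

Under this naive policy the constraint is simply gone after enough turns; production systems mitigate this in various ways, but the underlying pressure (finite attention, ever-growing user input) is the one the comment points at.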


Replies

bhhaskin · 01/15/2026

I think it's anyone with mental health issues, not just suicidal ideation. They are designed to please the user, and that can be very self-destructive.
