Hacker News

What OpenAI did when ChatGPT users lost touch with reality

29 points by nonprofiteer, today at 5:58 AM, 20 comments

Comments

ArcHound, today at 7:59 PM

One of the more disturbing things I read this year was the "my boyfriend is AI" subreddit.

I genuinely can't fathom what is going on there. Seems so wrong, yet no one there seems to care.

I worry about the damage these things do to distressed people. What can be done?

chris-vls, today at 8:52 PM

It seems quite probable that an LLM provider will lose a major liability lawsuit. "Is this product ready for release?" is a very hard question. And it is one of the most important ones to get right.

Different providers have delivered different levels of safety. This will make it easier to prove that the less-safe provider chose to ship a more dangerous product -- and that we could reasonably expect them to take more care.

Interestingly, a lot of liability law dates back to the railroad era, another time when it took the courts to rein in incredibly politically powerful companies deploying a new technology on a vast scale.

leoh, today at 8:40 PM

Anthropic was founded by exiles of OpenAI's safety team, who quit en masse about 5 years ago. Then a few years later, the board tried to fire Altman. When will folks stop trusting OpenAI?

blurbleblurble, today at 8:15 PM

The whiplash from carefully filtering sycophantic behavior out of GPT-5 to adding it back in full force for GPT-5.1 is dystopian. We all know what's going on behind the scenes:

The investors want their money.

Peritract, today at 8:15 PM

"Profited".