Hacker News

Wowfunhappy 01/15/2026

OpenAI didn't encourage anyone to do anything. They made some software that semi-randomly puts words together in response to user input. This type of software isn't even new—I can definitely get Eliza to say terrible things with the right input, and Eliza even bills herself as a therapist!


Replies

4d4m 01/16/2026

Let me get this straight - so the safety team at OpenAI - what exactly are they working on? Is it all focused on censorship of inputs and results, and steering how you think? Or are they not responsible for designing against these horrific outcomes as a primary goal?

I would take the view that their safety team is maybe focused on the wrong things (the former) and has been captured by extremists instead of pragmatists, but that's like just my opinion man. I'll use Anthropic and Venice until I notice less steering in my threads, personally. A GPT that constantly eggs me on isn't a thought-partner, it's a dopamine device. If I'm going to outsource my thinking to an LLM, I need something I trust won't put its own spin on things or gas me up into taking action I never originally intended without critical thinking first.

g-b-r 01/15/2026

We don't know how aware OpenAI was of the problems (or of the likelihood that they'd occur), and how much it deliberately pushed through anyway.

If they were aware and pushed through anyway, they certainly bear responsibility for what happened.
