OpenAI didn't encourage anyone to do anything. They made some software that semi-randomly puts words together in response to user input. This type of software isn't even new—I can definitely get Eliza to say terrible things with the right input, and Eliza even bills herself as a therapist!
We don't know how aware OpenAI was of the problems (or of the likelihood that they'd occur), and how much they deliberately pushed through anyway.
If they were aware and pushed through regardless, they certainly bear responsibility for what happened.
Let me get this straight - what exactly is the safety team at OpenAI working on? Is it all focused on censoring inputs and outputs, and steering how you think? Or are they not responsible for designing against these horrific outcomes as a primary goal?
I would take the view that their safety team is maybe focused on the wrong things (the former) and has been captured by extremists instead of pragmatists, but that's like just my opinion man. I'll use Anthropic and Venice until I notice less steering in my threads, personally. A GPT that constantly eggs me on isn't a thought-partner, it's a dopamine device. If I'm going to outsource my thinking to an LLM, I need something I trust won't put its own spin on things or gas me up into taking action I never originally intended to take without thinking critically first.