Let me get this straight - so the safety team at OpenAI - what exactly are they working on? Is it all focused on censorship of inputs and results, and steering how you think? Or are they not responsible for designing against these horrific outcomes as a primary goal?
I would take the view that their safety team is maybe focused on the wrong things (the former) and has been captured by extremists instead of pragmatists, but that's like just my opinion man. I'll use Anthropic and Venice until I notice less steering in my threads, personally. A GPT that constantly eggs me on isn't a thought-partner, it's a dopamine device. If I'm going to outsource my thinking to an LLM, I need something I trust won't put its own spin on things or gas me up into taking action I never originally intended without thinking critically first.