I think Altman probably rationalised it to himself by thinking that if he doesn’t do it, Musk/xAI will, and they give zero fucks about safety. So maybe he told himself that it’s better if OpenAI does it.
Is there a name for this phenomenon? I've taken to calling it "the nihilist's excuse"
Knowing Sam, that's exactly what happened -- and the echo chamber inside OpenAI wouldn't dare to disagree.