Yes, and of their two exceptions, only one is on moral grounds. They don't want to provide tools for autonomous killing machines because the technology isn't good enough yet. Once that 'yet' passes, they will be fine supplying that capability. Anthropic is clearly a better company than OpenAI, but that doesn't mean they are good. 'Lesser evil' is the correct term here for sure.
> Anthropic is clearly a better company than OpenAI
Why do people keep falling into the trap of anthropomorphizing companies like this? What's the point? Either you care about a company in the "for-profit" sense, in which case money is all that matters (and OpenAI clearly wins there right now), or you care about pesky things like morality and ethics, in which case you should look beyond corporations, because they're not humans; stop treating them as such. Both of them do their best to earn as much as possible, and that's their entire "morality", since they're both for-profit companies.
The flip side is that it's very unlikely AI will become that good any time soon, so "not good enough yet" will always remain a means to hold out. Especially since nobody has explicitly defined what "good enough" entails.
Hypothetically, if we had a choice between sending humans to war or sending fully autonomous drones that make decisions on par with humans, the moral choice might well be the drones, because they don't put our service members at risk.
Obviously, anyone who has used LLMs knows they are not on par with humans. There also needs to be an accountability framework for when software makes the wrong decision: who gets fired if an LLM hallucinates and kills people? Perhaps Anthropic's stance is a way to avoid liability if that were to happen.