Maybe I missed it, but I don't see them defining what they mean by ethics. Ethics/morals are subjective and change dynamically over time. Companies have no business trying to define what is and isn't ethical, given the conflict of interest. The elephant in the room is not being addressed here.
Ah, the classic Silicon Valley "as long as someone could disagree, don't bother us with regulation, it's hard".
I understand the point you’re making, but I think there’s a real danger of that logic being used to shrug off immoral behavior.
It’s notable that, no matter exactly where you draw the line on morality, different AI agents perform very differently.
Especially as most AI safety concerns are essentially political, and uncensored LLMs exist anyway for people who want to do crazy stuff like having a go at building their own nuclear submarine or rewriting their git history with emoji-only commit messages.
For corporate safety it makes sense that models resist saying silly things, but it's okay for that to be a superficial layer that power users can prompt their way around.