Or "alignment," which means "let's ensure the AIs recommend launching nukes only when it makes sense to, based on our [assumed objective] values."