For anybody who thinks it's about Trump vs. some other administration: it's not. Both AI surveillance of everyone and its use in automated warfare were bound to happen.
The only question is whether the safety work on these models was actually done well enough to protect people and make them a net positive force in the world.
I guess if the models were safely trained to do more good than harm (as Dario and SamA claim), there wouldn't even be a need for these contract terms.
It would be, and will be, extremely irresponsible to put non-deterministic and fallible models in charge of weapons. We are not close to having solved the problem of ensuring AI pursues good outcomes.