I can honestly understand both positions. The U.S. military must be able to use technology as it sees fit; it cannot allow private companies to control the use of military equipment. Anthropic must prevent a future where AIs make autonomous life-and-death decisions without humans in the loop. Living in that future is completely untenable.
What I don’t understand is why the two parties couldn’t reach agreement. Surely autonomous murderous robots are something the U.S. government has an interest in preventing.
Because the current government wants unquestioning obedience, not a discussion (assuming they were capable of that level of nuanced thought in the first place). The position of this government is "just do what I say or I will hit you with the first stick that comes to hand".
If a vendor doesn't want to do something you need, you find another vendor (there are others).
This is just petty.
If the government doesn't want to sign a deal on Anthropic's terms, they can just not sign the deal. Abusing their powers to try to kill Anthropic's ability to do business with other companies is 10000% bullshit.
> What I don’t understand is why the two parties couldn’t reach agreement. Surely autonomous murderous robots are something the U.S. government has an interest in preventing.
Consider the government. It’s Hegseth making this decision, and he considers the US military’s adherence to law to be a risk to his plans.
I can see both sides as it pertains to Trump's initial decision to stop working with Claude, but now, this over-the-top "supply chain risk" designation from Hegseth is something else. It's hard to square it with any real principle I've seen the admin articulate.
> What I don’t understand is why the two parties couldn’t reach agreement.
Someday we'll have to elect a POTUS who is known for his negotiation and dealmaking skills.
> it cannot allow private companies to control the use of military equipment.
The big difference here is that Claude is not military equipment. It's a public, general-purpose model. The terms of use/service were part of the contract with the DoD. The DoD is trying to forcibly alter the deal, and Anthropic is 100% in the clear to say "no, a contract is a contract, suck it up, buttercup."
We aren't talking about Lockheed here making an F-35 and then telling the DoD "oh, but you can't use our very obvious weapon to kill people."
> Surely autonomous murderous robots are something the U.S. government has an interest in preventing
After this fiasco, obviously not. It's quite clear the DoD most definitely wants autonomous murder robots, and also wants mass domestic surveillance.