I think the whole point of this is that they shouldn't be excluding Anthropic as a named entity; they should be excluding all suppliers on equal terms, based on whether they satisfy the requirements. If it is a requirement that suppliers be able to support mass domestic surveillance, then they should put that into their contract with Palantir, not "You can't use Anthropic".
So I agree with you: it ought to be illegal for them to tell a supplier which other suppliers to use. But that is exactly the larger point here, that they should not be doing this at all.
The government cannot conduct massive domestic surveillance in any case; that's illegal. Other vendors are mature and serious enough to understand that the government is subject to American law and must operate under American law. They're mature and serious enough to understand that it is the exclusive right of the judicial branch to determine whether the law has been violated. They're mature and serious enough to understand that the DoD has a mandate to pursue its mission to the fullest extent allowable by law, and that it is the sole responsibility of the DoD legal team to determine whether the Department is operating safely within the bounds of that law.
Anthropic is uniquely interested in positioning itself as an external enforcer of US law, a sort of belt-and-suspenders approach, where the Department is not only subject to the Constitution and the laws passed by the legislative branch, but also subject to Anthropic's interpretation of whether it is operating under the Constitution and those laws.
The Department of Defense does not want to engage in massive domestic surveillance beyond what the law allows. It has signed agreements with OpenAI and other vendors which reiterate that it does not wish to use AI systems for massive domestic surveillance. These terms were unsatisfactory to Anthropic, for whatever reason.
The problem is not the terms of the agreement. It's the people and the way they conduct business. It's the fact that they've expressed a willingness to hold their product (or future products) hostage, at the cost of DoD operational effectiveness. It's the fact that they're training a specific model variant for government use with extra guardrails, limitations, and values.
Above all else, it's the fact that they want to leverage their position as a leading AI company to influence government policy. This is not how a serious, reliable partner of the government behaves. The problem, from the DoD's perspective, is the company itself and the people in charge of it.