Hacker News

I am directing the Department of War to designate Anthropic a supply-chain risk

1234 points by jacobedawson yesterday at 10:31 PM | 990 comments

https://xcancel.com/secwar/status/2027507717469049070

https://www.cnbc.com/2026/02/27/trump-anthropic-ai-pentagon....


Comments

_fat_santa yesterday at 11:14 PM

The disconnect here for me is, I assume the DoW and Anthropic signed a contract at some point and that contract most likely stipulated that these are the things they can do and these are the things they can't do.

I would assume the terms the DoW is now railing against were in the original contracts they signed. In that case it looks like the DoW is acting in bad faith here: they signed the original contract and agreed to those terms, then went back and said no, you need to remove those safeguards, to which Anthropic is (rightly) saying no.

Am I missing something here?

EDIT: Re-reading Dario's post[1] from this morning, I'm not missing anything. Those use cases were never part of the original contracts:

> Two such use cases have never been included in our contracts with the Department of War

So yeah, this seems pretty cut and dried. The DoW signed a contract with Anthropic and agreed to those terms. Then they decided to go back and renege on those original terms, to which Anthropic said no. Then they promptly threw a temper tantrum on social media and designated them a supply-chain risk as retaliation.

My final opinion on this is that Dario and Anthropic are in the right, and the DoW is acting in bad faith by trying to alter the terms of their original contracts. And this doesn't even take into consideration the moral and ethical implications.

[1]: https://www.anthropic.com/news/statement-department-of-war

show 16 replies
pinkmuffinere yesterday at 11:57 PM

Wow, and the only restrictions Anthropic asked for are (1) no mass domestic surveillance and (2) require human-in-the-loop for killing [1]. Those seem exceptionally reasonable, and even rather weak, lol :|

[1] https://www.anthropic.com/news/statement-department-of-war

show 6 replies
techblueberry yesterday at 10:35 PM

So they are such a risk to national security that no contractor that works with the federal government may use them, but they're going to keep using them for six more months? So I guess our national security is significantly at risk for the next six months?

show 15 replies
lukewrites yesterday at 10:40 PM

I admire Anthropic for sticking to their principles, even if it affects the bottom line. That’s the kind of company you want to work for.

show 10 replies
labrador yesterday at 10:43 PM

Good. I'd rather not have my favorite AI, from a company working on AGI, have murder and spying in its DNA.

In fact, as a patriotic American veteran, I'd be ok with Anthropic moving to Europe. It might be better for Claude and AGI, which are overriding issues for me.

Rutger Bregman @rcbregman

This is a huge opportunity for Europe. Welcome Anthropic with open arms. Roll out the red carpet. Visa for all employees.

Europe already controls the AI hardware bottleneck through ASML. Add the world's leading AI safety lab and you have the foundations of an AI superpower.

https://x.com/rcbregman/status/2027335479582925287

show 12 replies
Someone1234 yesterday at 10:50 PM

Topics like this are where I struggle with HN philosophy. Normally, avoiding politics and ideology where possible creates higher-quality and more interesting discussions.

But how do you even begin to discuss that Tweet or this topic without talking about ideology and to contextualize this with other seemingly unrelated things currently going on in the US?

I genuinely don't think I'm conversationally agile enough to both discuss this topic while still able to avoid the political/ideological rabbit-hole.

show 12 replies
0xbadcafebee yesterday at 10:56 PM

McCarthyism began in 1947, with Truman demanding government employees be "screened for loyalty". They wanted to remove anyone who was a member of an "organization" they didn't like. It began with hearings, then blacklists, and then arrests and prison sentences. It lasted until 1959. (https://en.wikipedia.org/wiki/McCarthyism)

This is the new McCarthyism. Do what the administration says, or you will be blacklisted, or worse.

show 3 replies
nickysielicki yesterday at 11:15 PM

This could kill Anthropic.

The designation says any contractor, supplier, or partner doing business with the US military can’t conduct any commercial activity with Anthropic. Well, AWS has JWCC. Microsoft has Azure Government. Google has DoD contracts. If that language is enforced broadly, then Claude gets kicked off Bedrock, Vertex, and potentially Azure… which is where all the enterprise revenue lives. Claude cannot survive on $200/mo individual powerusers. The math just doesn’t math.

show 8 replies
rushcar yesterday at 11:19 PM

"Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."

This is authoritarian behavior. You're having trouble negotiating a contract, so instead of just canceling it, you effectively ban the entire F500 from doing business with that firm.

show 4 replies
easton yesterday at 10:46 PM

> Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.

I’m sure the lawyers just got paged, but does this mean the hyperscalers (AWS, GCP) can’t resell Claude anymore to US companies that aren’t doing business with the DoD? That’s rough.

show 10 replies
NickAndresen yesterday at 10:42 PM

"They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." from Dario's statement (https://www.anthropic.com/news/statement-department-of-war)

show 9 replies
eckelhesten yesterday at 10:39 PM

Hard decision by Anthropic, but at least they can sleep well at night knowing their products don't kill human beings around the world.

show 5 replies
kilroy123 yesterday at 10:46 PM

Strange times. I truly feel these are the last days of our Republic. Especially if more aren't willing to take a stand.

show 4 replies
readitalready yesterday at 10:52 PM

I’m just laughing at the possibility of the US military being forced to use Chinese open-source AI models because every US model provider refuses to work with them.

show 4 replies
cmiles8 yesterday at 11:27 PM

As written, this would be the end of Anthropic. AWS, Microsoft, et al. are all suppliers of the DoW, and as written they must immediately stop doing business with Anthropic. Will be interesting to see how this unfolds.

show 1 reply
hoppoli today at 12:15 AM

American people: Latin American here. Maybe it's silly to root for a country in the world-hegemony arena. I've usually been partial to the USA over China. Now I'm not rooting for your country anymore. As far as I'm concerned, I'd rather have China be the foremost power; at least they seem less keen on invading or heavily strong-arming Latin America.

show 4 replies
general1465 yesterday at 10:51 PM

Ukrainians and Russians are experimenting with FPV drones using AI for target acquisition and homing. It's not yet economically viable, because it's cheaper to give your FPV drone a fiber spool than an Nvidia Jetson to bypass jamming.

When the first politician is blown to bits by an autonomous AI FPV drone, there will be sheer panic among every politician in the world to put the genie back into the bottle. It will be too late at that point.

Anthropic is correct with its no killbot rule.

show 2 replies
txrx0000 today at 1:00 AM

This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.

Open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run a 100% transparent organization so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind.

Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. Diffuse it as much as possible. Give it to everyone and the good will prevail.

Build a world where millions of AGIs run on millions of gaming PCs, aligned with millions of different individuals. It is a necessary condition for humanity's survival.

show 1 reply
getpokedagain yesterday at 11:11 PM

Why does everyone associated with this administration sound like a 17-year-old who got dumped when they post on Twitter?

show 3 replies
avaer yesterday at 11:43 PM

Remember to vote in this year's midterms (Nov 3) if you're eligible. I don't think it's off-topic.

cube00 yesterday at 11:46 PM

Sam Altman says OpenAI shares Anthropic's red lines in Pentagon fight [1]

So OpenAI will also be marked as a supply chain risk too, right?

[1]: https://www.axios.com/2026/02/27/altman-openai-anthropic-pen...

show 2 replies
linuxhansl today at 12:59 AM

Hats off to Anthropic for not wavering here.

"Supply-chain risk" means "the potential for adversaries to sabotage, subvert, or disrupt the integrity and delivery of defense systems, including software, hardware, and services, to degrade national security".

So now Anthropic is an adversary, because it does not want "fully autonomous weapons" or automated mass surveillance? Sure thing, DoD. Go use Grok or whatever, I'm sure that will go great.

dang yesterday at 11:05 PM

Recent and related:

Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1508 comments)

leapis yesterday at 10:42 PM

Decades of speculative science fiction, thought experiments, and discourse led to this. It’s gratifying to see that we’ve garnered enough concern that a major AI lab would risk this to rein in the potential for runaway AI disasters. Hopefully we see other labs follow.

phs318u today at 1:14 AM

The discussion here underlines the reality that one can never make a “deal” with a powerful state, just as Lando Calrissian famously found out in The Empire Strikes Back.

Dario is Lando, complaining “We had a deal!”, only to be told, “I’m altering the deal. Pray I don’t alter it any further.”

bnycum yesterday at 10:57 PM

It's nice to see Anthropic sticking to their terms. I just have one question in all this. Why is Anthropic being singled out when it seems all the other big players are down to play with the DoD? Is this just a pissing match, or have the Anthropic models been proven the real winner for them?

show 1 reply
cannabis_sam today at 12:25 AM

A drunkard ex-Fox News host wants mass surveillance and automated killing; what could go wrong?

I wish I thought enough Americans had the spine required to stand up to this, and I know for a fact that a lot do... the solution is literally written into your constitution.

garbawarb yesterday at 11:13 PM

This sounds like a message to would-be founders: don't base your company in the US. The strongest markets to do business are the ones with the most freedom from government meddling. In the US, big government is happy to use its power to crush private enterprise that it doesn't like.

show 2 replies
qgin today at 1:04 AM

So they're essentially admitting they want to use Claude to mass surveil Americans and/or build autonomous weapons with no humans in the loop. Kind of nuts.

liuliu yesterday at 10:49 PM

It may not be obvious, but this is actually a good thing when we look back in a few years. I always feel weird that the executive branch can just destroy a private enterprise with a "supply-chain risk" or "terrorist list" designation without due process.

show 2 replies
cpeterso yesterday at 10:46 PM

Good PR for Anthropic: the DoD already has contracts with OpenAI and xAI, but is still so eager to use Claude that they must threaten Anthropic.

kylecazar yesterday at 11:46 PM

There is clearly a need to codify into all of these historical acts that they can't be invoked unless there is a declaration of war (or some other appropriate prerequisite).

This administration consistently exploits what were designed to be emergency powers because no such requirement exists. Leave no room for interpretation.

show 1 reply
johnhamlin today at 12:08 AM

Labeling a company that refused to comply with nakedly authoritarian orders a "supply-chain risk" is a true Newspeak moment.

pugworthy yesterday at 11:37 PM

I imagine I'm not the only one to switch over to giving Claude my money today. I'm sure the "Other" comments for the cancellation were often as blunt as mine.

Q: "Is there anything we could do to change your mind?"

A: "Yes! Stand up to the current administration."

Avicebron yesterday at 11:13 PM

How many layers deep does this go? Does Microsoft using Claude to develop their Word products mean the US government has to switch to linux?

show 1 reply
WesleyJohnson yesterday at 10:46 PM

What player is going to step in and do what Anthropic wouldn't? Or, worse, will the DoW try to author its own AI to go where private AI won't?

show 2 replies
seanieb yesterday at 11:29 PM

> "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."

Does this mean Azure & AWS will have to stop offering Claude as a model?

show 2 replies
pm90 today at 12:30 AM

Does Anthropic have standing to sue the Government for libel? I don’t think the Government is allowed to arbitrarily designate a company a supply-chain risk without good cause.

pinkmuffinere today at 1:53 AM

It's fascinating to me that this deadline was set for 5 PM ET on a Friday; I think it may be more responsible to set big deadlines like this for a time when the stock market is open. I imagine this will negatively impact confidence in the US economy at large, and stock markets will reflect that. But since the market is closed, we'll have to wait until Monday, with the tension and anticipation of a drop building. If the deadline had been set for, say, midday Thursday, the market would have responded immediately, and at least you wouldn't have the building anxiety over the weekend. Of course the result wasn't known ahead of time, and some will argue that the weekend gives investors time to cool off instead of following their gut reaction, but personally I don't find those arguments very convincing.

show 1 reply
dataflow yesterday at 10:57 PM

Given that Anthropic is clearly risking their entire business to stand up for what they believe is right, which appears to be what everyone here agrees with, is everyone supporting them here planning to start using Anthropic and switch away from other vendors until those vendors follow suit? Or are folks planning to just use whatever regardless?

Edit: I should perhaps clarify I'm more interested in paid users, rather than free. It's harder to tell if free users switching would help them or hurt them... curious if anyone has thoughts on that too.

show 4 replies
Keyframe yesterday at 10:52 PM

Anthropic’s stance is fundamentally incompatible with American principles.

Come to EU guys, we'll prepare a warm welcome!

show 2 replies
daxfohl yesterday at 10:41 PM

Probably used Claude to write the tweet.

show 1 reply
drumhead yesterday at 10:45 PM

Under normal circumstances this would end up in court, but when this administration ignores court orders it doesn't like, Anthropic would effectively have no legal recourse.

show 1 reply
joshuaheard today at 12:38 AM

Should military contractors put conditions on the use of their weapons? Here's our tank, but you can't invade Iran with it? We think your invasion of Venezuela is illegal, we're activating the kill switch on your jets. That's a real dangerous proposition.

show 2 replies
DavidPiper yesterday at 11:29 PM

> Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.

Kesha tried to hug Jerry Seinfeld vibes.

> Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.

Strange way of saying "this vendor doesn't meet our software requirements".

> they have attempted to strong-arm the United States military into submission

Err... You approached them?

> a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.

It's an orthogonal point, but "Silicon Valley ideology" has made up a significant portion of the USA's GDP for however many years now.

> Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.

Again... You approached them?

> I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security.

Like most companies in the world I imagine. They just haven't been approached yet.

> to allow for a seamless transition to a better and more patriotic service.

Internally re-framing all the recent "EU moving away from American tech!" articles as "EU builds more patriotic services!"

> This decision is final.

Nothing says "final" like a Tweet. The most uncontroversial and binding mechanism of all communication.

show 1 reply
vvpan yesterday at 11:58 PM

"Department of War" - I suppose one could give them credit for being honest, but what bastards...

show 1 reply
owenthejumper yesterday at 11:27 PM

I got downvoted for this in the other thread, but this is basically an attempt at bankrupting Anthropic. No US company has ever been designated a supply-chain risk, and the foreign companies on that list now do zero business in the US. A very large portion of the US economy relies on some contract with the US government; Anthropic cannot survive this if it holds.

I don't think it will hold, in the end this is mafia behavior, but if it does, we are yet again in uncharted waters.

show 1 reply
hedora today at 12:39 AM

This is good news all around, especially with OpenAI's statement siding with Anthropic.

Anthropic folks: I've been a bit salty on HN about bugs in Claude Code, but I'm feeling pretty warm and fuzzy about sending you my cash this month.

trelane yesterday at 11:21 PM

https://x.com/PalmerLuckey/status/2027500334999081294

It is an interesting point. What's the difference between this use license and others?

show 5 replies