Hacker News

Pentagon formally labels Anthropic supply-chain risk

345 points | by klausa | today at 7:24 PM | 228 comments

Comments

netinstructions | today at 9:35 PM

This designation is usually reserved for foreign adversaries/companies, so it's crazy to apply it to a US company over a sudden dispute about a contract... one that was previously agreed upon by all parties.

This should make any US company nervous about entering into an agreement with the government. Or any US company that already has a contract with the government. If they one day decide they don't like that contract, they can designate you a supply chain risk.

Not 1) ripping up the existing contract and ceasing the agreement, or 2) continuing (but not renewing) the existing contract, or 3) renegotiating terms upon renewal, but instead a full-on ban on doing any business with an entire industry/sector.

show 1 reply
germandiago | today at 8:26 PM

This is awful. That a disagreement that involves politics can ruin a company is really awful.

Civil society should be quite concerned about this kind of attack.

show 2 replies
neogodless | today at 8:52 PM

Previous information:

https://news.ycombinator.com/item?id=47186677 I am directing the Department of War to designate Anthropic a supply-chain risk (twitter.com/secwar) 5 days ago, 1083+ comments

https://news.ycombinator.com/item?id=47189441 Anthropic says it will challenge Pentagon supply chain risk designation in court (reuters.com) 5 days ago, 37+ comments

blueblisters | today at 8:47 PM

It might be that this admin does not have the capacity to reason about second or third order effects.

But given that what would typically be red lines for previous administrations have been brazenly crossed without consequences, why would they bother?

show 1 reply
Chance-Device | today at 9:20 PM

Anthropic should never have gotten into bed with the military or intelligence services to begin with. They wanted to make a deal with the devil and dictate the terms, that is the problem. If they had stayed out this wouldn’t be happening. Yes, someone else will probably step in and do all the evil you have just refused to do, but that isn’t a reason to instead decide to do it personally.

Note that I give them a lot of credit for trying to stop and to have their own red lines about the use of their technology, and to stick to those red lines to the end.

show 3 replies
hedayet | today at 8:46 PM

So, DoD has done what it said it would. And OpenAI has jumped on the opportunity.

I'm curious what the OpenAI signatories on notdivided.org will do now - https://news.ycombinator.com/item?id=47188473

Remain undivided in spirit while grinding for OpenAI?

show 4 replies
Waterluvian | today at 9:09 PM

It was really easy to close out my ChatGPT account and switch to Claude. I was really only there out of inertia. I don’t do anything beyond occasional free tier stuff like rubber ducking but so far Claude is so much better.

show 2 replies
stared | today at 9:36 PM

Should it be officially marked as the date of transition from liberal democracy to illiberal democracy?

Such tampering with companies is a smoking gun. Let's wait until there is another decision seizing this company's (or others') assets.

show 3 replies
SpacePortKnight | today at 10:34 PM

Anthropic can now no longer buy new hardware and will probably be kicked out of all cloud compute. They also can't move to a different jurisdiction, as exporting model weights is now treated the same as exporting ICBM technology. Wow, companies in China are now more free than Anthropic. It's a death sentence, and a huge win for OpenAI.

show 1 reply
softwaredoug | today at 9:07 PM

These bullies wilt when everyone stands up in one voice. But when some parties capitulate (OpenAI), it sets a precedent that this behavior is OK. And then it’s not long until you become the target.

show 1 reply
creddit | today at 8:34 PM

Naturally OpenAI also releases their new model on the same day.

Makes sense, obviously, but yeesh.

oompydoompy74 | today at 8:12 PM

Exported all my chats and deleted my ChatGPT account yesterday. The current administration not liking you is the strongest signal I could possibly have to go all in on a particular company.

show 11 replies
nickysielicki | today at 8:05 PM

Does anyone know which law firm is representing Anthropic?

show 2 replies
alanwreath | today at 9:46 PM

Labeling Anthropic a supply chain risk only because they were uninterested in doing business with the US government under the terms requested seems very much a bullying tactic that results in something the west critiques China for: coerced alignment.

Anthropic has been given a death sentence.

neves | today at 8:34 PM

Is this the reason Claude models disappeared from AWS cloud in Brazil?

nineteen999 | today at 10:01 PM

Wonder how long it will take the American public to designate the US Govt a threat to national security, and to start using AI to assemble their own autonomous civilian defense robots to protect the public from the government-approved population suppression robots.

Right to bear arms and all that, etc.

martinwright | today at 8:28 PM

Part of me wonders if this was a plan to drive a wedge between Anthropic and big gov contracts

show 1 reply
adamtaylor_13 | today at 8:53 PM

Writing out a thought I had, someone please critique my reasoning here...

What if Anthropic just shrugged, dissolved the company and open-sourced all of the Opus weights? Could this harm OpenAI and advance AI in a reasonable way?

Look I know it's an insane idea. I'm just curious what the most unhinged response to this might be.

show 7 replies
blacksmith_tb | today at 9:55 PM

A bit ironic then that they're actively using Claude in the current war effort[1].

1: https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-...

cush | today at 9:19 PM

Is Claude Code's outputted code also part of the supply chain risk?

parliament32 | today at 9:22 PM

Is there a link to the actual order anywhere? For us FedRAMP folks, the exact order contents actually matter, rather than a journalistic regurgitation. I was hoping one of the links in the article pointed to a source, but they're all just links back to other WSJ pages.

show 1 reply
mentalgear | today at 8:10 PM

I said it before and I'll say it again: if openly bribing a crony gov to cancel your competitor is now the de-facto standard of doing business in the US, I don't see how any rational investor could still see US companies as a secure investment. When the rule of law degrades into pay-to-play politics, the inevitable result is a mass exodus of both capital and top-tier talent. And to add to this, quoting another commentator on the issue: first the meritocracy goes, then the freedom goes.

show 7 replies
Herring | today at 8:21 PM

Since the end of WW2, and especially since the end of the Cold War, Democratic administrations have presided over significantly higher job growth than Republican administrations.

https://arc-anglerfish-washpost-prod-washpost.s3.amazonaws.c...

show 1 reply
6thbit | today at 9:18 PM

Does this mean nobody at a large company selling to the government can use any Anthropic tool or model?

So that's most of the S&P 500 and their providers?

wg0 | today at 8:45 PM

Has this happened before?

show 2 replies
6thbit | today at 9:24 PM

Would this mean any systems built with Claude in defense environments may need to be rebuilt or removed?

show 1 reply
sam0x17 | today at 10:00 PM

Streisand effect: I think this will boost sales.

show 1 reply
jawns | today at 7:49 PM

The consequence is that any company that does business with the U.S. military, and potentially any company that does business with the government in general, must stop using Anthropic's products for that work.

Anthropic has vowed to fight this designation in court.

Without weighing in on the constitutionality or legality of the move, I think it's obvious that this kind of retaliation power is unmatched by any private business that has a contractual dispute.

If a private business doesn't like Anthropic's terms, it can walk away from the deal, but it can't conduct coordinated retaliation with other companies without ending up in antitrust territory or potentially violating the Sherman Act.

Now for my editorializing: The fact that Pete Hegseth is willing to apply this type of designation against a U.S. company simply because he doesn't like its terms is pretty chilling. It's all the more scary once you consider which terms he objects to.

show 3 replies
yoyohello13 | today at 8:51 PM

Of course. Hegseth said it, there is no way they could back out. Looking 'weak' is the worst possible thing for this admin. They would rather look childish, stupid, and evil, as long as they don't look 'weak'.

Especially 'weak' things like 'caring about people'.

eikenberry | today at 9:16 PM

Could this be the chain of events that finally pops the AI bubble? If OpenAI's reputation hit slows growth enough to scare off investors and Anthropic's growth stalls due to this government attack...

show 1 reply
hax0ron3 | today at 8:57 PM

I am a political moderate who dislikes both the Democrats and Republicans. I think that I have been fair to the Trump administration in the past, including occasionally defending them from some of the less reality-based accusations against them.

I canceled my ChatGPT subscription a couple of days ago. In my opinion the Trump administration has become far too much of an "imperial Presidency" in its acts of war and its attempts to bully companies. It is also corrupt on a massive scale. I distrust anyone who thinks "yes, I'd like to work with this administration".

show 1 reply
baxtr | today at 8:13 PM

I would love to understand in more detail what kind of use cases we’re talking about.

Is this about locating the right target for a sortie for example?

show 2 replies
m_ke | today at 7:48 PM

We can all thank the VCs and CEOs who fully embraced and enabled this administration.

show 2 replies
wrs | today at 8:14 PM

Once again our leadership is "playing government" like a bunch of 12-year-olds, lashing out impulsively without thinking of the consequences. And no doubt once again it'll take a year for this to wind its way through the legal system and be reversed long after the damage is done, as is finally happening with the tariff fiasco.

seydor | today at 8:05 PM

A reminder to Anthropic: European residence visas start at $250K.

scuff3d | today at 7:57 PM

Huh, and I thought conservatives were all about government staying out of the way of the private sector. Go figure...

show 4 replies
jmspring | today at 8:12 PM

Next up, after some sort of bribe, the administration opens up Qwen models to be used by the Pentagon.

2OEH8eoCRo0 | today at 8:00 PM

Fascism

readytion | today at 9:17 PM

[flagged]

foxes | today at 10:19 PM

> Oh no my favourite ai company did / didn't collaborate

How could the regime do such a thing, doesn't law mean anything?!! /s

First they came for my neighbour now they came for my llm!!

mdni007 | today at 8:27 PM

[flagged]

show 2 replies
tokyobreakfast | today at 8:32 PM

[flagged]

cakealert | today at 8:44 PM

[flagged]

show 6 replies
eth0up | today at 8:20 PM

First, I personally predict Anthropic will bend soon and this will be history.

The last time I commented about LLMs I was ad hominem'd with "schizophrenic" and such. That's annoying but doesn't deter either my strange research or my concerns, in this case, regarding the direction LLMs are heading.

Of 4 frontier models, one is not yet connected to the DOD(or w). While such connections are not immediate evidence, I think it's rational to consider the possible consequences of this arrangement. By title, there's a gap, real or perceived, between the plebeian and mil versions. But the relationship could involve mission creep or additional strings as things progress.

We already have a strong trend of these models replacing conventional Internet searches. Though not complete yet, a centralizing force is occurring, and despite the models being trained on enormous bodies of data, we know weights and safety rails can affect output; bearing in mind the many things that could be labeled as, or masquerade as, safety rails, these could amount to formidable biases.

I frequently observe corporate-friendly results in my model interactions, where clearly, honesty and integrity are secondary to agenda. As I often say, this is not emergent, nor does it need to be.

Meanwhile we see LLMs being integrated into nearly everything, from browsers to social profiling companies (lexis nexis, palantir, etc) to email to local shopping centers and the legal system.

'Open' models cannot compete with the budgets of the big four. Though thank god they exist. But I expect serious regulation attempts soon.

My concerns with AI are manifold, and here on HN they are associated by some with paranoia or worse.

And it seems to me, many of the most knowledgeable and informed underestimate LLMs the most, while the ignorant conflate them to presently unrealistic degrees. But every which way I perceive this technology, I see epic, paradigm smashing, severe implications in every direction.

One thing of many that gets little attention is documentation vs reality regarding multiple aspects of AI, e.g. where the training vs privacy boundaries really are if anywhere. As they integrate more and more tightly with common everyday activities, they will learn more and more.

A random concern of mine is illustrated by the Xfinity microwave technology, which uses a router to visualize or process biological activity interacting with other wifi signals. Standalone, it's sensitive enough to distinguish animals from adult humans. Or take, for example, the Range-R, a handheld device sensitive enough to detect breathing through several walls. Well, mix this with AI and we get interesting times.

I could go on, or post essays, but such is not well received in this savage land.

The military's intervention with AI, aside from being objectively necessary or inevitable in some ways (ways I am not comfortable with), I find foreboding, or portending. I see very little discussion of the implications, so I figured I'd see if anyone had anything to say other than calling me a schizophrenic and criticizing my writing. *

*See comment history

show 1 reply
tempacct423 | today at 10:08 PM

I am in the minority here. But not supporting your own government's defense/war department seems rather unpatriotic and short-sighted.

We can argue all day long about supporting whichever admin is currently there and who is bad/good as determined by a few almighty elites in the tech world, but it seems irrational and short-sighted for a few tech elites to make decisions on behalf of the country.

Dario's latest interview made this crystal clear: he (and his EA cohort) feel that Congress is moving too slow and that they should determine what's good and bad for the country.

Like dude, is there anything at all you learned from the covid debacle through all the mess of the past few years? Like really, a tech guy is gonna coach the USA on what's right and wrong? Who are you to decide for the rest of us?

Techbros were wrong so many times (web3! crypto nonsense! Theranos! some $500 juice-squeezing machine! and all those Forbes 30 under 30 folks!)... what are the odds you'll turn out to be wrong again when you look back, say, a year from now? The most profit-making technologies of the last few years are Polymarket! and Kalshi! and short-term loans (with a twist of course)! (Not even LLMs, which are currently burning money.)

And what's this nonsense hatred of working for/with the defense/war dept of YOUR OWN COUNTRY?

In most of the rest of the world, this is pride! It makes a mockery of the poor kids who serve this country to protect your tech bro hype!

Why this whole (fake?) self-flagellation nonsense when pretty much everything we've got in the US thus far is due to the USD being backed by the most modern military superpower in history! Why be ashamed of this?

show 1 reply
