Hacker News

OpenAI agrees with Dept. of War to deploy models in their classified network

371 points by eoskx today at 2:59 AM | 213 comments

https://xcancel.com/sama/status/2027578652477821175

https://fortune.com/2026/02/27/openai-in-talks-with-pentagon...


Comments

Imnimo today at 3:36 AM

I don't see how OpenAI employees who have signed the We Will Not Be Divided letter can continue their employment there in light of this. Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement. The only plausible explanation is that there is an understanding that OpenAI will not, in practice, enforce the red lines.

blueblisters today at 5:07 AM

My knee-jerk reaction to this: it looks like the kind of opportunistic maneuver Sam is known for, and I'm considering canceling my subscriptions and business with OpenAI.

But what's the most charitable / objective interpretation of this?

For example - https://x.com/UnderSecretaryF/status/2027594072811098230

Does it suggest that the determination of "lawful use" and Dario's concerns fall upon the government, not the AI provider?

Other folks have claimed that Anthropic planned to burn the contentious redlines into Claude's constitution.

Update: I have cancelled my subscriptions until OpenAI clarifies the situation. From an alignment perspective, Anthropic's stand seems like the correct long-term approach, and at least some AI researchers appear to agree.

tintor today at 6:39 AM

The difference from Anthropic's deal:

- OpenAI is ok with use of their AI for autonomous weapons, as long as there is "human responsibility"

- Anthropic is not ok with use of their AI for autonomous weapons

quantumwannabe today at 4:27 AM

More details on the difference between the OpenAI and Anthropic contracts from one of the Under Secretaries of State:

> The axios article doesn’t have much detail and this is DoW’s decision, not mine. But if the contract defines the guardrails with reference to legal constraints (e.g. mass surveillance in contravention of specific authorities) rather than based on the purely subjective conditions included in Anthropic’s TOS, then yes. This, btw, was a compromise offered to—and rejected by—Anthropic.

https://x.com/UnderSecretaryF/status/2027566426970530135

> For the avoidance of doubt, the OpenAI - @DeptofWar contract flows from the touchstone of “all lawful use” that DoW has rightfully insisted upon & xAI agreed to. But as Sam explained, it references certain existing legal authorities and includes certain mutually agreed upon safety mechanisms. This, again, is a compromise that Anthropic was offered, and rejected.

> Even if the substantive issues are the same there is a huge difference between (1) memorializing specific safety concerns by reference to particular legal and policy authorities, which are products of our constitutional and political system, and (2) insisting upon a set of prudential constraints subject to the interpretation of a private company and CEO. As we have been saying, the question is fundamental—who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.

> It is a great day for both America’s national security and AI leadership that two of our leading labs, OAI and xAI have reached the patriotic and correct answer here

https://x.com/UnderSecretaryF/status/2027594072811098230

fiatpandas today at 5:50 AM

> human responsibility for the use of force, including for autonomous weapon systems

So there's the difference, and an erasure of a red line. OpenAI is fine with autonomous weapon systems. Requiring human responsibility isn't saying much: there are already military courts, rules of engagement, and international rules of war.

cube00 today at 3:45 AM

If the redlines are the same, how'd this deal get struck?

> ChatGPT maker OpenAI has the same redlines as Anthropic when it comes to working with the Pentagon, an OpenAI spokesperson confirmed to CNN.

https://edition.cnn.com/2026/02/27/tech/openai-has-same-redl...

push0ret today at 3:24 AM

So they agreed to the same red lines that had earlier led to the fallout with Anthropic? Kind of strange.

spprashant today at 5:01 AM

Just uninstalled the app and canceled my subscription. OpenAI can't justify their insane valuation without a user base, especially when there are capable models elsewhere.

Jcampuzano2 today at 3:49 AM

I would put bets on the issue being this: after it was pointed out that Anthropic's models were used to assist the raid in Venezuela, Anthropic aggressively doubled down on its rules and principles, and the DOD didn't like being called out on that, so they lashed out, hard.

If there's anything this admin doesn't like, it's being postured against or called out by literally anyone, especially in public.

pbnjay today at 4:51 AM

I had kept my Plus subscription just because I was lazy, and it was inexpensive and convenient… but this turn definitely helped me get off the fence. I am exporting and deleting my data now, and the cancellation is already done.

davidw today at 3:52 AM

We need some kind of group like "tech people with morals". I'm done with these people and their corruption and garbage.

deaux today at 3:26 AM

All OpenAI employees during the board revolt that vouched for sama's return are personally responsible.

ttrashh today at 5:11 AM

Cancel your subscription. It's the least you can do.

slibhb today at 4:18 AM

I'm unsure how to feel about this whole dust-up. It doesn't seem like much has changed in substance. Maybe OpenAI outmaneuvered Anthropic behind the scenes. Possibly Anthropic was seen as not behaving deferentially enough towards the government. But this administration has proven comically corrupt, so it wouldn't surprise me if money was involved. Will be interested to see what journalists turn up.

AbstractH24 today at 4:14 AM

It’s amazing how quickly the players keep shifting here.

Yesterday and the day before, sentiment seemed focused on "Anthropic selling out"; then it shifted to "Anthropic holds true to its principles in a David vs. Goliath" and "the industry will rally around one another for the greater good." But suddenly we're seeing a new narrative of "Evil OpenAI swoops in to make a deal with the devil."

Reminds me of that weekend when Sam Altman lost control of OpenAI.

mmanfrin today at 5:02 AM

Absolute disgrace of a person and organization.

deadbolt today at 5:33 AM

Choosing to go along with calling it the "Department of War" tells you all you need to know.

jordanscales today at 3:23 AM

This is awkward? https://news.ycombinator.com/item?id=47188473

jstummbillig today at 6:26 AM

> Surely if OpenAI had insisted upon the same things that Anthropic had, the government would not have signed this agreement.

Under normal circumstances, that would seem really plausible. But given how far Trump continues to go just out of spite and to project power, I suspect the opposite is true.

I am fully prepared to believe that they got absolutely nothing else out of it (to date).

corford today at 3:53 AM

If you're unhappy with this, an immediate way to signal it is with your wallet. In my case I've just uninstalled chatgpt from my phone, cancelled my subscription and will up my spend with anthropic.

rich_sasha today at 3:33 AM

Is the Pentagon signing a EULA confirming all their data will now be used, anonymised, for improving the service?

impulser_ today at 5:07 AM

For the people who don't understand how they got a deal with the same redlines: it's probably because OpenAI agreed not to question them. The safeguards are there, both parties agree; now fuck off and let us use your model how we see fit.

Anthropic probably made the mistake of questioning the military's activities related to Claude after the Venezuela mission and wanting reassurance that the model wouldn't be used to cross the redlines; the military didn't like this, told them "we aren't using your models unless you agree not to question us," and then the back and forth started.

In the end, we will probably have both OpenAI and Anthropic providing AI to the military, and that's a good thing. I don't think they will keep the supply chain risk on Anthropic for more than a week.

looksjjhg today at 6:22 AM

So it’s personal, basically.

iainctduncan today at 4:06 AM

Did anyone ever doubt sama would just follow the money?

weasels gonna weasel

jdiaz97 today at 5:10 AM

cancelling my openai subscription, they're gonna miss my 20 USD

operator_nil today at 3:42 AM

So does this mean that OpenAI will give whatever the DoD asks for and they will pinky swear that it won’t be used for mass surveillance and autonomous killing machines?

LarsDu88 today at 5:57 AM

China has evacuated its embassies in Iran.

This is really about the imminent strike on Iran which is now super telegraphed. They are gonna use ChatGPT for target selection, and the likely outcome is that it will fuck things up and a bunch of civilians are going to die because of this decision.

When this happens, Altman will go from being merely a grifter to having blood on his hands.

insane_dreamer today at 4:23 AM

I'm never using an OpenAI model or Codex ever again. Period. IDGAF whether it scores better than Claude on benchmarks or not.

This is a red line for me. It's clear OpenAI has zero values and will give Hegseth whatever he wants in exchange for $$$.

https://www.nytimes.com/2026/02/27/technology/openai-reaches...

saos today at 6:36 AM

Musk was 100% right about this guy.

AmericanOP today at 4:02 AM

Instant uninstall.

interestpiqued today at 3:46 AM

What a snake

dataflow today at 3:55 AM

This seems full of loopholes.

> The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.

(1) Well, did both sides sign the agreement and is it actually effective? Or is it still sitting on someone's desk until it can get stalled long enough?

(2) What does "agreement" even mean? Is it a legally enforceable contract, or just some sort of MoU or pinkie promise?

(3) If it's a legally enforceable contract, is it equally enforceable on all of their contracts, or just some? Do they not have existing contracts this would need to apply to?

(4) What does "reflects them in law and policy" even mean? Since when does the DoW make laws, and in what sense do those laws reflect whatever the agreement was? Are these laws he can point to so everyone else can see them? Can he at least copy-paste the exact sentences the government agreed to?

elAhmo today at 3:50 AM

All that money and not a single ounce of integrity.

straydusk today at 3:56 AM

I know the reaction to this, if you're a rational observer, is "OpenAI have cut corners or made concessions that Anthropic did not, that's the only thing that makes sense."

However, if you live in the US and pay a passing attention to our idiotic politics, you know this is right out of the Trump playbook. It goes like this:

* Make a negotiation personal

* Emotionally lash out and kill the negotiation

* Complete a worse or similar deal, with a worse or similar party

* Celebrate your worse deal as a better deal

Importantly, you must waste enormous time and resources to secure nothing of substance.

That's why I actually believe that OpenAI will meet the same bar Anthropic did, at least for now. Will they continue to, in the same way Anthropic would have? Seems unlikely, but we'll see.

m4rtink today at 4:23 AM

So this is indeed how OpenAI survives (a little bit longer?): a government bailout.

camillomiller today at 5:36 AM

Sam Altman is this. Sam Altman needs to be stopped.

hnthrowaway0315 today at 4:43 AM

Ah, is it the time when Skynet starts to manifest itself...

outside1234 today at 5:33 AM

Screw OpenAI. Never opening that app again or using one of their models.

mkozlows today at 4:02 AM

So there are two possibilities here:

1. There's no substantive change. Hegseth/Trump just wanted to punish Anthropic for standing up to them, even if it didn't get them anything else today -- establishing a chilling effect for the future has some value for them in this case, after all. And OpenAI was willing to help them do that, despite earlier claiming that they stood behind Anthropic's decisions.

2. There is a substantive change. Despite Altman's words, they have a tacit understanding that OpenAI won't really enforce those terms, or that they'll allow them to be modified some time in the future when attention has moved on elsewhere.

Either way, it makes Altman look slimy, and OpenAI has aligned with Trump against Anthropic in a place where Anthropic made a correct principled stand. It's been clear for a while that Anthropic has more ethics than OpenAI, but this is more naked than any previous example.

rvz today at 3:42 AM

Not a surprise here; that letter was a trap for the OpenAI employees who put their names on it. [0]

The ones who did might as well leave. But there was no open letter when the first military contract was signed. [1] Now there is one?

[0] https://news.ycombinator.com/item?id=47176170

[1] https://www.theguardian.com/technology/2025/jun/17/openai-mi...

0xfedbee today at 6:30 AM

Honestly not even surprised. What else could you expect from a zionist?

skygazer today at 4:54 AM

Perhaps Trump's DOD objects specifically to Anthropic's models themselves declining to do immoral and illegal things, and not to something merely stipulated in an ignorable contract. That would give Sam room to throw some public CYA into a contract while neutering model safety to their requirements.

robertwt7 today at 3:35 AM

How did the government agree with OpenAI to the same terms that Anthropic initially put forward? Surely there's a catch here. Or is it just Sam's negotiation skill?

d--b today at 4:10 AM

At this stage, everything OpenAI does is to try to keep investors investing.

They’re willing to let their brand go to trash for this government contract.

Pretty much every American is standing with Anthropic on this. No one, left or right, wants mass surveillance and terminators. In fact, no one in the world wants this except the US military.

But Altman seems so desperate to keep the cash coming he’s ready to do anything.

dakolli today at 4:01 AM

They're pretending they didn't enter into this agreement last January and aren't already completely entrenched in intelligence programs. They are trying to make it look like they are stepping up in a time of need (the DoD's time of need); in reality they sold their soul to intelligence and the military a year ago.

I posted about this here after Sam made his tweet:

https://news.ycombinator.com/item?id=47189756

Source: https://defensescoop.com/2025/01/16/openais-gpt-4o-gets-gree...

t0lo today at 3:47 AM

Snakes, as predicted.
