Hacker News

cube00 · yesterday at 11:41 PM · 17 replies

From that same X thread: "Our agreement with the Department of War upholds our redlines" [1]

OpenAI has the same redlines as Anthropic based on Altman's statements [2]. Yet somehow Anthropic gets banished for upholding its redlines while OpenAI ends up with the cash?

[1]: https://xcancel.com/OpenAI/status/2027846013650932195#m

[2]: https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic...


Replies

AlexVranas · today at 12:15 AM

OpenAI is playing games.

When Anthropic says they have red lines, they mean "We refuse to let you use our models for these ends, even if it means losing nearly a billion dollars in business."

When OpenAI says they have red lines, they mean "We are going to let the DoD do whatever the hell they want, but we will shake our fist at them while they do it."

That's why they got the contract. The DoD was clear about what they wanted, and OpenAI wasn't going to get anywhere without agreeing to that. They're about as transparent as Mac from It's Always Sunny in Philadelphia when he's telling everyone he's playing both sides.

nkassis · yesterday at 11:48 PM

OpenAI's post about their contract has the "redlines" described, and they don't match what Anthropic wanted (even if the text tries to imply they do).

https://openai.com/index/our-agreement-with-the-department-o...

jrochkind1 · today at 5:46 AM

Anthropic wanted to put those restrictions in the contract. OpenAI said they'll just trust their own "guardrails" in the training; they don't need it in the contract. (I'm not sure I believe "guardrails" can prevent mass surveillance of civilians?)

Very gracious of OpenAI to say Anthropic should not be designated a supply chain risk after sniping their $200 million contract by being willing to contractually let the government do whatever they like without restrictions.

Wowfunhappy · yesterday at 11:58 PM

> However somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?

The current administration is so incompetent that I find this perfectly believable.

I imagine the government signed with OpenAI in order to spite Anthropic. The terms wouldn't actually matter that much if the purpose was petty revenge.

I don't know if that's actually what happened here, I just find it plausible.

jellyroll42 · today at 1:27 AM

Sam Altman has no scruples. Dark Triad personality. No reason to believe anything he says.

_heimdall · today at 2:19 AM

Anthropic demanded that the redlines be defined in the contract. OpenAI and others are hiding behind the veil of what is "lawful use" today. They aren't defining their own redlines, and they are ignoring the executive branch's authority to change what is "lawful" tomorrow.

Nevermark · today at 12:07 AM

> more stringent safeguards than previous agreements, including Anthropic's.

Except they are not "more stringent".

Sam Altman is being brazen to say that.

In their own agreement as Altman relays:

> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control

> any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing

> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives

> The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

I don't think their take is completely unreasonable, but it doesn't come close to Anthropic's stance. They are not putting their neck out to hold back any abuse - despite many of their employees requesting a joint stand with Anthropic.

Their wording gives the DoD carte blanche to do anything it wants, as long as it adopts a rationale that it is obeying the law. That is already the status quo. And we know how that goes.

In other words, no OpenAI restriction at all.

That is not at all comparable to a requirement the DoD agree not to do certain things (with Anthropic's AI), regardless of legal "interpretation" fig leaves. Which makes Anthropic's position much "more stringent". And a rare and significant pushback against governmental AI abuse.

(Altman has a reputation for being a Slippery Sam. We can each decide for ourselves if there is evidence of that here.)

kelnos · today at 4:06 AM

The red lines are not the same.

Anthropic refuses to allow their models to be used for any mass surveillance or fully-automated weapons systems.

OpenAI only requires that the DoD follows existing law/regulation when it comes to those uses.

Unfortunately, existing law is more permissive than Anthropic would have been.

827a · today at 12:04 AM

My understanding of the difference, influenced mostly by consuming too many anonymous tweets on the matter over the past day (so it could be entirely incorrect): Anthropic wanted an active kill switch in the loop to stop usage that went against the terms of use (maybe a system prompt-level block, maybe monitoring systems, humans with that authority, etc.). OpenAI's position was more like "if you break the contract, the contract is over," without going so far as to say they'd immediately stop service (maybe there's an offboarding period, a transition of service, etc.).

bastawhiz · today at 1:53 AM

Altman donated a million to the Trump inauguration fund. Brockman is the largest private MAGA donor. You don't have to be a rocket scientist to understand what's going on here.

softwaredoug · today at 12:46 AM

The difference is that Anthropic wants contractual limitations on usage, explicitly spelling out cases like mass surveillance.

OpenAI instead settles for an understanding that the technology's use will follow the law.

There may not be explicit laws covering the cases Anthropic wanted to limit. Or at least it's open to judicial interpretation.

The actual solution is Congress should stop being feckless and imbecilic about technology and create actual laws here.

rootusrootus · yesterday at 11:45 PM

Exactly. What are we not being told? There is some missing element in the agreement, or the reasoning for the action against Anthropic is unrelated to the agreement.

amelius · today at 1:13 AM

There will be a lawsuit about this.

Analemma_ · yesterday at 11:55 PM

It's probably a combination of "Altman is simply lying" (as he has been repeatedly known to do) and "the redlines in OpenAI's contract are 'mass surveillance' and 'autonomous killbot' as defined by the government and not the vendor". Which, of course, effectively means they don't exist.

slibhb · today at 1:43 AM

It's almost like the Trump administration wanted to switch providers and this whole debate over red lines was a pretext. With this administration, decisions often come down to money. There are already reports that Brockman and Altman have either donated or promised large sums of money to Trump or Trump super PACs.
