Hacker News

We Will Not Be Divided

1141 points by BloondAndDoom, today at 12:54 AM, 426 comments

Comments

5o1ecist today at 4:59 AM

> We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

This is a trap. Two traps, I guess, but let's take the first one:

Domestic mass surveillance. Domestic.

Remember the eyes agreements: https://www.perplexity.ai/search/are-the-eyes-agreements-abo...

Expanding:

> These pacts enable member countries to share signals intelligence (SIGINT), including surveillance data gathered globally. Disclosures, notably from Edward Snowden in 2013, revealed that allies intentionally collect data on each other's citizens - bypassing domestic restrictions like the US ban on NSA spying on Americans - then exchange it.

Banning domestic mass surveillance is irrelevant.

The eyes-agreements allow them (respective participating countries) to share data with each other. Every country spies on every other country, with every country telling every other country what they have gathered.

This renders laws preventing The State from spying on its own citizens irrelevant. They serve only as evidence of mass manipulation.

thimabi today at 1:44 AM

The problem with forcing public policy on companies is that companies are ultimately made up of individuals, and surely you can’t force public policy down people’s throats.

I’m sure nothing good can come of strong-arming some of the brightest scientists and engineers the U.S. has. Such a waste of talent, trying to make them bend to the government’s wishes… instead of actually fostering innovation in the very competitive AI industry.

doodlebugging today at 2:52 AM

The best way for AI companies to fight this would be to remind those who request this capability that the AI knows exactly where they live, where they hang out, and that any one of them can also be targeted by a rogue AI system with no human in the loop. Capabilities that they are requesting could jeopardize them, their personal assets, and their families if something goes haywire or, in the much more common case, where the AI is used as an attack tool by an outside adversary who has gained unauthorized access.

All of this should remain a bridge too far, forever.

EDIT: It is one level of bad when someone hacks a database containing personal healthcare data on most Americans as happened not long ago. A few years back, the OPM hack gave them all they needed to know about then-current and former government employees and service members and their families. Wait until a state-sponsored actor finds their way into the surveillance and targeting software and uses that back door to eliminate key adversarial personnel or to hold them hostage with threats against the things they value most so that the adversary builds a collection of moles who sell out everything in a vain attempt to keep themselves safe.

Of course we already know what happens when an adversary employs these techniques and that is why we are where we are right now.

dang today at 2:09 AM

Here's the sequence (so far) in reverse order - did I miss any important threads?

Statement on the comments from Secretary of War Pete Hegseth - https://news.ycombinator.com/item?id=47188697 - Feb 2026 (31 comments)

I am directing the Department of War to designate Anthropic a supply-chain risk - https://news.ycombinator.com/item?id=47186677 - Feb 2026 (872 comments)

President Trump bans Anthropic from use in government systems - https://news.ycombinator.com/item?id=47186031 - Feb 2026 (111 comments)

Google workers seek 'red lines' on military A.I., echoing Anthropic - https://news.ycombinator.com/item?id=47175931 - Feb 2026 (132 comments)

Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1527 comments)

The Pentagon Feuding with an AI Company Is a Bad Sign - https://news.ycombinator.com/item?id=47168165 - Feb 2026 (33 comments)

The Pentagon threatens Anthropic - https://news.ycombinator.com/item?id=47154983 - Feb 2026 (125 comments)

US Military leaders meet with Anthropic to argue against Claude safeguards - https://news.ycombinator.com/item?id=47145551 - Feb 2026 (99 comments)

Hegseth gives Anthropic until Friday to back down on AI safeguards - https://news.ycombinator.com/item?id=47142587 - Feb 2026 (128 comments)

ArchieScrivener today at 2:46 AM

The USA showed itself during Covid to be a command economy that uses 'private enterprise' as a facade of legitimacy. Without government spending, employment, and contracts, the USA would have net negative growth.

Now the DoD, by far the largest budgetary expense for the taxpayer, wants us to believe they don't have a better AI than current industry? That is a double-edged admission: either they are exposing themselves again as economic decision makers, or admitting they spend money on routine BS with zero frontier war-fighting capabilities.

Either way, it is beyond time to reform the military and remove the majority of its leadership as incompetent stewards and strategists. That doesn't even include the massive security vulnerabilities in our supply chains, given military needs in various countries (Taiwan and Thailand).

largbae today at 5:30 AM

The signatories of this (letter, petition, whatever) are the same folks who profit from creating this Pandora's Box. If you don't want it opened, stop making it?

davidw today at 1:18 AM

"We hope our leaders will..." I realize things are moving quickly, and the stakes are high here, but thinking about what happens if the hopes are not met might be a next step.

kace91 today at 2:17 AM

Among other consequences, if Anthropic ends up being killed it’s going to be just another nail in the coffin of trust in America.

Companies who subscribed will find themselves without an important tool because the president went on a rant, and might wonder if it’s safe to depend on other American companies.

dataflow today at 2:36 AM

Why are the signing employees (at least the anonymous ones) trusting the creators of this website? What if it was set up by someone who wanted to gather a list of all the dissidents who would silently protest or leave the companies or whatever? Do you know whom you are going to hold accountable if it turns out these folks don't delete your verification data, or share it with your employer, or worse?

Also, another warning to anonymous users: it's a little naive to trust the "Google Forms" verification option more than the email one, given such employers probably have the ability to monitor anything you do on your device, even just loading the form. And in Google's case, they could obviously see what forms you submitted on the servers, too. If you're anonymous, you might as well use the alternate verification option.

Anyway - I'm not claiming it's likely that the website creator is malicious, but surely it's not beyond question? The website authors don't even seem to be providing others with the verification that they are themselves asking for.

P.S. I fully realize that raising these concerns might itself make fewer people sign the form, which may be unfortunate, but it seems worth a mention.

Meekro today at 2:06 AM

I've gathered that the dispute is over Anthropic's two red lines: mass surveillance and fully autonomous weapons. Is there any information (or rumors even) about what the specific request was? I can't believe the government would be escalating this hard over "we might want to do autonomous weapons in the vague, distant future" without a concrete, immediate request that Anthropic was denying.

Even if there was a desire for autonomous weapons (beyond what Anduril is already developing), I would think it would go through a standard defense procurement procedure, and the AI would be one of many components that a contractor would then try to build. It would have nothing to do with the existing contract between Anthropic and the Dept of War.

What, then, is this really about?

sourcegrift today at 6:15 AM

Cute. I will also sign this, since there are only upsides of good optics and no downsides. Let me know when any of them resigns after the companies do inevitably take the million-dollar contracts.

culi today at 2:57 AM

Before you leave a comment about how meaningless this is unless they do XYZ, please realize that there's likely a group chat out there somewhere where all of these concerns have already been raised and considered. The best thing you can do is ask how you, as an outsider, can help support these organizers.

rabbitlord today at 1:31 AM

I am not a fan of the Anthropic guys, but this time I stand with them. We all should.

david_shaw today at 2:32 AM

I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions -- but this is encouraging to see nonetheless.

I can't help but notice that Grok/X is not part of this initiative, though. I realize that frontier models are really coming from Anthropic, OpenAI, and Google, but it feels like someone is going to give in to these demands.

It's incredible how quickly we've devolved into full-blown sci-fi dystopia.

lightyrs today at 2:59 AM

» Have there been any mistakes in signature verification for this letter?

» We are aware of two mistakes in our efforts to verify the signatures in the form so far. One person who was not an employee of OpenAI or Google found a bug in our verification system and signed falsely under the name "You guys are letting China Win". This was noticed and fixed in under 10 minutes, and the verification system was improved to prevent mistakes like this from happening again. We also had two people submit twice in a way that our automatic de-duplication didn't catch. We do periodic checks for this. Because of anonymity considerations, all signatures are manually reviewed by one fallible human. We do our best to make sure we catch and correct any mistakes, but we are not perfect and will probably make mistakes. We will log those mistakes here as we find them.

txrx0000 today at 1:35 AM

This is why you can't gatekeep AI capabilities. It will eventually be taken from you by force.

It's time to open-source everything. Papers, code, weights, financial records. Do all of your research in the open. Run 100% transparent labs so that there's nothing to take from you. Level the playing field for good and bad actors alike, otherwise the bad actors will get their hands on it while everyone else is left behind. Start a movement to make fully transparent AI labs the worldwide norm, and any org that doesn't cooperate is immediately boycotted.

Stop comparing AI capabilities to nuclear weapons. A nuke cannot protect against or reverse the damage of another nuke. AI capabilities are not like nukes. General intelligence should not be in the hands of a few. Give it to everyone and the good will prevail.

Build a world where millions of AGIs run on millions of gaming PCs, where each AI is aligned with an individual human, not a corporation or government (which are machiavellian out of necessity). This is humanity's best chance at survival.

codepoet80 today at 1:12 AM

Nicely done. Hold this line — there’s got to be one somewhere.

_aavaa_ today at 2:02 AM

Yes: take these disparate sets of employees and, oh, I don't know, unionize while you still have power.

zahlman today at 5:56 AM

Is there a particular reason why the actual letter content requires JavaScript to load while everything else is readable?

conductr today at 4:19 AM

You can’t be silly enough to build a product that enables things like mass surveillance to proliferate and then try to take a stance of “please don’t use it like that”. You invented a genie and let him out of the bottle.

tomcam today at 4:39 AM

Please take this question at face value. I tend to be slightly pro defense department in this context, but it is not a strongly held belief.

As I understand it, Google has been doing massive amounts of business with the war department since nearly its inception. What makes this particular contract different? I really am trying to understand why these sentiments are surfacing now.

hedayet today at 5:24 AM

Just one thing - unless you're at a principal level or higher, don't quit as long as your conscience can take it. You'll be replaced by 10 other people overnight.

mitch-flindell today at 2:11 AM

The primary purpose of these products is mass surveillance. Why else would they be allowed to be built?

mortsnort today at 3:42 AM

Kneecapping the country's best AI lab seems like a bad way to win at the cyber.

Quarrel today at 4:30 AM

I know it is a serious topic, but before I clicked on it, I assumed this was going to be about prime numbers...

Maybe it can get reused after this stuff is over.

mythz today at 2:43 AM

These two exceptions shouldn't have to be disputed.

At this point I'd go so far as to say I wouldn't trust my AI history to any company that caves to DoD demands for mass domestic surveillance or fully autonomous weapons.

Your AI will know more about you than any other company does. I'm not going to trust that to anyone who trades ethics for profit.

latencyhawk today at 5:49 AM

Well, I think I will get the 200 sub.

rayiner today at 3:57 AM

This seems squarely within the purpose of the Defense Production Act: https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950

"Title I authorizes the President to identify specific goods as 'critical and strategic' and to require private businesses to accept and prioritize contracts for these materials."

If you invented a new kind of power source, and the government determined that it could be used to efficiently kill enemies, the government could force you to provide the product to them under the DPA. Why should AI companies get an exemption to that?

driverdan today at 2:52 AM

This is a nice gesture but completely meaningless. There is absolutely no commitment in it. "We hope our leaders..." has no conditions, no consequences.

If you're an employee and actually believe in this you need to commit to something, like resigning.

abhijitr today at 2:40 AM

The book "On Tyranny: 20 lessons from the 20th century" by the historian Timothy Snyder is an excellent read for these times. The very first lesson is "Do not obey in advance". It's about how authoritarian power often doesn't need to force compliance, people simply bend the knee in anticipation of being forced. This simply emboldens the authoritarians to go further.

I've been disappointed to see many businesses and institutions obeying in advance recently. I hope this moment wakes up the tech community and beyond.

bcooke today at 1:12 AM

I'd love to see this extended to any American regardless of past/present employment with Google or OpenAI

siliconc0w today at 3:26 AM

We need key AI researchers at these companies to speak out - execs will not care otherwise. If Jeff Dean made this a red line, it might matter.

focusgroup0 today at 2:47 AM

> domestic mass surveillance and autonomously killing people without human oversight

spoiler alert: this is already happening

do labs in China have a choice in the matter?

bottlepalm today at 2:12 AM

We all knew AI had the potential to be extremely powerful, and we all pursued it anyway. What did we think would happen? The government/military always takes control of the most powerful and dangerous systems. If you work for a defense contractor or under ITAR, then you already know this.

The right way to deal with this is political - corporate campaign contributions and lobbying. You're not going to be able to fight the military if they think they need something for national security.

himata4113 today at 1:25 AM

Does this mean there is a non-zero chance we will get some kind of Grok + Chinese model mix that's used across the entire US military? Ironic, isn't it?

ipaddr today at 3:58 AM

And people were wondering how OpenAI will find profitability.

MattDaEskimo today at 1:18 AM

This was a brave, heartwarming read. Thank you to the teams.

mftb today at 1:46 AM

Stand your ground.

trinsic2 today at 2:10 AM

I'm missing the actual letter. I think that part of the content is hidden behind some JavaScript. Can someone post it?

spuz today at 1:21 AM

They should be collecting signatures from employees at xAI. I think xAI is the most likely to fill the space left by Anthropic.

PostOnce today at 1:20 AM

My take is that none of the AI companies really care (companies can't care), they just realize that if they go down that road, public opinion will be so vehemently against AI in all forms that it will be regulated out of viability by the electorate.

Also, if AI exists, AI will be used for war. The AI company employees are kidding themselves if they think otherwise, and yet they are still building it (as opposed to resigning and working on something else), because in the end, money is the only true God in this world.

ReptileMan today at 5:49 AM

It is really nice to see employees creating the lists for the next round of layoffs themselves.

theahura today at 4:22 AM

OpenAI is nothing without its people

dmix today at 2:10 AM

Not using Claude only weakens the state. Just don't oblige.

yayr yesterday at 9:29 AM

It's good that there are still empathic humans in the decision and build chain when it comes to AI systems...

ripped_britches today at 2:33 AM

No surprise to have not heard anything from xAI

snickerbockers today at 2:43 AM

>We are the employees of Google and OpenAI, two of the top AI companies in the world.

Does this mean you dipshits are going to stop your own domestic surveillance programs? You sold your souls to the devil decades ago, don't pretend like you have principles now.

HWNDUS7 today at 6:08 AM

Sweet. Looking forward to another CTF season of He Will Not Divide Us.

I love performative acts of wealthy Silicon Valley drags.

chkaloon today at 5:09 AM

Too late
