I understand the vision, but how does this work on a global scale? E.g. American employees refuse to build this, but China's don't.
Edit: I originally ended with "What would have happened if Germany had a nuclear bomb and America didn't?", but I think it distracted from the point I was trying to make, so I'm moving it to an edit. I'm not trying to ask "is the US the bad guy?". I'm trying to ask how to balance personal anti-war sentiments with the realities of the world (specifically, in this case, keeping up in an arms race).
Do the same thing we did with the nuclear arms race: Treaties to limit and control it.
Obviously, we would have had more political leverage if our leaders had started working on a treaty before they crossed enough moral red lines to start a tech revolt, but we did not elect the sort of leaders that would do that.
Not to worry, xAI would do it even if Google didn't.
Also, Anthropic didn't actually refuse to work on all military stuff. They have some conditions, which isn't the same thing.
Well, then military use of some US commercial AI systems will be subject to minimal restrictions while Chinese AI might not be.
Thus some people avoid having to see their work used for killing people or in mass surveillance, so that they're actually able to contribute to AI development instead of leaving the field.
The reason it works is that when you have fewer participants in an effort, you get slower progress in that endeavor. Brilliant employees who push their entire org not to support the development of bad things also prevent less brilliant employees from doing those bad things.
It's sort of like how computers are amazing but can also be a privacy nightmare. Software engineers don't help or coordinate with black-hat hackers, so black-hat hackers have a harder time refining their systems.
That’s exactly why I think the principled position is naive in the tragedy-of-the-commons situation we’re in. This isn’t a sci-fi story with a happy ending; it’s the Manhattan Project, and 70+ years ago, Nazi and Japanese data centers doing foundational model training would’ve been bombed to smithereens at any cost.
I'm going to give a shout out here to an episode of the excellent podcast Hardcore History, specifically Episode 59: The Destroyer of Worlds [1].
The development of the atomic bomb created a debate in American policy circles about how the US should react. Within a few years, the same debate occurred over developing thermonuclear weapons. The same question kept coming up: what if the enemy has these weapons and we don't?
Dan Carlin's position, which I happen to agree with, is that America chose wrong. It became both belligerent and paranoid to a degree that just wasn't the case before WW2. If you look up the history of regime changes at the hands of the US [2] then you can see it went into overdrive after 1945.
Part of the problem here I think is projection, the psychological phenomenon. It's also a cultural phenomenon. So, for example, when you have a historically oppressed people who are being potentially freed, the oppressors will fret that the formerly oppressed will rise up and kill them. This is projection.
We saw this exact thing play out with Emancipation. There was no mass revenge violence by the former slaves. If anything, there was more violence by the former oppressors against freed slaves, and a system that excused the violence (e.g. the Colfax massacre [3]).
I think nations can be guilty of this too. The US sees any other global power as a potential hegemonic, imperialist power that will dominate and exploit everyone around them because, well, that's what we do.
We also see this in how we view AI as a resource. We see it as something to be owned and gatekept such that some US company will become insanely wealthy further extracting every last dollar from every person on Earth.
So your comment betrays a common fear that China will displace us as a global hegemonic, imperialist power despite there being zero evidence that China behaves in that fashion. American propaganda runs deep and the projection is strong, so this will immediately cause some to say "but Tibet" or "but Taiwan" without really knowing anything about any of those situations.
As just one example, the One China policy is the official policy of the US, the EU, and almost every nation on Earth. "They might invade," I preemptively hear. They won't, partly because they can't, but really because they don't need to. If the world already has the One China policy, why do anything? And I say they can't because they genuinely lack the military capability. If you think otherwise, you don't know anything about war. Crossing 100 miles of ocean to invade an island defended by an army of over 500,000 is simply not possible.
Let me put it this way: the 17 or so miles of the English Channel stopped the German war machine despite its millions of soldiers.
Anyway, back to the point: this whole argument of "what if China does military AI?" is (IMHO) projection. If anything, China has shown that it won't allow a US tech company to control and gatekeep AI (e.g. by releasing DeepSeek). And if China gets AI, it's more than likely to use it to further raise people out of poverty and to automate away more menial jobs without making the displaced workers homeless.
[1]: https://www.dancarlin.com/product/hardcore-history-59-the-de...
[2]: https://en.wikipedia.org/wiki/United_States_involvement_in_r...
> American employees refuse to build this, but China's don't.
It's not American employees vs. Chinese employees. No need to villainize China at every opportunity. Most Chinese employees are more similar to American employees than you think.
It's that {top candidates who have their pick of employers} have the luxury to refuse to build this.
A mid-tier dev who can't land a job at any of the top AI companies, can code with Cursor, and is trying to pay their rent or medical bills will absolutely build AI for the military in return for having that rent paid.
This is regardless of whether it is in the US or China.
With current leadership, I think we're closer to Germany in this analogy.
Is there any reason to think that autonomous weapons are a critical strategic capability? It's hard to see what an unpiloted drone can do that a remotely piloted drone can't, other than perhaps human rights violations.
>American employees refuse to build this, but China's don't.
How about you articulate the threat an AI-powered China poses to people outside of it, and discuss potential methods to counter that threat, instead of insisting capabilities be developed just in case?
>is the US the bad guy
Yes
>I'm trying to ask how to balance personal anti war sentiments with the realities of the world
Insist on open information, never surrender consent willingly and demand justification for everything. As always.