Hacker News

randerson · yesterday at 1:50 AM

Hypothetically, if we had a choice between sending in humans to war or sending in fully autonomous drones that make decisions on par with humans, the moral choice might well be the drones - because they don't put our service members at risk.

Obviously, anyone who has used LLMs knows they are not on par with humans. There also needs to be an accountability framework for when software makes the wrong decision: who gets fired if an LLM hallucinates and kills people? Perhaps Anthropic's stance is meant to avoid liability if that were to happen.


Replies

singron · yesterday at 3:39 AM

It's sort of like the opposite of this idea:

https://en.wikipedia.org/wiki/Roger_Fisher_(academic)#Preven...

> Fisher [...] suggested implanting the nuclear launch codes in a volunteer. If the President of the United States wanted to activate nuclear weapons, he would be required to kill the volunteer to retrieve the codes.

>> [...] The volunteer would carry with him a big, heavy butcher knife as he accompanied the President. If ever the President wanted to fire nuclear weapons, the only way he could do so would be for him first, with his own hands, to kill one human being. [...]

>> When I suggested this to friends in the Pentagon they said, "My God, that's terrible. Having to kill someone would distort the President's judgment. He might never push the button."

> — Roger Fisher, Bulletin of the Atomic Scientists, March 1981[10]

saulpw · yesterday at 1:57 AM

The danger is that we won't be sending these fully autonomous drones to 'war' - they'll be deployed anytime a person in power feels like assassinating a leader or taking out a dissident, without having to make a big deal out of it. The reality is that AI will be used not merely as a weapon, but as an accountability sink.

zarzavat · yesterday at 3:42 AM

This is exactly how all other weapons of mass destruction were rationalised.

"If we develop <terrible weapon> we can save so many lives of our soldiers". It always ends up being used to murder civilians.

gzread · yesterday at 1:31 PM

Our drones will fight their drones; whichever side loses will then have its humans fighting the other side's drones, and if those humans somehow win, they will fight the other side's humans. War doesn't have an agreed ending condition.

datsci_est_2015 · yesterday at 2:35 AM

> Hypothetically, if we had a choice between sending in humans to war or sending in fully autonomous drones that make decisions on par with humans, the moral choice might well be the drones - because they don't put our service members at risk.

I guess let the record state that I am deeply morally opposed to automated killing of any kind.

I am sick to my stomach when I really try to put myself in the shoes of the indigenous peoples of Africa who were the first victims of highly automatic weapons, "machine guns" or "Gatling guns". The asymmetry was barbaric. I do hope that there is a hell, simply so that those who made the decision to execute those peoples en masse have a place to rot in eternal hellfire.

To even think of modernizing that scene of inhumane depravity with AI is despicable. No, I am deeply opposed to automated killing of any kind.

unethical_ban · yesterday at 1:53 AM

Isn't this the moral hazard of war as it becomes more of a distance sport? That powerful governments can order the razing of cities and assassinate leaders with ease?

We need to do it because our enemies are doing it, in any case.

the_af · yesterday at 5:09 AM

> the moral choice might well be the drones - because they don't put our service members at risk.

Not so clear cut. Because sending people to die in distant wars is likely to get a negative reaction at home, it creates some sort of impediment to waging war. Sometimes not enough, but it's not nothing. Sending your boys to die for fuck knows what.

If you're just sending AI-powered drones, it lowers the threshold for war tremendously, which in my mind is not "the moral choice".

All of this assuming AI is as good as humans.

jmward01 · yesterday at 2:22 AM

War is not moral. It may be necessary, but it is never moral. The best choice is to fight, at every turn, anything that makes war easy. Our adversaries will go the autonomous route, or likely already have. We should be doing everything we can to put major blockers on this, similar to efforts to block chemical, biological, and nuclear weapons. The logical end of autonomous targeting and weapons is near-instant mass-killing decisions, so at a minimum we should place autonomous weapons in a similar class to those: autonomy is itself a weapon of mass destruction. But we currently don't think that way, and that is the problem.

Eventually, unfortunately, we will build these systems, but it is weak to argue that the technology isn't ready right now and that is why we won't build them. No matter when these systems come online there will be collateral damage, so there will never be a right time from a technology standpoint. Anthropic is making that weak argument, and that is primarily what I am dismissive of. The argument that needs to be made is that we aren't ready as a society for these weapons. The US government hasn't done the work to prove it can handle them. The US people haven't proven we are ready to understand their ramifications. So, in my view, Anthropic shouldn't be arguing that the technology isn't ready - no weapon of war is ever clean, and your hands will be dirty no matter how well you craft the knife. Instead, Anthropic should be arguing that we aren't ready as a society, and that is why they won't support them.

fwip · yesterday at 1:55 AM

I think it's the opposite. The human cost of war is part of what keeps the USA from getting into wars more than it already does - no politician wants a second Vietnam.

If war is safe to wage, then it just means we'll do it more and kill more people around the globe.

jakelazaroff · yesterday at 2:02 AM

What do you mean, "hallucinates and kills people"? Killing people is the thing the military is using them for; it's not some accidental side effect. It's the "moral choice" the same way a cruise missile is — some person half a world away can lean back in their chair, take a sip of coffee, click a few buttons and end human lives, without ever fully appreciating or caring about what they've done.

mulmen · yesterday at 2:49 AM

Doesn’t this just lower the bar on going to war? Putting real lives on the line makes war a costly last resort.
