Hacker News

deepsquirrelnet · today at 3:03 PM · 6 replies · view on HN

The DoD's recent beef with Anthropic over Anthropic's right to restrict how Claude can be used is revealing.

> Though Anthropic has maintained that it does not and will not allow its AI systems to be directly used in lethal autonomous weapons or for domestic surveillance

Autonomous AI weapons are one of the things the DoD appears to be pursuing [1]. So bring back the Skynet people, because that's where we apparently are.

1. https://www.nbcnews.com/tech/security/anthropic-ai-defense-w...


Replies

chasd00 · today at 4:45 PM

Hasn't Ukraine already proved out autonomous weapons on the battlefield? There was a NYT podcast a couple of years ago where they interviewed a higher-up in the Ukrainian military, who said it's already in place with FPV drones: loitering, target identification, attack, the whole nine yards.

You don't need an LLM to do autonomous weapons; a modern Tomahawk cruise missile is pretty autonomous already. The only change to a modern Tomahawk would be adding parameters for what the target looks like and tasking the missile with identifying a target itself. The missile pretty much does everything else already (flying, routing, etc.).

show 2 replies
nradov · today at 5:05 PM

The DoD was pursuing autonomous AI weapons decades ago, and succeeded as early as 1979 with the Mk 60 CAPTOR mine.

https://www.vp4association.com/aircraft-information-2/32-2/m...

The worries over Skynet and other sci-fi apocalypse scenarios are so silly.

show 1 reply
nightski · today at 3:39 PM

If you ever doubted it you were fooling yourself. It is inevitable.

show 2 replies
georgemcbay · today at 5:37 PM

> Autonomous AI weapons is one of the things the DoD appears to be pursuing. So bring back the Skynet people, because that’s where we apparently are.

This situation legitimately worries me, but it isn't even really the SkyNet scenario that I am worried about.

To self-quote a reply I made recently in another thread (https://news.ycombinator.com/item?id=47083145#47083641):

When AI dooms humanity it probably won't be because of the sort of malignant misalignment people worry about, but rather just some silly logic blunder combined with the system being directly in control of something it shouldn't have been given control over.

I think we have less to worry about from a future SkyNet-like AGI system than from a modern or near-future LLM, with all its limitations, making a very bad oopsie with significant real-world consequences because it was allowed to control a system capable of real-world damage.

I would have probably worried about this situation less in times past when I believed there were adults making these decisions and the "Secretary of War" of the US wasn't someone known primarily as an ego-driven TV host with a drinking problem.

show 1 reply
bigyabai · today at 6:11 PM

It turned out that the Pentagon just ignored Anthropic's demands anyway: https://www.wsj.com/politics/national-security/pentagon-used...

I really doubt that Anthropic is in any position to make those decisions, regardless of how they feel.

show 1 reply
zer00eyz · today at 3:47 PM

> Autonomous AI weapons

In theory, you can do this today, in your garage.

Buy a quadcopter as a kit (cheap).

Figure out how to arm it (the trivial part).

Grab YOLO, tuned for person detection. Grab any of the off-the-shelf facial recognition libraries. You can mostly run this on phone hardware, and if you strip out the radios, possibly for days on a charge.

The shim you have to write: software to fly the drone into the person... and that's probably already out there somewhere as well.

The tech to build "Screamers" (see: https://en.wikipedia.org/wiki/Screamers_(1995_film)) already exists, is open source, and can be very low power (see: https://www.youtube.com/shorts/O_lz0b792ew).

show 2 replies