Hacker News

protocolture · today at 4:58 AM

>It's heartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

Their "Values":

>We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.

Read: They are cool with whatever.

>We support the use of AI for lawful foreign intelligence and counterintelligence missions.

Read: We support spying on partner nations, who will in turn spy on us using these same tools, providing the same data to the same people with extra steps.

>Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

Read: We are cool with fully autonomous weapons in the future. It will be fine once the success rate rises above an arbitrary threshold. It's not the targeting of foreign people that we are against, it's the possibility of costly mistakes that put our reputation at risk. How many people die standing next to the correct target is not our concern.

It's a nothingburger. These guys just want to keep their own hands slightly clean. There's not an ounce of moral fibre in here. It's fine for AI to kill people as long as those people are the designated enemies of the dementia-ridden US empire.


Replies

HDThoreaun · today at 5:01 AM

Their values are about AI safety. Geopolitically they couldn't care less. You might think it's a bad take, but at least they are consistent. AI safety people largely think that things like autonomous weapons are inevitable, so they focus on trying to align them with humanity.
