> Really I think this has only promoted their platform as being sane and moral.
I mean, maybe for people who aren't paying attention to how Claude's actually weaponized[1]?
This use case was neither "domestic mass surveillance" nor "autonomous weapons," since humans were in the loop:
> Old intelligence and AI? Behind the deadly attack on an Iranian girls’ school that left 175 dead
> The targets for Operation Epic Fury were identified with the aid of the National Geospatial-Intelligence Agency’s Maven Smart System, which folds in data from surveillance and intelligence, among other data points, and can lay out the information on a dashboard to support officials in their decision-making.
> Maven, created by Palantir, has been coupled with Anthropic’s Claude, a large language model that can vastly speed up that processing.
> Seth Lazar, who leads the Machine Intelligence and Normative Theory Lab at Australian National University, said the use of Claude to select military targets “should send chills down the spine of anyone who's been spending the last few months vibe-coding, vibe-researching, vibe-engineering.”
That doesn't sound sane and moral to me.
[1] https://www.independent.co.uk/news/world/americas/us-politic...