> Claude Code was used to automate reconnaissance, harvesting victims’ credentials, and penetrating networks. Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands. Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines.
y'all realize they're bragging about this, right?
and how is that different from a business running through their customer orders and writing psychologically targeted sales pitches... (in terms of malice)?
Literally any time an AI company talks about safety, they are doing marketing. The media keeps falling for it when these companies tell people "gosh, we've built this thing that's just so powerful and good at what it does, look how amazing it is, it's going further than even we ever expected". It's so utterly transparent, but people keep falling for it.
> y'all realize they're bragging about this, right?
Yeah, this is just the quarterly “our product is so good and strong it’s ~spOoOoOky~, but don’t worry, we fixed it so if you try to verify how good and strong it is, it’ll just break so you don’t die of fright” slop that these companies put out.
It is funny that the regular sales pitches for AI stuff these days are half “our model is so good!” and half “preemptively, we want to let you know that if the model is bad at something or just completely fails to function on an entire domain, it’s not because we couldn’t figure out how to make it work; it’s bad because we saved you from it being good”.
Some other posts on the blog: "How educators use Claude", "Anthropic National Security". They know what they're doing here, and good for them.