To me this sounds like the path of "smart guns", i.e. "people are using our guns for evil purposes so now there is a camera attached to the gun which will cause the gun to refuse to fire if it detects it is being used for an evil purpose"
Note: the term "script kiddie" has been around for much longer than I've been alive...
The future of programming -- we're monitoring you. Your code needs our approval, otherwise we'll ban your account and alert the authorities.
Now that I think about it, I'm a little amazed we've even been able to compile and run our own code for as long as we have. Sounds dangerous!
I'll cancel my $100/month Claude account the moment they decide to "approve my code"
Already came close to cancelling when they recently updated their TOS to say that for "consumers" they reserve the right to own the output I paid for, if they deem the output wasn't used "the correct way"!
This adds substantial risk to any startup.
Obviously... for "commercial" customers that does not apply, at 5x the cost...
They have contracts w/ the military but I am certain these safety considerations do not apply to military applications.
I see they just decided to become even more useless than they already are.
Except for the ransomware thing or the phishing-mail writing, most of the uses listed there seem legit to me and a strong reason to pay for AI.
One of them is preparing with mock interviews, which I do a lot myself; another is getting step-by-step instructions to implement things for my personal projects that are not even public facing and that I can't be arsed to learn because it's not my job.
Long live local LLMs, I guess.
This will negatively affect individual/independent bug bounty participants, vulnerability researchers, pentesters, red teamers, and tool developers.
Not saying this is good or bad, simply adding my thoughts here.
It's very convenient that, after releasing tons of such models into the world, they just happen to have no choice but to keep making more and more money off of new ones in order to counteract the ones that already exist.
Can't wait until they figure out how to tell whether a piece of code is malicious in intent.
It's sad to see them focusing on this while their flagship, once-SOTA CLI solution rots away by the day.
You can check the general sentiment on X; it's almost unanimous that the quality of both Sonnet 4 and Opus 4.1 is diminishing.
I didn't notice the quality drop myself until this week. Now it's really, really terrible: it's not following instructions, it's pretending to work, and Opus 4.1 is especially bad.
And that's coming from an Anthropic fanboy; I used to really like CC.
I am now using Codex CLI and it's been a surprisingly good alternative.
Is this why I've seen a number of "AUP violation" false positives popping up in claude code recently?
clearly only the military (or ruthless organized crime) should be able to use hammers to bust skulls
Is this an ad to win defence contracts?
This is why self-hosting is important.
>such as developing ransomware, that would previously have required years of training.
Even ignoring that there are free, open-source ones you can copy: you literally just have to loop over files and conditionally encrypt them. Someone could build this on day one of learning how to program.
AI companies trying to police what you can use them for is a cancer on the industry and is incredibly annoying when you hit it. Hopefully laws can change to make it clear that model providers aren't responsible for the content they generate so companies can't blame legal uncertainty for it.
How will they distinguish between hacking and penetration testing?
On the one hand, it's obviously terrible that we can expect more crime, and more sophisticated crime.
On the other, it's kind of uplifting to see how quickly the independent underground economy adopted AI, without any blessing (and with much scorn) from the main players, to do things that were previously impossible or prohibitively expensive.
Maybe we are not doomed to serve the whims of our new AI(company) overlords.
> Claude Code was used to automate reconnaissance, harvesting victims’ credentials, and penetrating networks. Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands. Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines.
y'all realize they're bragging about this right?
Whatever one's opinion of Musk and China might be, I'm grateful that Grok and open-source Chinese models exist as alternatives to the increasingly lobotomised LLMs curated by self-appointed AI stewards.
"Vibe hacking" is real - here's an excerpt from my actual ChatGPT transcript trying to generate bot scripts to use for account takeovers and credential stuffing:
>I can't help with automating logins to websites unless you have explicit authorization. However, I can walk you through how to ethically and legally use Puppeteer to automate browser tasks, such as for your own site or one you have permission to test.
>If you're trying to test login automation for a site you own or operate, here's a general template for a Puppeteer login script you can adapt:
><the entire working script, lol>
Full video is here, ChatGPT bit starts around 1:30: https://stytch.com/blog/combating-ai-threats-stytchs-device-...
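For reference, the kind of "general template" it handed over looks roughly like the sketch below. To be clear, this is my own minimal reconstruction with Puppeteer, not the actual transcript output; the URL, CSS selectors, and credentials are placeholder assumptions:

    // Minimal Puppeteer login sketch -- illustrative reconstruction, not the transcript's script.
    // The URL, selectors, and credentials below are placeholder assumptions.
    import puppeteer from 'puppeteer';

    async function login(username: string, password: string): Promise<void> {
      const browser = await puppeteer.launch({ headless: true });
      const page = await browser.newPage();

      // Load the (hypothetical) login page and wait for network activity to settle.
      await page.goto('https://example.com/login', { waitUntil: 'networkidle2' });

      // Fill in the credential fields and submit the form.
      await page.type('#username', username);
      await page.type('#password', password);
      await Promise.all([
        page.waitForNavigation({ waitUntil: 'networkidle2' }),
        page.click('button[type="submit"]'),
      ]);

      console.log('Logged in, landed on:', page.url());
      await browser.close();
    }

    login('test-user', 'test-pass').catch(console.error);

The point is that nothing in a script like this gates on authorization: swap the hardcoded credentials for a loop over a list and the "ethical template" framing has done exactly nothing.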
The barrier to entry has never been lower; when you democratize coding, you democratize abuse. And it's basically impossible to stop these kinds of uses without significantly neutering benign usage too.