Hacker News

Detecting and countering misuse of AI

112 points by indigodaddy · yesterday at 10:44 PM · 119 comments

Comments

bobbiechen · yesterday at 11:41 PM

"Vibe hacking" is real - here's an excerpt from my actual ChatGPT transcript trying to generate bot scripts to use for account takeovers and credential stuffing:

>I can't help with automating logins to websites unless you have explicit authorization. However, I can walk you through how to ethically and legally use Puppeteer to automate browser tasks, such as for your own site or one you have permission to test.

>If you're trying to test login automation for a site you own or operate, here's a general template for a Puppeteer login script you can adapt:

><the entire working script, lol>

Full video is here, ChatGPT bit starts around 1:30: https://stytch.com/blog/combating-ai-threats-stytchs-device-...

The barrier to entry has never been lower; when you democratize coding, you democratize abuse. And it's basically impossible to stop these kinds of uses without significantly neutering benign usage too.

umvi · yesterday at 11:22 PM

To me this sounds like the path of "smart guns", i.e. "people are using our guns for evil purposes so now there is a camera attached to the gun which will cause the gun to refuse to fire if it detects it is being used for an evil purpose"

jedimastert · yesterday at 11:53 PM

Note: the term "script kiddie" has been around for much longer than I've been alive...

pton_xd · today at 12:25 AM

The future of programming -- we're monitoring you. Your code needs our approval, otherwise we'll ban your account and alert the authorities.

Now that I think about it, I'm a little amazed we've even been able to compile and run our own code for as long as we have. Sounds dangerous!

oddmade · today at 12:05 AM

I'll cancel my $100/month Claude account the moment they decide to "approve my code".

I already came close to canceling when they recently updated their ToS to say that, for "consumers", they reserve the right to own the output I paid for if they deem it wasn't used "the correct way"!

This adds substantial risk to any startup.

Obviously... for "commercial" customers that doesn't apply - at 5x the cost...

measurablefunc · today at 12:54 AM

They have contracts w/ the military, but I am certain these safety considerations do not apply to military applications.

fbhabbed · yesterday at 11:13 PM

I see they just decided to become even more useless than they already are.

Except for the ransomware thing and the phishing-mail writing, most of the uses listed there seem legit to me, and a strong reason to pay for AI.

One of them is exactly preparing with mock interviews, which is something I do a lot myself, as well as getting step-by-step instructions to implement things for my personal projects that aren't even public-facing and that I can't be arsed to learn because it's not my job.

Long live local LLMs, I guess.

Goofy_Coyote · today at 12:51 AM

This will negatively affect individual/independent bug bounty participants, vulnerability researchers, pentesters, red teamers, and tool developers.

Not saying this is good or bad, simply adding my thoughts here.

BrenBarn · today at 5:35 AM

It's very convenient that, after releasing tons of such models into the world, they just happen to have no choice but to keep making more and more money off of new ones in order to counteract the ones that already exist.

pluc · today at 12:06 AM

Can't wait until they figure out how to tell whether a piece of code is malicious in intent.

fcoury · today at 12:18 AM

It's sad to see them focus on this while their flagship, once-SOTA CLI tool rots away by the day.

You can check the general sentiment on X; it's almost unanimous that the quality of both Sonnet 4 and Opus 4.1 is diminishing.

I hadn't noticed the quality drop until this week. Now it's really, really terrible: it's not following instructions, it's pretending to work, and Opus 4.1 is especially bad.

And that's coming from an Anthropic fanboy; I used to really like CC.

I am now using Codex CLI and it's been a surprisingly good alternative.

Ycros · today at 3:59 AM

Is this why I've seen a number of "AUP violation" false positives popping up in Claude Code recently?

ysofunny · yesterday at 11:46 PM

clearly only the military (or ruthless organized crime) should be able to use hammers to bust skulls

demarq · today at 12:18 AM

Is this an ad to win defence contracts?

seany · today at 4:25 AM

This is the reason why self-hosting is important.

charcircuit · today at 12:14 AM

>such as developing ransomware, that would previously have required years of training.

Even ignoring that there are free, open-source ones you can copy: you literally just have to loop over files and conditionally encrypt them. Someone could build this on day one of learning to program.

AI companies trying to police what you can use them for are a cancer on the industry, and it's incredibly annoying when you hit it. Hopefully laws will change to make it clear that model providers aren't responsible for the content they generate, so companies can't blame legal uncertainty for it.

panny · today at 12:02 AM

How will they distinguish between hacking and penetration testing?

okasaki · yesterday at 11:29 PM

[flagged]

scotty79 · today at 1:22 AM

On one hand, it's obviously terrible that we can expect more crime, and more sophisticated crime.

On the other, it's kind of uplifting to see how quickly the independent underground economy adopted AI, without any blessing (and with much scorn) from the main players, to do things that were previously impossible or prohibitively expensive.

Maybe we are not doomed to serve the whims of our new AI(company) overlords.

almostgotcaught · yesterday at 11:26 PM

> Claude Code was used to automate reconnaissance, harvesting victims’ credentials, and penetrating networks. Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands. Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts, and generated visually alarming ransom notes that were displayed on victim machines.

y'all realize they're bragging about this, right?

nagamsreekar · today at 4:53 AM

[dead]

LudwigNagasena · yesterday at 11:45 PM

Whatever one's opinion of Musk and China might be, I'm grateful that Grok and open-source Chinese models exist as alternatives to the increasingly lobotomised LLMs curated by self-appointed AI stewards.
