
JacobAsmuth · last Thursday at 6:57 PM · 5 replies

So in general you think that making frontier AI models more offensive in black hat capabilities will be good for cybersecurity?


Replies

Uehreka · last Thursday at 7:12 PM

I’m not GP, but I’d argue that “making frontier AI models more offensive in black hat capabilities” is a thing that’s going to happen whether we want it or not, since we don’t control who can train a model. So the more productive way to reason is to accept that that’s going to happen and then figure out the best thing to do.

abigail95 · last Thursday at 7:08 PM

Does it shift the playing field toward bad actors in a way that other tools don't?

bilbo0s · last Thursday at 7:23 PM

Frontier models are good at offensive capabilities.

Scary good.

But the good ones are not open. It's not even a matter of money. At OpenAI, for instance, I know they're invite-only. Pretty sure there's vetting and tracking going on behind those invites.

artursapek · last Thursday at 7:03 PM

Of course. Bugs only get patched if they’re found.

tptacek · last Thursday at 7:39 PM

People in North America and Western Europe have an extremely blinkered and parochial view of how widely and effectively offensive capabilities are disseminated.