So in general you think that making frontier AI models better at offensive, black-hat capabilities will be good for cybersecurity?
Does it shift the playing field towards bad actors in a way that other tools don't?
Frontier models are good at offensive capabilities.
Scary good.
But the good ones are not open. It's not even a matter of money. At OpenAI, for instance, I know they're invite-only, and I'm pretty sure there's vetting and tracking going on behind those invites.
Of course. Bugs only get patched if they’re found.
People in North America and Western Europe have an extremely blinkered and parochial view of how widely and effectively offensive capabilities are already disseminated.
I’m not GP, but I’d argue that “making frontier AI models better at offensive, black-hat capabilities” is going to happen whether we want it or not, since we don’t control who can train a model. So the more productive way to reason is to accept that it’s going to happen and then figure out the best thing to do given that.