If everyone is running the same models, does this not favour white hat / defense?
Since many exploits consist of several vulnerabilities used in a chain, if an LLM finds one in the middle and it's fixed, that can downgrade a zero day to something of more moderate severity?
E.g. someone finds a zero day that chains three vulns through different layers. The first and third are super hard to find, but the second is of moderate difficulty.
Automated checks by models that aren't even SOTA could very well find the moderate-difficulty vuln in the middle, breaking the chain.
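To make the argument concrete, here's a toy model (my own illustration, with made-up vuln names, not any real scanner or scoring system): an exploit chain only works if every link is still exploitable, so patching just the easiest-to-find link neutralizes the whole chain.

```python
def chain_exploitable(links):
    """A chained exploit works only if every link in it is exploitable."""
    return all(link["exploitable"] for link in links)

# Hypothetical three-link chain: two hard-to-find vulns around one
# moderate-difficulty vuln in the middle.
chain = [
    {"name": "auth bypass (hard to find)", "exploitable": True},
    {"name": "path traversal (moderate)", "exploitable": True},
    {"name": "sandbox escape (hard to find)", "exploitable": True},
]

assert chain_exploitable(chain)  # the zero day works end to end

# An automated scan finds and fixes only the moderate middle link...
chain[1]["exploitable"] = False

assert not chain_exploitable(chain)  # ...and the whole chain is broken
```

The remaining hard-to-find vulns still exist, but on their own they're worth less than the full chain, which is the "zero day becomes moderate severity" effect.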
> If everyone is running the same models, does this not favour white hat / defense?
The landscape is turbulent (so this comment might be outdated by the time I submit it), but one thing I’m catching between the lines is a resistance to providing defensive coding patterns because (guessing) they make the flaw they’re defending against obvious. When the flaw is widespread, those patterns effectively make it cheap for observant eyes to attack.
After seeing the enhanced capabilities recently, my conspiracy theory is that models do indeed traverse the pathways containing ideal mitigations, but fall back to common anti-patterns when they hit the guardrails. Some of the things I’ve seen are baffling, and have registered as adversarial on my radar.