Updated parent comment. Ideally, looking beyond this work and more generally, a funded AI would do the analysis and then dispatch tasks to qualified humans. A network of available qualified humans would have to exist for the AI to access. Those humans could then feed results back to the AI, continuing the loop with new tasks. Think Uber, but more generally: a way for AI to tap into real-world work and expertise.
That sounds dystopian as hell to me. Are security researchers willing to become interchangeable units of cognition, as devalued as Uber drivers? I hope not.
But your edit makes my original comment unnecessary. I was reacting to the jump to "we need AI to solve this!" when AI is still largely unproven marketing hype from a bunch of highly leveraged AI companies with manic gambler CEOs.