This would be impressive even as a human-assisted project.
But there's a claim that it is unsupervised, which I doubt. Notice how these two claims contradict each other:
>"XBOW is a fully autonomous AI-driven penetration tester. It requires no human input, "
>"To ensure accuracy, we developed the concept of validators, automated peer reviewers that confirm each vulnerability XBOW uncovers. Sometimes this process leverages a large language model; in other cases, we build custom programmatic checks."
I mean, I doubt you deploy this thing to collect thousands of dollars in bounties while you sit there twiddling your thumbs. Whatever work you put into the AI, whether fine-tuned or generic and reusable, counts as supervised, and that's ok. Take the win; don't try to sell the automated dream to get investors or whatever, and don't get caught up in fraud.
As I understand it, when you discover a type of vulnerability, it's very common to automate the detection and go find other clients with the same vulnerability (a sketch of what that automation looks like is below). These windows are usually short-lived and the well dries up fast, so you need to constantly stay on top of the latest trends. I just don't buy that if you left this thing unattended for even 3 months it would keep finding gold; that's a property of the engineers, and it is not scalable (and that's ok).
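To be concrete about what "automate the detection" means in practice: once a bug class is understood, the scan itself is trivial to mass-produce. A rough sketch, where the path and fingerprint string are hypothetical stand-ins for whatever the latest bug class leaks:

```python
import requests

# Hypothetical fingerprint for a known bug class: a debug endpoint that
# leaks configuration. The path and marker are placeholders; the real
# value is knowing *what* to look for this month.
VULN_PATH = "/debug/config"
MARKER = "DATABASE_PASSWORD"

def scan(hosts: list[str]) -> list[str]:
    """Return the subset of hosts that appear to expose the known bug."""
    hits = []
    for host in hosts:
        try:
            resp = requests.get(f"https://{host}{VULN_PATH}", timeout=5)
            if resp.status_code == 200 and MARKER in resp.text:
                hits.append(host)
        except requests.RequestException:
            continue  # unreachable hosts are skipped, not retried
    return hits

if __name__ == "__main__":
    print(scan(["example.com", "test.example.org"]))
```

The loop scales; discovering the next MARKER is the part that doesn't, and that's the part the engineers keep doing.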