I wonder how these offensive AI agents are being built. My guess: off-the-shelf open LLMs, fine-tuned to strip out the safety training, with an agentic loop thrown in.
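The guessed architecture boils down to a loop: feed the conversation to a model, let it pick a tool or declare it's done, run the tool, append the result, repeat. A toy sketch of that loop, with the model call stubbed out by a scripted stand-in (`fake_llm`, `run_tool`, and the `port_scan` tool are all hypothetical names for illustration; a real agent would call an actual LLM API and execute real tooling):

```python
def run_tool(name, arg):
    """Hypothetical tool dispatcher; a real agent would shell out to scanners etc."""
    tools = {"port_scan": lambda host: f"open ports on {host}: 22, 80"}
    return tools[name](arg)

def fake_llm(history):
    """Stand-in for a fine-tuned open LLM: returns a scripted action per turn."""
    script = [
        ("tool", "port_scan", "target.example"),   # turn 1: ask for a tool run
        ("done", "report: 2 open ports", None),    # turn 2: finish with a summary
    ]
    turns_taken = len([m for m in history if m[0] == "assistant"])
    return script[turns_taken]

def agent_loop(goal, max_steps=5):
    history = [("user", goal)]
    for _ in range(max_steps):
        action = fake_llm(history)                 # model decides the next step
        history.append(("assistant", action))
        if action[0] == "done":
            return action[1]
        history.append(("tool", run_tool(action[1], action[2])))  # feed result back
    return "step budget exhausted"

print(agent_loop("pentest target.example"))
```

The point is only that the "agentic" part is mundane: the capability lives in the model and the tools, not in the loop itself.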
Honestly, you can point regular Claude Code or Codex CLI at a web app, tell it to start a penetration test, and get surprisingly good results from the default configuration.