You see, the dream is another AI that reads the report and writes the issue in the bug tracker. Then another AI implements the fix. A third AI then reviews the code and approves and merges it. All without human interaction! Once CI releases the fix, the first AI can then find the same vulnerability plus a few new and exciting ones.
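To make the loop concrete, here is a deliberately toy sketch of that fantasy pipeline. Every function name here is hypothetical (these agents don't exist); the stubs exist only to show the shape of the loop being described.

```python
# A hypothetical sketch of the "dream" pipeline. All function names
# are made up for illustration; each agent is a stub, not a real tool.

def find_vulnerability(codebase: str) -> str:
    """Agent 1: scans the code and produces a vulnerability report."""
    return f"report: something exploitable in {codebase}"

def write_issue(report: str) -> str:
    """Agent 1 again: files the report as a bug-tracker issue."""
    return f"issue filed for ({report})"

def implement_fix(issue: str) -> str:
    """Agent 2: writes a patch for the issue."""
    return f"patch for ({issue})"

def review_and_merge(patch: str) -> bool:
    """Agent 3: reviews the patch and approves the merge."""
    return True  # always approves, which is rather the point

def ci_release(patch: str) -> str:
    """CI ships the 'fixed' codebase."""
    return f"codebase after {patch}"

codebase = "v1.0"
for _ in range(3):  # each release feeds the next round of findings
    issue = write_issue(find_vulnerability(codebase))
    patch = implement_fix(issue)
    if review_and_merge(patch):
        codebase = ci_release(patch)
    print(codebase)
```

Note that nothing in this loop ever checks whether any step was actually correct; each agent simply trusts the output of the one before it.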
This is completely absurd. If generating code is reliable, you can have one generator make the change and then merge and release it with conventional tooling, no further AIs required.
If it's not reliable, how can you trust the written issue to be correct, or the review? Chaining unreliable steps doesn't make the result reliable; it just obscures where the errors enter. So what does this pipeline buy you over blindly merging whatever changes the model produces?