something i am missing in this area is education and services.
if, during an automated code review, claude finds a vulnerability in a dependency, where should i direct it to share the findings?
who would be willing to take the slop report and validate it?
i've never done vulnerability disclosure, but with opus at max effort i have found some security issues in popular frameworks/libraries i depend on.
a proper report can't be done in one pass; it has to validate that the issue is a real problem. but ask opus to do that and you run the risk of the api refusing the request, endangering your account status. push it to do it anyway and write the report, and now you're burning tokens on a report that's likely to be ignored as slop.
so i sit on this, and hope it doesn't hit me.
i'd be happy to use an official skill for vulnerability reporting
the skill would be manually triggered when vulnerabilities are found; it would do another pass for details (version, files, lines), then write a lightweight report and submit it somewhere. anthropic could host this, or work with h1 to do it. when the models have extra capacity, a process comes around and picks up these reports one by one, does another check (maybe with a proof-of-concept), and reports through proper channels.
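to make the "lightweight report" concrete, here's a minimal sketch of what the skill could emit — the field names and json shape are my assumptions, not any official schema:

```shell
# hypothetical report the skill might write; all field names are illustrative
cat > report.json <<'EOF'
{
  "dependency": "example-lib",
  "version": "1.2.3",
  "files": ["src/parser.c"],
  "lines": [142, 158],
  "summary": "possible out-of-bounds read when parsing untrusted input",
  "validated": false,
  "poc": null
}
EOF
# sanity-check that the report is valid json before queueing it
python3 -m json.tool report.json > /dev/null && echo "report ok"
```

the `validated` and `poc` fields start empty; the second-pass process described above would fill them in before anything goes through proper channels.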
It often takes a strong understanding of the upstream codebase and roadmap to write a good patch. It's easy enough to write a rough PoC and draft patch, but getting all the way through the cycle takes a bunch of time from both you and the maintainers (who are often already overloaded). My advice would be to draft a bunch privately, take one of the highest-impact ones all the way through to a deployed fix, and then plan based on what you learn. Some people's answer is to maintain private forks with automated fixes applied, with a periodic rebase on upstream.
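The private-fork-plus-rebase workflow is standard git. A self-contained sketch using local repos as stand-ins (repo names and commit messages are illustrative):

```shell
set -e
# local demo of the private-fork workflow; names and paths are illustrative
workdir=$(mktemp -d)
cd "$workdir"

# stand-in for the upstream project
git init -q -b main upstream
git -C upstream -c user.email=a@b -c user.name=demo \
    commit -q --allow-empty -m "upstream v1"

# your private fork, carrying an automated security fix as a local commit
git clone -q upstream fork
git -C fork -c user.email=a@b -c user.name=demo \
    commit -q --allow-empty -m "private security patch"

# upstream keeps moving
git -C upstream -c user.email=a@b -c user.name=demo \
    commit -q --allow-empty -m "upstream v2"

# periodic rebase: replay the private patch on top of the latest upstream
git -C fork fetch -q origin
git -C fork -c user.email=a@b -c user.name=demo rebase -q origin/main
git -C fork log --oneline
```

The rebase replays your fix on each upstream release; the cost is resolving conflicts whenever upstream touches the same code your patch does.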