Isn’t this a great use of LLMs?
Clone the repo into a sandbox and have the LLM determine whether the reported issues are real, then pick the appropriate response based on severity.
Wouldn’t be perfect but would have caught something like this.
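The triage workflow described above could be sketched roughly like this. Everything here is hypothetical: the severity table, the function names, and the `classify` argument, which stands in for the actual LLM call (a real one would prompt a model with the report plus relevant files from the checked-out source):

```python
import subprocess
import tempfile

# Hypothetical mapping from severity label to response.
SEVERITY_ACTIONS = {
    "critical": "page a maintainer immediately",
    "high": "open a tracked issue",
    "low": "queue for batch review",
    "invalid": "auto-close with explanation",
}

def clone_into_sandbox(repo_url: str) -> str:
    """Shallow-clone the repo into a throwaway directory for inspection."""
    sandbox = tempfile.mkdtemp(prefix="triage-")
    subprocess.run(
        ["git", "clone", "--depth", "1", repo_url, sandbox],
        check=True,
    )
    return sandbox

def triage(report: str, repo_path: str, classify) -> str:
    """Run the classifier (the LLM call, injected here) against the report
    and the checked-out source, then map its severity label to an action."""
    severity = classify(report, repo_path)
    return SEVERITY_ACTIONS.get(severity, "escalate to a human")

# Stub classifier standing in for the model, so the routing logic runs
# without any API access.
def fake_classify(report: str, repo_path: str) -> str:
    return "invalid" if "README" in report else "high"

action = triage("XSS in the README", "/path/to/sandbox", fake_classify)
```

The point of injecting `classify` is that the routing logic stays testable without a model in the loop; the sandbox clone just gives the classifier real source to check claims against.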
Humans + LLMs are really good at producing enough spam to overwhelm anything like this. There’s a reason curl bans LLM slop reports now.