Just use an LLM to weed them out. What’s so hard about that?
If AI can't be trusted to write bug reports, why should it be trusted to review them?
How would that work when LLMs are the ones producing the incorrect reports in the first place? Have a look at the actual HackerOne reports and their comments.
The problem is the sheer stupidity of the people involved. They use LLMs to try to convince the author of curl that he is wrong to call the report hallucinated. Instead of generating ten LLM comments and doubling down on an incorrect report, they could use a bit of brain power to actually validate it. It doesn't even require much skill; you just have to test it manually.
At this point it's impossible to tell if this is sarcasm or not.
Brave new world we've got here.
Set a thief to catch a thief.
Because LLMs are bad at reviewing code for the same reasons they are bad at writing it? They get fooled by clean-looking syntax and take long descriptions and comments at face value, without considering the wider context.
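To make that concrete, here's a contrived C sketch (nothing to do with curl's actual code, purely an illustration): the code is tidy, the comment sounds authoritative, and the bounds check is still off by one. A reviewer that trusts the comment and the clean shape of the code signs off; the bug only shows up if you reason about the boundary case, or actually run it.

    #include <stdio.h>
    #include <string.h>

    /* Contrived example: tidy code with a confident comment that a
     * skimming reviewer (human or LLM) is likely to take at face value. */
    static void log_header(const char *value)
    {
        char buf[64];
        size_t len = strlen(value);

        /* "Safe: we never copy more than the buffer can hold." */
        if (len <= sizeof(buf)) {   /* off by one: should be len < sizeof(buf) */
            memcpy(buf, value, len);
            buf[len] = '\0';        /* writes one byte past buf when len == 64 */
            printf("header: %s\n", buf);
        }
    }

    int main(void)
    {
        log_header("content-type: text/plain");  /* fine */
        /* a 64-character value triggers the one-byte overflow */
        log_header("0123456789012345678901234567890123456789"
                   "012345678901234567890123");
        return 0;
    }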