
novalis78 · today at 7:12 AM · 5 replies

Just use an LLM to weed them out. What’s so hard about that?


Replies

GalaxyNova · today at 7:17 AM

Because LLMs are bad at reviewing code for the same reasons they are bad at writing it? They get tricked by fancy, clean syntax and take long descriptions and comments at face value without considering the greater context.

bootsmann · today at 7:17 AM

If AI can't be trusted to write bug reports, why should it be trusted to review them?

f311a · today at 7:24 AM

How would that work if LLMs produce incorrect reports in the first place? Have a look at the actual HackerOne reports and their comments.

The problem is the complete stupidity of people. They use LLMs to try to convince the author of curl that he is wrong to call a report hallucinated. Instead of generating ten LLM comments and doubling down on an incorrect report, they could use a bit of brain power to actually validate it. That doesn't even require much skill; you just have to test it manually.

eqvinox · today at 7:15 AM

At this point it's impossible to tell if this is sarcasm or not.

Brave new world we've got here.

vee-kay · today at 7:16 AM

Set a thief to catch a thief.