Hacker News

cryptonector · today at 4:03 AM · 2 replies

Oh no! It's pouring PRs!

Come on. Maintainers can:

  - insist on disclosure of LLM origin
  - review what they want, when they can
  - reject what they can't review
  - use LLMs (yes, I know) to triage PRs
    and pick which ones need the most
    human attention and which ones can be
    ignored/rejected or reviewed mainly
    by LLMs
There are a lot of options.
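
For the LLM-triage option above, here is a minimal sketch of what that could look like, assuming the public GitHub REST API and the OpenAI Python SDK; the repo name, model, and labels are placeholders, not anything a real project ships:

  # LLM-assisted triage sketch: label open PRs so maintainers can spend
  # their limited review time where it matters most. A human still makes
  # the final call on every PR.
  import requests
  from openai import OpenAI

  GITHUB_REPO = "owner/project"   # placeholder
  client = OpenAI()               # expects OPENAI_API_KEY in the environment

  def open_pull_requests(repo):
      """Fetch open PRs via the public GitHub REST API."""
      resp = requests.get(
          f"https://api.github.com/repos/{repo}/pulls",
          params={"state": "open", "per_page": 50},
          headers={"Accept": "application/vnd.github+json"},
          timeout=30,
      )
      resp.raise_for_status()
      return resp.json()

  def triage(pr):
      """Ask an LLM for a coarse priority label based on title and description."""
      prompt = (
          "Classify this pull request as NEEDS_HUMAN_REVIEW, LLM_REVIEW_OK, "
          "or LIKELY_REJECT. Answer with the label only.\n\n"
          f"Title: {pr['title']}\n\nDescription:\n{pr.get('body') or '(empty)'}"
      )
      completion = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model
          messages=[{"role": "user", "content": prompt}],
      )
      return completion.choices[0].message.content.strip()

  for pr in open_pull_requests(GITHUB_REPO):
      print(f"#{pr['number']:>5}  {triage(pr):<20}  {pr['title']}")

The LLM only assigns a coarse label here; deciding what actually gets reviewed and merged stays with the maintainers.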

And it's not just open source. Guess what's happening in the land of proprietary software? YUP!! The same exact thing. We're all becoming review-bound in our work. I want to get to huge MR XYZ, but I have to review several other people's much larger MRs -- now what?

Well, we need to develop a methodology for working with LLMs. "Every change must be reviewed by a human" is not enough. I've seen incidents caused by ostensibly-reviewed but not actually understood code, so we must instead go with "every change must be understood by humans". Sometimes that means a plain review (when the reviewer is an SME and also an expert in the affected codebase(s)), and sometimes it means code inspection (much more tedious and exacting).

But it might also involve posting transcripts of the LLM conversations used to develop and, separately, to review the changes, with SMEs maybe doing lighter reviews when feasible, because we're going to have to scale our review capacity. We might need an even more detailed methodology, including writing and reviewing initial prompts, `CLAUDE.md` files, etc., so as to make it more likely that the LLM writes good code and that LLM reviews are sensible and catch the sorts of mistakes we expect humans to catch.
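
As one concrete, hypothetical example of that kind of methodology, a CI check could require every PR description to disclose LLM involvement and link the development and review transcripts. The field names below are invented for the sketch; adapt them to whatever template a project actually uses:

  # Hypothetical PR-template check: fail CI unless the PR description says
  # whether an LLM was involved and, if so, links both transcripts.
  # Field names are invented for this sketch.
  import re
  import sys

  REQUIRED_WHEN_LLM = ("Development transcript:", "Review transcript:")

  def check_pr_body(body):
      problems = []
      m = re.search(r"^LLM involvement:\s*(yes|no)\s*$", body, re.I | re.M)
      if not m:
          problems.append("missing 'LLM involvement: yes/no' line")
      elif m.group(1).lower() == "yes":
          for field in REQUIRED_WHEN_LLM:
              if field not in body:
                  problems.append(f"missing '{field}' for an LLM-assisted change")
      return problems

  if __name__ == "__main__":
      issues = check_pr_body(sys.stdin.read())
      for issue in issues:
          print(f"PR check failed: {issue}")
      sys.exit(1 if issues else 0)

It proves nothing about the code itself, but it at least makes the disclosure machine-checkable.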


Replies

JumpCrisscross · today at 4:14 AM

> Maintainers can...insist on disclosure of LLM origin

On the internet, nobody knows you're a dog [1]. Maintainers can insist on anything. That doesn't mean it will be followed.

The only realistic solution you propose is using LLMs to review the PRs. But at that point, why even have the OSS? If LLMs are writing and reviewing the code for the project, just point anyone who would have used that code to an LLM.

[1] https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_...

bigiain · today at 4:18 AM

Claiming maintainers can (do things that still take effort and time away from their OSS project's goals) is missing the point when the rate of slop submissions is ever increasing and malicious slop submitters refuse to follow project rules.

The Curl project refuses AI code and had to close their bug bounty program due to the flood of AI submissions:

"DEATH BY A THOUSAND SLOPS

I have previously blogged about the relatively new trend of AI slop in vulnerability reports submitted to curl and how it hurts and exhausts us.

This trend does not seem to slow down. On the contrary, it seems that we have recently not only received more AI slop but also more human slop. The latter differs only in the way that we cannot immediately tell that an AI made it, even though we many times still suspect it. The net effect is the same.

The general trend so far in 2025 has been way more AI slop than ever before (about 20% of all submissions) as we have averaged in about two security report submissions per week. In early July, about 5% of the submissions in 2025 had turned out to be genuine vulnerabilities. The valid-rate has decreased significantly compared to previous years."

https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...