Hacker News

selimenes1 · today at 6:27 AM · 0 replies

The danpalmer comment really resonates. I've been in similar spots where AI-generated code passes tests and looks fine at first glance, but you don't have the mental model of why it works that way. That missing confidence is real, and I think it's the core issue with these low-effort PRs too — the submitter has no skin in the game and no real understanding of what the code actually does.

What's interesting is that this isn't entirely new. Before AI slop PRs, we had Hacktoberfest spam, drive-by typo-fix PRs that broke things, and copy-paste-from-Stack-Overflow contributions. The difference now is just the volume, and the fact that the code looks superficially more competent.

Honestly I think the most practical signal for maintainers is whether the contributor can answer a specific question about the change. Not "explain the PR" but something like "why did you choose X over Y here" or "what happens when Z edge case occurs." A human who wrote or at least deeply understood the code can answer that in seconds. Someone who just prompted and submitted cannot.