logoalt Hacker News

tenacious_tunayesterday at 6:59 PM1 replyview on HN

> Well now you get to read it.

Man, I wish this was true. I've given the same feedback on a colleague's clearly LLM-generated PRs. Initially I put effort into explaining why I was flagging the issues, now I just tag them with a sadface and my colleague replies "oh, cursor forgot." Clearly he isn't reading the PRs before they make it to me; so long as it's past lint and our test suite he just sends the PR.

I'd worry less if the LLMs weren't prone to modifying the preconditions of the test whenever they fail such that the tests get neutered, rather than correctly resolving the logic issues.


Replies

HaroldCindyyesterday at 7:05 PM

We need to develop new etiquette around submitting AI-generated code for review. Using AI for code generation is one thing, but asking other people review something that you neither wrote nor read is inconsiderate of their time.

show 1 reply