Hacker News

zhangchen · today at 12:18 PM

that's already happening tbh. the real issue isn't hypocrisy though, it's that maintainers reviewing their own LLM output have full context on what they asked for and can verify it against their mental model of the codebase. a random contributor's LLM output is basically unverifiable, you don't know what prompt produced it or whether the person even understood the code they're submitting.


Replies

hijnksforall956 · today at 1:15 PM

How is that different from before LLMs? You have no idea how the person came up with it, or whether they really understood it.

We are inventing problems here. Fact is, an LLM writes better code than 95% of developers out there today. Yes, yes, this is Lake Wobegon, everyone here is in the 1%. But for the world at large, I bet code quality goes up.
