Hacker News

Groxx · yesterday at 5:29 PM

from what I've seen in a several-thousand-eng company: LLMs generally produce vastly more code than is necessary, so they quickly outpace human coders. they could easily be producing half or more of all the code even if only 10% of the teams use them, particularly because huge changes often get approved with just a "lgtm", and LLM-coding teams also often use/trust LLMs for reviews.

but they do that while making the codebase substantially worse for the next person or LLM: bloated code, inconsistent behavior, duplicates of duplicates of duplicates strewn everywhere with little to no pattern, so you might have to fix something a dozen times in a dozen ways for a dozen reasons before it actually works, and nothing handles it efficiently.
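to make the "fix it a dozen times" point concrete, here's a hypothetical sketch (names and behavior invented for illustration, not from any real codebase): three near-duplicate helpers that each "normalize" a user ID slightly differently, so the same bug has to be found and fixed in every copy.

```python
# Hypothetical example of duplicate drift: three helpers doing "the same"
# thing, scattered across modules, each with a different subset of fixes.

def normalize_user_id(raw):
    # original helper: lowercases only
    return raw.lower()

def normalize_uid(raw):
    # duplicate added later elsewhere: also strips whitespace
    return raw.strip().lower()

def canonical_user_id(raw):
    # yet another duplicate: strips, but forgets to lowercase
    return raw.strip()

# The same input yields three different "canonical" IDs:
print(normalize_user_id("  Alice "))  # '  alice '
print(normalize_uid("  Alice "))      # 'alice'
print(canonical_user_id("  Alice "))  # 'Alice'
```

any caller depending on one variant breaks when "the" bug is fixed in another, which is the inconsistency being described.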

the only thing that matters in a business is value produced, and I'm far from convinced they'd even break even in most cases if they were free. they're burning the future with tech debt, on the hope that future models will be able to handle it where humans cannot, which does not seem true at all to me.


Replies

HardCodedBias · yesterday at 5:34 PM

Measuring the value is very difficult. However, there are proxies (of varying quality) that are measured, and they show that AI code is clearly better than copy-pasted code (which used to be the #1 source of lines of code) and at least as "good" (again, I can't get into the metrics) as human code.

Hopefully one of the major companies will release a comprehensive report to the public, but they seem to guard these metrics.
