Hacker News

drzaiusx11 · yesterday at 1:35 PM · 1 reply

Because LLM systems are, at their core, probabilistic text generators, they can easily produce massive volumes of work at scale.

In software engineering, our job is to build reliable systems that scale to meet the needs of our customers.

With the advent of LLMs for generating software, we're simply ignoring many existing tenets of software engineering: assuming greater and greater risk in the hope of some reward of "moving faster," without setting up the guard rails we've always had.

If a human sends me a PR with many changes scattered across several concerns, that's an instant rejection: close the PR and ask them to split it into multiple PRs, so reviewing it doesn't push past the limits of human comprehension. We should be rejecting these risky changes out of hand, with the possible exception of "starting from scratch" work, and even then I'd suggest a disciplined approach with multiple validation steps and phases.
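One such guard rail could even be automated in CI. The sketch below is purely illustrative and not from the original comment: it uses top-level directories as a rough proxy for "concerns," and the `max_areas` threshold is an arbitrary assumption.

```python
# Hypothetical guard rail: flag PRs whose changes sprawl across too many
# top-level areas of the repo (a rough proxy for "multiple concerns").
# Both the path-based heuristic and the threshold are assumptions.

def areas_touched(changed_files):
    """Map each changed path to its top-level directory."""
    return {path.split("/", 1)[0] for path in changed_files}

def should_reject(changed_files, max_areas=2):
    """Reject when a change set spans more areas than a reviewer
    can reasonably hold in their head at once."""
    return len(areas_touched(changed_files)) > max_areas

# Example: a PR touching auth, billing, and the UI all at once.
pr = ["auth/login.py", "billing/invoice.py", "ui/header.css", "auth/session.py"]
print(should_reject(pr))  # True: three areas, over the limit of two
```

In a real pipeline, the changed-file list would come from something like `git diff --name-only`, and the check would post a "please split this PR" comment rather than silently failing.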

The hype is snake oil: the claim that we can and should one-shot everything into existence without human validation is pure fantasy. This careless use of GenAI is simply a recipe for disasters at scales we've not seen before.


Replies

tau5210 · today at 5:39 AM

Well said, thank you.