Hacker News

enraged_camel yesterday at 3:19 PM

>> As software engineers we don’t just crank out code—in fact these days you could argue that’s what the LLMs are for. We need to deliver code that works—and we need to include proof that it works as well.

I would go a step further: we need to deliver code that belongs. This means following the existing patterns and conventions in the codebase. Without explicit instruction, LLMs are really bad at this, and it's one of the things that makes it incredibly obvious to reviewers that a given piece of code was generated by AI.
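
To make that concrete, here is a contrived sketch (all names invented, not from any real codebase). Imagine the existing code returns errors through a shared Result type instead of raising; a generated patch that raises exceptions may work, but it doesn't belong:

    # Hypothetical existing convention: fallible helpers return a
    # Result instead of raising exceptions.
    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class Result:
        value: Any = None
        error: Optional[str] = None

    # Belongs: same shape as every other fetch_* helper in the codebase.
    def fetch_user(user_id: int) -> Result:
        if user_id < 0:
            return Result(error="invalid user id")
        return Result(value={"id": user_id})

    # Works, but doesn't belong: a common LLM default that raises,
    # so callers must handle this one function differently.
    def fetch_order(order_id: int) -> dict:
        if order_id < 0:
            raise ValueError("invalid order id")
        return {"id": order_id}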


Replies

0x500x79 yesterday at 4:21 PM

Agreed. Maintainability, security, standards: all of these are important to follow, and there are usually reasons why these things exist.

I also see AI coding tools violate "Chesterton's Fence" (and what you might call the pre-Chesterton's Fence, not sure what it's actually called: the idea that code should only be in the source if it's necessary).
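
As a contrived example of the kind of fence that gets knocked down (names invented): a check that looks redundant to a model that only sees the local diff, but exists for a reason that lives elsewhere in the system:

    def parse_port(raw: str) -> int:
        port = int(raw)
        # Looks removable, and an eager refactor (human or LLM) might
        # delete it -- but suppose upstream config loaders historically
        # pass "0" to mean "unset", so dropping this check silently
        # changes behavior. That's the fence.
        if port == 0:
            raise ValueError("port must be explicitly configured")
        return port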

9rx yesterday at 4:31 PM

> Without explicit instruction, LLMs are really bad at this

They used to be. They have become quite good at it, even without instruction. Impressively so.

But it does require that the humans who laid the foundation also followed consistent patterns and conventions. If there is deviation to be found, the LLM will see it and be forced to choose which direction to go, and that's when things quickly fall off the rails. LLMs are not (yet) good at that, and maybe never will be, since not even the humans managed to get it right.
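
A contrived sketch of that kind of deviation (all names invented): two conventions coexist, so there is no single pattern for the model to extend:

    # Hypothetical inconsistent foundation: two styles coexist.
    def get_account(account_id):                 # older style: untyped, get_*
        return {"id": account_id}

    def fetch_invoice(invoice_id: int) -> dict:  # newer style: typed, fetch_*
        return {"id": invoice_id}

    # Asked to add "load a payment", the model has to pick a side;
    # whichever convention it picks will look wrong next to half the
    # existing code.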

Garbage in, garbage out, as they say.