Hacker News

niek_pas · today at 2:18 PM

What's even worse is that when dealing with human software teams, a vague requirement will (at least in a well-run org) receive demands for further specification. "What do you mean by 'get data'?", etc.

An LLM will just say, "Sure! Here's the fully implemented code that gets the data and gives it to the user." and be done with it.


Replies

smokel · today at 2:26 PM

ChatGPT 5.5 responds:

> What data should I retrieve, and where should I get it from? Please specify at least: ...

And then it goes on to ask exactly what is necessary, being all constructive about it.

vidarh · today at 2:24 PM

When the cycles are short enough, though, that is to some degree the right thing. That is, it's the right thing for things the users can then immediately see and give feedback on, because it lets them give feedback on something tangible.

It's the wrong thing for important things under the hood (like durability and security requirements) that are not tangible to them.

resters · today at 2:52 PM

Just as poorly designed code can still compile. This is operator error, not a failure of the technology.

pydry · today at 2:36 PM

IME you give it very precise specifications and it still fucks it up.

When we talk about "the" bottleneck being specs, it just isn't the case that specs are the only thing LLMs do poorly. They're really bad at a lot of stuff in the SDLC.

They're also good at producing results which are bad but look OK if you either don't look too closely or don't know what you're looking for.