Hacker News

Someone · today at 8:03 AM

> That is not true, and the proof is that LLMs _can_ reliably generate (relatively small amounts of) working code from relatively terse descriptions.

LLMs can generate (relatively small amounts of) working code from relatively terse descriptions, but I don’t think they can do so _reliably_.

They’re more reliable the shorter the code fragment and the more common the code, but they do break down on complex descriptions. For example, try tweaking the description of a widely known algorithm just a little and see how well the generated code follows the spec.
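As a concrete illustration of the kind of tweak meant here (my own hypothetical example, not one from the thread): take binary search, but change the spec so that instead of returning the index of an exact match, it returns the index of the *first* element greater than or equal to the target. A hand-written reference implementation of the tweaked spec, against which generated code could be checked, might look like:

```python
def lower_bound(xs, target):
    """Return the index of the first element in sorted xs that is >= target,
    or len(xs) if every element is smaller (a 'tweaked' binary search)."""
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1  # everything up to mid is too small
        else:
            hi = mid      # xs[mid] is a candidate; keep it in range
    return lo
```

The tweak is small in prose but changes the loop invariant and the return convention, which is exactly where generated code tends to quietly revert to the textbook version.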

> Sometimes the interpolated detail is wrong (and indeterministic), so, if reliable result is to be achieved

It seems you agree they _cannot_ reliably generate (relatively small amounts of) working code from relatively terse descriptions.


Replies

mike_hearn · today at 9:11 AM

Neither can humans, but the industry has decades of experience with how to instruct and guide human developer teams using specs.
