Hacker News

skywhopper, today at 9:13 AM (2 replies)

“LLMs _can_ reliably generate (relatively small amounts of) working code from relatively terse descriptions”

Only with well-known patterns that represent shared knowledge specified elsewhere. If the details they “fill in” each time differ in ways that change behavior, then the spec is deficient.

If we “figure out” how to write such detailed specs in the future, as you suggest, then that becomes the “code”.


Replies

ModernMech, today at 11:40 AM

Right, when you tell it “draw me a renaissance woman” and it gives you a facsimile of the Mona Lisa, it’s not because it intelligently anticipated what you wanted — it’s just been trained thoroughly to make that association.

mexicocitinluez, today at 11:34 AM

Also, they're a bit more willing to make assumptions.

After a while, I think we all get a sense not only of the number of micro-decisions you have to make while building stuff (even when you're intimate with the domain), but also of the number of assumptions you'll need to make about things you either don't know yet or haven't fully fleshed out.

I'm painfully aware of the assumptions I'm making nowadays, and that definitely changes the way I build things. And while I love these tools, their tendency not only to make assumptions but to over-engineer around those assumptions can have disastrous effects.

I had Claude build me a zip code heat map given a data source and it did it spectacularly. Same with a route planner. But asking it to build out medical procedure documentation configurations based off of a general plan DID NOT work as well as I had expected it would.

Also, I asked Claude what the cron expression I wrote would do, and it got it wrong (which is expected, because Azure Web Jobs uses a non-standard form). But even after I told it that it was wrong and gave it the documentation to rely on, it still doubled down on the wrong answer.
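For anyone unfamiliar with the discrepancy being described: standard cron uses five fields, while Azure Web Jobs timer triggers use NCRONTAB, which prepends a seconds field, so the same expression text can mean something different depending on which parser reads it. A minimal sketch (my illustration, not the commenter's actual expression):

```python
# Standard cron has 5 fields; NCRONTAB (Azure Web Jobs / Functions timer
# triggers) has 6, with a leading seconds field. Labeling the fields by
# count makes the ambiguity visible.
def split_cron(expr):
    fields = expr.split()
    if len(fields) == 5:
        # standard cron
        names = ["minute", "hour", "day", "month", "day-of-week"]
    elif len(fields) == 6:
        # NCRONTAB: seconds field comes first
        names = ["second", "minute", "hour", "day", "month", "day-of-week"]
    else:
        raise ValueError(f"expected 5 or 6 fields, got {len(fields)}")
    return dict(zip(names, fields))

print(split_cron("*/5 * * * *"))    # standard: every 5 minutes
print(split_cron("0 */5 * * * *"))  # NCRONTAB: every 5 minutes, at second 0
```

A model trained mostly on 5-field cron will naturally misread the leading field of a 6-field NCRONTAB expression as minutes, which matches the failure mode described above.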