Hacker News

em-bee · yesterday at 9:01 PM

anything unpredictable is inherently untrustworthy and requires extra effort to review.

> Lack of determinism is not a practical concern.

it is to me. it's a knockout criterion. it is the only reason that keeps me from using LLMs for coding. nothing else is as serious an issue to me as this.

here is why: i tell the LLM to build something with requirements A B C D and E. it builds, i review, and i find A B and D are good while C and E are broken. i tell it to fix them, it does, so C and E are fixed, but now A is broken. i tell it to fix that, and i have to keep iterating until i find a combination where everything works. in every iteration any part can randomly break, so every iteration brings changes all over the place. they are never confined to the issue i pointed out, which means i have to review the whole thing every time. that's what i mean by lack of determinism, and it is a serious practical concern, because instead of getting done in two or three iterations it requires dozens of them. see my related replies elsewhere. i just don't want to work that way.
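One way to blunt the "any part can randomly break each iteration" problem is to pin every requirement with its own automated check and rerun the whole set after each change, so a silent regression fails loudly instead of surfacing during a full manual re-review. A minimal sketch in Python; the function and the requirements A–C are hypothetical stand-ins, not anything from the thread:

```python
def sort_records(records):
    # stand-in for the LLM-generated code under review
    return sorted(records)

# one check per requirement; names and requirements are illustrative
def check_a():  # A: output is sorted ascending
    assert sort_records([3, 1, 2]) == [1, 2, 3]

def check_b():  # B: the input list is not mutated
    data = [3, 1, 2]
    sort_records(data)
    assert data == [3, 1, 2]

def check_c():  # C: empty input is handled
    assert sort_records([]) == []

CHECKS = [check_a, check_b, check_c]

def run_all():
    """Run every requirement check; return the names of the ones that failed."""
    failures = []
    for check in CHECKS:
        try:
            check()
        except AssertionError:
            failures.append(check.__name__)
    return failures  # empty list means no requirement regressed

print(run_all())
```

Rerunning `run_all()` after every LLM iteration narrows the manual review to whatever the failing checks point at, rather than the entire diff.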


Replies

SuperV1234 · yesterday at 9:21 PM

You'd have to review and verify even changes that you've written by hand. You might think that your hand-written code satisfies A+B+C+D+E, but until you've verified it, you cannot prove it.

That's no different from LLM-assisted coding -- humans are inherently non-deterministic as well :)

The other fallacy is assuming that everyone else's experience with LLM-assisted coding is the same as yours. Personally, I've rarely encountered the issue you've mentioned -- most of my LLM-assisted coding has been a net positive and quite straightforward.

Perhaps it's the nature of the problem I'm working on, perhaps it's the model I chose, perhaps it's my prompting skills. It doesn't matter -- you just cannot assume that because something doesn't work for you it doesn't work for anyone else.

The other fallacy is considering LLM-assisted coding a binary option, like the nonsensical Zig policy does.

I agree with you that "vibe coding" something from scratch will likely result in poor quality and many iterations. But that's not the only way to use LLMs.

You can ask LLMs to review hand-written code. You can ask LLMs to optimize a specific part of code. You can ask LLMs to apply a specific refactor. You can ask LLMs to brainstorm solutions to a problem. You can ask LLMs to autocomplete patterns.

I could go on. This stuff works. It is helpful.

Assuming that everyone who uses LLMs is incompetent and preventing them from contributing because of a hunch or your own negative experiences is just asinine.
