Hacker News

phpnode · today at 4:01 PM · 6 replies

People always say this but it's misguided imo. Yes, LLMs are not deterministic, but that's totally irrelevant. You aren't executing the LLM's output directly; you're using the LLM to produce an artefact once, and that artefact is then executed deterministically. A spec gets turned into code once. Editing the spec can cause the code to be updated, but it's not recreating the whole program each time, so why does determinism matter?


Replies

michaelrpeskin · today at 5:08 PM

In my experience, I'm using LLMs as my abstraction for a "junior engineer". A junior engineer isn't deterministic either. I find that if you treat the LLM's output like a person's output, you're good; at least in my projects, it's been very successful. I don't have it generate more code than I can review, and if I give it a snippet to fix and it ends up rewriting the whole thing like an ambitious engineer would, I tell it to start over and make minimal changes.

I guess I'm not spun up about the determinism because I've been working at the "treat it like a person" level more than the "treat it like a compiler" level.

To me, it's really like an engineer who knows the docs and has a good memory, rather than an infallible code generator.

I work at a small company, so we don't have tons of processes in place, but I imagine that if you already had huge "standards" docs that engineers need to follow, then giving the LLM those standards would make things even better.

AstroBen · today at 4:25 PM

If it's not deterministic you can never fully trust it. In a deterministic abstraction I don't need to audit the lower levels.

mrbananagrabber · today at 5:22 PM

this is the way LLMs _should_ be used, as an assistant to create reliable, deterministic code. and honestly, they're fantastic when used this way. build the thing you need with the LLM, then put the LLM away.

but in practice, the current obsession with agents means people are creating applications that depend entirely on sending requests to LLMs for their core functionality. which means abandoning the whole idea of deterministic software in favor of just praying that all of the prompts you put around those API requests will lead to the right result.
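The distinction the comment draws can be made concrete with a toy sketch. The function names and the task (pulling an order id out of free text) are illustrative assumptions, not anything from the thread: the first version is a deterministic artifact an LLM might have helped write once; the commented-out alternative stands in for the agent-style design that calls a (hypothetical) LLM API on every request.

```python
import re

# "Build the thing with the LLM, then put the LLM away":
# a deterministic artifact that runs the same way every time, no API calls.
def extract_order_id(text: str):
    """Pull an order id like 'ORD-12345' out of free text, or None."""
    m = re.search(r"ORD-\d+", text)
    return m.group(0) if m else None

# The agent-style alternative (hypothetical `llm_complete` API, shown only
# for contrast): a network round trip whose output can vary run to run.
# def extract_order_id_llm(text: str) -> str:
#     return llm_complete(f"Extract the order id from: {text}")

print(extract_order_id("Re: problem with ORD-98431, still broken"))  # ORD-98431
print(extract_order_id("no id in this one"))  # None
```

The deterministic version can be unit-tested once and trusted thereafter; the runtime-LLM version has to be re-validated statistically, which is the "praying that the prompts lead to the right result" the comment describes.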

udave · today at 6:50 PM

try distributing this spec amongst your team members, ask each of them to drive it to completion. no follow up edits. deploy to individual environments and then run a rigorous test suite against all of the deployments. see if all of them behave the same way.
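The thought experiment above is essentially differential testing: run one suite against several independent implementations of the same spec and check whether they agree. A toy sketch, where the two "team members" are stand-in functions and the test cases are invented for illustration:

```python
# Differential-testing sketch: one suite, multiple independent
# implementations of the same (hypothetical) spec.
def run_suite(behavior):
    """Run each (input, expected) case against one implementation."""
    cases = [("2+2", "4"), ("upper:hi", "HI")]
    return tuple(behavior(inp) == expected for inp, expected in cases)

# Two implementations of the same spec by different "engineers";
# impl_b misreads "uppercase" as "capitalize", a plausible spec ambiguity.
impl_a = lambda inp: "4" if inp == "2+2" else inp.split(":")[1].upper()
impl_b = lambda inp: "4" if inp == "2+2" else inp.split(":")[1].title()

results = {name: run_suite(f) for name, f in [("a", impl_a), ("b", impl_b)]}
identical = len(set(results.values())) == 1
print(identical)  # False: the deployments do not all behave the same
```

This is the point of the comment: even with no follow-up edits, independent drives to "completion" diverge wherever the spec underdetermines behavior, and only a rigorous shared suite exposes it.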

ex-aws-dude · today at 6:10 PM

Exactly, the argument would make sense if it were about inference at runtime.

But that's not the case here

knivets · today at 5:44 PM

how do you know the artifact is correct?