It’s not just about non-determinism, but about how chaotic LLMs are. A one-word difference in a spec can, and frequently does, produce unrecognizably different output.
If you are using an LLM as a high-level language, that means every time you make a slight change to anything and “recompile”, all of the thousands upon thousands of unspecified implementation details are free to change.
You could try to ameliorate this by training LLMs to favor making fewer changes, but that would likely end up encoding every bad architecture decision made along the way, essentially forcing a convergence on bad design.
Fixing this, I think, requires judgment on a level far beyond what LLMs have currently demonstrated.
>It’s not just about non-determinism
I'm very specifically addressing the prompt reproducibility mentioned above, because it's a notorious red herring in these discussions. What you want is correctness, not determinism/reproducibility, which is relatively trivial. (Although, thinking about it more, maybe not that trivial... if you want usable repro in the long run, you'll have to store the model snapshot and the inference code, and make the inference deterministic too.)
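
To make that concrete, here's a minimal sketch of what "store the snapshot and make it deterministic" might look like, assuming a local Hugging Face transformers setup (the model name and revision hash below are placeholders, not a recommendation): pin an exact model revision and use greedy decoding instead of sampling. Even then you'd still need to pin the inference library versions and hardware to get bit-identical outputs, since GPU floating-point kernels can introduce their own nondeterminism.

    # Sketch: pin the model snapshot and decode greedily for reproducible output.
    # MODEL and REVISION are placeholders; substitute your own pinned snapshot.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL = "meta-llama/Llama-3.1-8B-Instruct"   # placeholder model id
    REVISION = "abc1234"                         # pin an exact commit, not "main"

    tok = AutoTokenizer.from_pretrained(MODEL, revision=REVISION)
    model = AutoModelForCausalLM.from_pretrained(MODEL, revision=REVISION)

    inputs = tok("Write a function that parses ISO-8601 dates.", return_tensors="pt")
    # do_sample=False means greedy decoding: same prompt + same snapshot + same
    # inference stack should reproduce the same output.
    out = model.generate(**inputs, do_sample=False, max_new_tokens=128)
    print(tok.decode(out[0], skip_special_tokens=True))
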
>A one word difference in a spec can and frequently does produce unrecognizably different output.
This is well out of scope for reproducibility and doesn't affect it in the slightest. For practical software development it's also a red herring; the real issue is correctness and spec gaming. As long as the output is correct and doesn't circumvent the intention of the spec, prompt instability is unimportant; it's just the ambiguous nature of the domain LLMs and humans operate in.