Reproducibility across existing build systems took a decade of work, touching everything from compilers to sandboxing, and a hard reproducibility guarantee in completely arbitrary cases (e.g. builds that depend on hardware probing or a simulation result) is either impossible or requires deterministic emulators, which are terribly slow.
Input-to-output reproducibility in LLMs (assuming the same model snapshot) is vastly simpler: it is a matter of fixing the seed and making the inference stack deterministic. Google, for example, serves its models in an "almost" reproducible way, with the remaining differences between runs most likely attributable to batching.
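For a concrete sense of what "fix the seed, pin the snapshot" looks like, here is a minimal sketch with a locally run open-weights model (the model name is made up, and the transformers/PyTorch stack is just one assumed setup, not how Google serves anything). Greedy decoding on a pinned snapshot with deterministic kernels gives the same output for the same input, and running batch-of-one locally removes the batching variable entirely:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    torch.manual_seed(0)                      # fixes the RNG used if you switch to sampling
    torch.use_deterministic_algorithms(True)  # force deterministic kernels where available

    # "my-org/pinned-snapshot" is a placeholder for a specific, versioned model snapshot
    tok = AutoTokenizer.from_pretrained("my-org/pinned-snapshot")
    model = AutoModelForCausalLM.from_pretrained("my-org/pinned-snapshot")

    prompt = "Write a function that parses RFC 3339 timestamps."
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, do_sample=False, max_new_tokens=256)  # greedy: no sampling noise
    print(tok.decode(out[0], skip_special_tokens=True))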
It’s not just about non-determinism, but about how chaotic LLMs are. A one-word difference in a spec can, and frequently does, produce unrecognizably different output.
If you are using an LLM as a high-level language, then every time you make a slight change to anything and “recompile,” all of the thousands upon thousands of unspecified implementation details are free to change.
You could try to ameliorate this by training LLMs to favor making fewer changes, but that would likely end up encoding every bad architectural decision made along the way, essentially forcing convergence on bad design.
Fixing this, I think, requires judgment on a level far beyond what LLMs have currently demonstrated.