I see what you're getting at, but determinism isn't the right word either. LLMs are fundamentally deterministic -- they are pure functions which output text as a function of the input text and the network parameters[1]. Depending on your views on free will, one could argue that humans are deterministic as well.
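To make the "pure function" point concrete, here's a minimal sketch using the Hugging Face `transformers` library (my choice of library and checkpoint, not anything from this thread): with greedy decoding there is no sampling entropy at all, so the completion depends only on the prompt and the fixed weights.

```python
# Minimal sketch, assuming the Hugging Face `transformers` library and the
# small open "gpt2" checkpoint; any causal LM would do.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def complete(prompt: str, max_new_tokens: int = 20) -> str:
    # do_sample=False selects greedy decoding: no entropy source,
    # so the output is a function of (prompt, weights) only.
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=False)
    return tok.decode(out[0], skip_special_tokens=True)

# Same prompt in, same text out, every time
# (modulo floating-point/hardware quirks).
print(complete("The capital of France is") == complete("The capital of France is"))
```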
The concept you're touching on is that LLMs (and humans) are inscrutable functions. Their behavior cannot be distilled into a series of logical steps you can fit in your head; there are no invariants that neatly decompose their complexity into a few interpretable states; and the input and output spaces are unstructured, ambiguous, underspecified, and essentially infinite. This makes them just about impossible to reason about or compose using the same strategies and analysis we apply to traditional programs.
[1] Optionally, they can take in a source of entropy to add nondeterminism, but this is not essential. If LLM providers all fixed their prng seeds to a static value, hardly anyone would notice. I can't imagine there are many workflows that feed an LLM the exact same prompt multiple times and rely on the output having some statistical distribution. In fact, even if you wanted this, you might just end up getting a cached response.
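For what it's worth, some hosted APIs already expose this knob. A rough sketch with the OpenAI Python SDK (the model name and seed value are placeholders, and providers only promise best-effort determinism for `seed`):

```python
# Sketch only: assumes the OpenAI Python SDK, an OPENAI_API_KEY in the
# environment, and a model that accepts the best-effort `seed` parameter.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,        # strip out sampling entropy as far as the API allows
        seed=42,              # pin the PRNG; providers treat this as best-effort
    )
    return resp.choices[0].message.content

# Repeated identical prompts will usually (not always) return identical text;
# backend batching and hardware nondeterminism can still leak through.
print(ask("Name one prime number.") == ask("Name one prime number."))
```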
> Optionally, they can take in a source of entropy to add nondeterminism, but this is not essential. If LLM providers all fixed their prng seeds to a static value, hardly anyone would notice
Everyone added /dev/random to their offerings, so every LLM coding tool is nondeterministic.
Let's be real: if you and I both ask Claude to generate a feature on the same project, what are the chances it spits out 100% identical code? But if we both build the project from the same Dockerfile, we get the same binary and the same image. Products built around LLMs are nondeterministic, unlike compilers.