The real issue is expecting an LLM to be deterministic when it's not.
Oh how I wish people understood the word "deterministic"
LLMs are deterministic in the sense that a fixed linear regression model is deterministic. Like linear regression, however, they encode a statistical model of whatever they're trying to describe -- natural language, in the case of LLMs.
They are deterministic: open a dev console and run the same prompt twice with temperature = 0.
LLMs are essentially pure functions.
Language models are deterministic unless you add random input. Most inference tools add randomness (seeded sampling over the model's output distribution) because it makes for a more interesting user experience, but that is not a fundamental property of LLMs. I suspect determinism is not the issue you mean to highlight.
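To make the point concrete, here's a toy sketch (not a real LLM -- `fake_model` is a hypothetical stand-in for a forward pass): the model itself is a pure function from prompt to logits, and the only nondeterminism comes from the sampler you bolt on afterward. Greedy decoding (the temperature = 0 case) is fully deterministic, and even sampling is reproducible if you fix the seed.

```python
import math
import random

def fake_model(prompt: str) -> dict[str, float]:
    # Deterministic stand-in for a forward pass: prompt -> logits.
    # A real model is equally deterministic given fixed weights and inputs.
    return {tok: (sum(ord(c) for c in tok + prompt) % 7) / 2.0
            for tok in ("yes", "no", "maybe")}

def greedy(logits: dict[str, float]) -> str:
    # Temperature = 0 decoding: argmax, a pure function of the logits.
    return max(logits, key=logits.get)

def sample(logits: dict[str, float], temperature: float,
           rng: random.Random) -> str:
    # Temperature > 0: sample from softmax(logits / T) using an
    # explicit RNG -- the randomness is an *input*, not part of the model.
    weights = [math.exp(l / temperature) for l in logits.values()]
    return rng.choices(list(logits), weights=weights)[0]

logits = fake_model("Is water wet?")
assert greedy(logits) == greedy(logits)  # same output on every call
r1, r2 = random.Random(42), random.Random(42)
assert sample(logits, 0.8, r1) == sample(logits, 0.8, r2)  # same seed, same token
```

Same idea as the "pure function" comment above: vary the seed and the outputs vary; hold every input fixed (prompt, weights, seed) and they don't.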