> LLMs are a tool like every other. Only that it's non-deterministic.
If you stay away from the corporate SaaS token vendors and run your own, you'll find LLMs are deterministic: the output is purely a function of the exact input. As long as the context window's tokens are the same, you will get the same output.
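A toy sketch of that claim: treat the model's forward pass as a pure function of the token sequence and decode greedily. The `logits` function below is a hypothetical stand-in (a hash of the context, not a real model), but it shows the structure of the argument: same context tokens in, same continuation out.

```python
import hashlib

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def logits(context):
    # Hypothetical stand-in for a model's forward pass: a pure
    # function of the exact token sequence, nothing else.
    h = hashlib.sha256(" ".join(context).encode()).digest()
    return [h[i] for i in range(len(VOCAB))]

def greedy_decode(prompt, steps=5):
    tokens = prompt.split()
    for _ in range(steps):
        scores = logits(tokens)
        # argmax decoding: no randomness anywhere in the loop
        tokens.append(VOCAB[scores.index(max(scores))])
    return " ".join(tokens)

a = greedy_decode("the cat")
b = greedy_decode("the cat")
c = greedy_decode("the cat sat")  # a different context window
print(a == b)  # same input tokens, same output
print(a == c)  # changed context, (almost certainly) different output
```

With real inference the same property holds for temperature-0/greedy decoding on fixed hardware and weights; vendors break it by changing things outside your prompt.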
The corporate vendors do tricks: they swap models and mix in context from your other chats. It makes one-shot questions annoying, because unrelated chats creep into your context window.
Yes and no. You might get the same output if you turn the temperature down, but you still won't know what that output is without running it first. It's a bit like a hash function: the same input always gives the same hash, but I can't tell which input leads to which hash without running the function.
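The hash analogy can be made literal with a toy seeded sampler (hypothetical, not a real model): pinning the seed and the exact prompt pins the whole sampling trajectory, so the run is reproducible, yet you still can't predict the reply without executing it.

```python
import random

CHOICES = ["yes", "no", "maybe", "it depends"]

def sample_reply(prompt, seed):
    # Seeding the RNG with (seed, prompt) makes the run reproducible,
    # like hashing: deterministic, but not predictable without running.
    rng = random.Random(f"{seed}:{prompt}")
    return rng.choice(CHOICES)

r1 = sample_reply("is it raining?", seed=42)
r2 = sample_reply("is it raining?", seed=42)
print(r1 == r2)  # same prompt + same seed -> same reply
# a different seed or prompt may well give a different reply,
# but you only find out by sampling
```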
Also, most LLMs aren't run as a simple "I write a prompt, I read the output" loop. Usually you have MCP servers or other tools connected. These change the input, which will probably lead to different outputs. Otherwise it wouldn't be a problem at all.