No, setting the temperature to zero will still yield different results. One might think they add random seeds, but that makes no sense at temperature zero. One theory is that the distributed nature of their systems adds entropy and thus produces different results each time.
Random seeds might be a factor, but from what I see there's a lot of demand for reproducibility and still no reliable way to achieve it.
It's not really a mystery why it happens. LLM APIs are non-deterministic from the user's point of view because your request gets batched with other users' requests. The batch computation itself is deterministic, but your batch is going to be different each time you send your request.
The size of the batch influences the order in which floating-point operations are reduced. And because floating-point addition is not associative, the results can differ.
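A minimal Python sketch of both points: floating-point addition is not associative, and regrouping a reduction (here, a toy batched sum standing in for batched GPU kernels, not any provider's actual serving code) can change the result in the low-order bits.

```python
import random

# Floating-point addition is not associative: grouping changes
# the low-order bits of the result.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left == right)  # False: 0.6000000000000001 vs 0.6

# Toy stand-in for batched serving: summing the same numbers with
# different batch sizes reorders the reduction, so totals can
# differ even though each individual batch is computed
# deterministically.
random.seed(0)
vals = [random.uniform(-1.0, 1.0) for _ in range(10_000)]

def sum_in_batches(xs, batch_size):
    # Reduce each batch first, then combine the partial sums.
    partials = [sum(xs[i:i + batch_size])
                for i in range(0, len(xs), batch_size)]
    return sum(partials)

print(sum_in_batches(vals, 32))
print(sum_in_batches(vals, 64))  # often differs in the last digits
```

The same effect, scaled up across thousands of matrix multiplications on a GPU, is enough to flip which token has the highest logit at temperature zero.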