Kind of. By default, LLMs don't always pick the single most likely next token; they sample from a probability distribution over tokens, so the same input can produce different outputs across runs. You can reduce this by setting the temperature to 0 (greedy decoding), though even then small non-deterministic floating-point effects on GPUs can occasionally change the result.
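To make this concrete, here's a minimal sketch of temperature-based token sampling (a simplification of what real decoders do; the function name and logits are made up for illustration). At temperature 0 it degenerates to greedy argmax, which is deterministic; at higher temperatures the same logits can yield different tokens on different calls.

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Pick a token index from raw logits.

    temperature == 0 -> greedy (always the argmax, deterministic).
    temperature > 0  -> softmax sampling (random, varies per call).
    """
    rng = rng or np.random.default_rng()
    if temperature == 0:
        return int(np.argmax(logits))  # deterministic path
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax
    return int(rng.choice(len(probs), p=probs))

# Hypothetical logits for a 3-token vocabulary.
logits = [2.0, 1.0, 0.1]

print(sample_token(logits, temperature=0.0))  # → 0 (greedy, always the same)
print(sample_token(logits, temperature=1.0))  # varies from run to run
```

Note that sampling is only *pseudo*-random: with a fixed seed the draws are reproducible, which is why some APIs expose a `seed` parameter to make outputs repeatable.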