If you had all the token probabilities, the mapping would be bijective. There was a post about this here some time back.
Kind of: LLMs still use randomness when selecting tokens, so the same input can lead to multiple different outputs.
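A minimal sketch of that point, using toy logits rather than any real model: with temperature sampling the next token is drawn from the softmax distribution, so repeated runs on the same input can differ, while greedy (argmax) decoding is deterministic.

```python
import numpy as np

rng = np.random.default_rng()

def sample_token(logits, temperature=1.0):
    """Pick the next token id from raw logits."""
    if temperature == 0.0:
        # Greedy decoding: always the highest-scoring token, fully deterministic.
        return int(np.argmax(logits))
    # Temperature-scaled softmax, shifted by the max for numerical stability.
    scaled = (logits - logits.max()) / temperature
    probs = np.exp(scaled)
    probs /= probs.sum()
    # Random draw: the source of nondeterminism in sampled generation.
    return int(rng.choice(len(logits), p=probs))

logits = np.array([2.0, 1.5, 0.3])  # hypothetical next-token scores
print([sample_token(logits) for _ in range(5)])       # varies run to run, e.g. [0, 1, 0, 0, 1]
print([sample_token(logits, 0.0) for _ in range(5)])  # always [0, 0, 0, 0, 0]
```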