> Does this process work by having the main model compute the "probability" that it would generate the draft sequence, then probabilistically accepting the draft?
It does the generation as normal using the draft model: for a given prefix, it samples the next (speculated) token from the draft model's distribution. It then uses both the draft model's and the main model's distributions for that prefix to probabilistically accept or reject the speculated token, in a way which guarantees that each emitted token is distributed exactly as if it had been sampled from the main model. Concretely, a drafted token x is accepted with probability min(1, p(x)/q(x)), where p is the main model's distribution and q is the draft model's; on rejection, a replacement token is sampled from the normalized residual max(0, p - q).
The paper[1] has the details in section 2.3.
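A minimal sketch of that accept/reject step in Python, assuming you already have the two models' next-token distributions for the current prefix as numpy arrays (the names p_main and q_draft are mine, not the paper's):

    import numpy as np

    def speculative_accept(token, p_main, q_draft, rng):
        """Accept or reject one drafted token.

        token:   index sampled from the draft distribution q_draft
        p_main:  main model's next-token distribution for the same prefix
        q_draft: draft model's next-token distribution for the same prefix

        Returns the token to emit, distributed exactly according to p_main.
        """
        # Accept the drafted token with probability min(1, p(x) / q(x)).
        # q_draft[token] > 0 since the token was sampled from q_draft.
        if rng.random() < min(1.0, p_main[token] / q_draft[token]):
            return token
        # On rejection, resample from the residual distribution
        # norm(max(0, p - q)), which corrects for the draft's bias.
        residual = np.maximum(p_main - q_draft, 0.0)
        residual /= residual.sum()
        return rng.choice(len(residual), p=residual)

Accepted tokens cost only the (cheap) draft forward passes plus one batched verification pass by the main model; a rejection just falls back to sampling from the corrected distribution, so you never do worse than sampling from the main model directly.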
The inspiration for the method was indeed speculative execution as found in CPUs.
[1]: https://arxiv.org/abs/2211.17192 Fast Inference from Transformers via Speculative Decoding