Hacker News

tibbar · 05/15/2025 · 2 replies

If you use a deterministic decoding strategy for the next token (e.g., greedy decoding: always output the token with the highest probability), then a traditional LLM should be deterministic on the same hardware/software stack.
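
A minimal sketch of that strategy (greedy decoding) in PyTorch, with a hypothetical logits_fn standing in for the model's forward pass. The loop below contains no randomness at all, so any run-to-run variation would have to come from the numerics that produce the logits.

    import torch

    def greedy_decode(logits_fn, prompt_ids, max_new_tokens=20):
        # Greedy decoding: always append the single highest-probability token.
        ids = list(prompt_ids)
        for _ in range(max_new_tokens):
            logits = logits_fn(torch.tensor(ids))   # shape: [vocab_size]
            next_id = int(torch.argmax(logits))     # deterministic pick (argmax)
            ids.append(next_id)
        return ids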


Replies

extraduder_ire · 05/15/2025

Wouldn't seeding the RNG used to pick the next token be more configurable? How would changing the hardware or other software make a difference to what comes out of the model?
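
Seeding does make sampled decoding reproducible on a fixed stack; here is a rough sketch in PyTorch (the seed and logits values are arbitrary). The hardware question is about what happens upstream of the RNG: different GPUs or kernel implementations can round the matmuls that produce the logits slightly differently, so even an identical seeded draw is applied to slightly different probabilities.

    import torch

    def sample_next_token(logits, generator, temperature=1.0):
        # With a seeded generator, this draw is reproducible on a fixed stack.
        probs = torch.softmax(logits / temperature, dim=-1)
        return int(torch.multinomial(probs, num_samples=1, generator=generator))

    gen = torch.Generator().manual_seed(42)    # arbitrary seed
    logits = torch.tensor([2.0, 1.5, 0.5])     # stand-in for a model's output
    print(sample_next_token(logits, gen))
    # Different hardware doesn't change the RNG stream, but it can change how
    # the logits themselves are computed, shifting the probabilities sampled.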

roywiggins · 05/15/2025

Being deterministic is one thing, but being stable to small perturbations in the input is another.
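
A toy illustration of that difference (the logits values are made up): greedy decoding is perfectly deterministic in both cases below, yet a perturbation in the fourth decimal place flips which token wins, and every subsequent token is then conditioned on that divergence.

    import torch

    # Two logits vectors differing by a tiny perturbation, e.g. from a slightly
    # reworded prompt or a different kernel's rounding.
    logits_a = torch.tensor([2.0000, 1.9999, 0.5])
    logits_b = torch.tensor([1.9999, 2.0000, 0.5])

    print(torch.argmax(logits_a).item())  # 0
    print(torch.argmax(logits_b).item())  # 1 -- deterministic, but not stable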
