
samwho · yesterday at 12:31 PM

How would information leak, though? There’s no difference in the probability distribution the model outputs when caching vs not caching.


Replies

sroussey · yesterday at 11:48 PM

The probability distribution the model outputs is identical only under identical conditions.

A local model running alone on your machine will always return exactly the same output, its internal state will be exactly the same, and you can checkpoint or cache that state to avoid recomputing up to that point.
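As a minimal sketch of that idea, with a toy hash chain standing in for the real forward pass (every name here is illustrative, not any real API): key the saved state on the exact token prefix.

    import hashlib
    from functools import lru_cache

    # Toy stand-in for a deterministic forward pass: the "state" after a
    # prefix is a hash chained over its tokens.
    def step(state: bytes, token: int) -> bytes:
        return hashlib.sha256(state + token.to_bytes(4, "little")).digest()

    @lru_cache(maxsize=1024)
    def state_for_prefix(prefix: tuple[int, ...]) -> bytes:
        # The exact same prefix always yields a bit-identical state, so it
        # can be checkpointed and reused instead of recomputed.
        state = b"init"
        for tok in prefix:
            state = step(state, tok)
        return state

    # The second call is served from the cache, bit-identical to the first.
    assert state_for_prefix((1, 2, 3)) == state_for_prefix((1, 2, 3))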

But… conditions can be different, and batching requests tends to affect the other items in flight. I believe Thinking Machines had an article about how to make a request deterministic again without performance falling off a cliff.
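The root cause is easy to show outside any inference stack: floating-point addition is not associative, so a kernel that reduces the same values in a different order (because the batch shape changed) gets a slightly different answer. A self-contained illustration:

    import numpy as np

    # Sum the same float32 values in two different orders, as differently
    # shaped batches might. The results typically differ in the low bits,
    # which is enough to break bit-exact caching.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(100_000).astype(np.float32)

    a = x.sum()                                  # one reduction order
    b = x.reshape(100, 1_000).sum(axis=1).sum()  # another order
    print(a == b, abs(float(a) - float(b)))      # usually False, tiny diff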

I tend to think of it this way (completely not what actually happens, though): what if you cached using a tensor as the key? To generate a reasonably sized key, what loss of precision is acceptable for retrieving the same cache entry, knowing there is inherent jitter in the tensor's numbers?
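Here is that thought experiment as a sketch (the decimals knob and every name are made up for illustration): round the tensor before hashing it into a key, and the precision you keep decides whether jitter hits or misses the cache.

    import hashlib
    import numpy as np

    def tensor_key(t: np.ndarray, decimals: int = 3) -> str:
        # Quantize away low-order noise, then hash the bytes into a
        # fixed-size key. Coarser rounding tolerates more jitter but
        # risks collisions between genuinely different tensors.
        rounded = np.round(t.astype(np.float64), decimals)
        return hashlib.sha256(rounded.tobytes()).hexdigest()

    t = np.array([0.12345, -1.98765], dtype=np.float32)
    jittered = t + 1e-6                                 # inherent numeric jitter
    print(tensor_key(t) == tensor_key(jittered))        # True at 3 decimals
    print(tensor_key(t, 7) == tensor_key(jittered, 7))  # False at 7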

And then there's the ever-so-slight leak of information. Multiplied, too, since there are internal KV caches per token, and so on.
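The leak itself is mostly just latency: a cache hit returns faster than a miss, so a prober can tell whether someone else already submitted the same prefix. A toy version, with a sleep standing in for real prefill work (nothing here is a real server API):

    import hashlib
    import time

    cache: dict[str, bytes] = {}

    def lookup(prefix: str) -> bytes:
        key = hashlib.sha256(prefix.encode()).hexdigest()
        if key in cache:
            return cache[key]    # fast path: someone already computed this
        time.sleep(0.05)         # stand-in for an expensive prefill
        cache[key] = b"state"
        return cache[key]

    lookup("the victim's prompt")            # victim warms the shared cache

    t0 = time.perf_counter(); lookup("the victim's prompt")
    hit = time.perf_counter() - t0
    t0 = time.perf_counter(); lookup("some other prompt")
    miss = time.perf_counter() - t0
    print(f"hit={hit*1e3:.1f} ms, miss={miss*1e3:.1f} ms")  # hit is clearly faster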