
jedberg today at 3:33 AM

That's not exactly how LLM temperature works. :). Also, that's on inference, not training. Presumably these would be used for training; the latency would be too high for inference.
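For context on the temperature point: temperature is a sampling-time parameter that rescales the model's logits before the softmax, making the output distribution sharper or flatter. A minimal sketch (illustrative logit values, not any particular model):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/temperature, then apply softmax.

    T < 1 sharpens the distribution (more deterministic sampling);
    T > 1 flattens it (more random sampling).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
sharp = softmax_with_temperature(logits, temperature=0.5)
flat = softmax_with_temperature(logits, temperature=2.0)
```

The key point is that temperature only affects how tokens are sampled at inference; it is not a training-time knob.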


Replies

fooker today at 3:51 AM

It doesn't work like that, but it can.

Latency would be fine for inference: this is low Earth orbit, which is about 25 ms optimistically. That's well within what we expect from our current crop of non-local LLMs.
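The 25 ms figure is plausible as an end-to-end number. Raw propagation delay is much smaller: at a Starlink-class altitude of roughly 550 km (an assumed value for illustration), a straight up-and-back round trip at the speed of light is only a few milliseconds; slant paths, ground-station hops, and queuing make up the rest. A quick back-of-envelope check:

```python
# Back-of-envelope propagation delay for a low-Earth-orbit link.
# The 550 km altitude is an assumption (typical of Starlink-class
# constellations); real round trips add slant-path distance,
# ground-station routing, and processing delay.
C = 299_792_458  # speed of light in vacuum, m/s

def one_way_delay_ms(distance_m):
    return distance_m / C * 1000

altitude_m = 550e3
rtt_ms = 2 * one_way_delay_ms(altitude_m)  # straight up and back: ~3.7 ms
```

Even with generous overhead on top of that, an LEO round trip sits comfortably under the multi-hundred-millisecond time-to-first-token of today's hosted LLMs.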