Hacker News

zozbot234 · today at 2:37 PM

You don't need that much VRAM to run the very largest models; these are MoE models, where only a small fraction of the weights is active at any given time. If you run with multiple GPUs and have enough PCIe lanes (such as on a proper HEDT platform), CPU-GPU transfers become a bit less painful. More importantly, streaming weights from disk becomes feasible, which lets you save on expensive RAM. The big labs only avoid this because it costs power at scale compared to keeping weights in DRAM, but that aside it's quite sound.
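A quick back-of-envelope sketch of why this works: in an MoE model, the memory you must touch per token is the active-parameter footprint, not the full model. The numbers below are illustrative (roughly DeepSeek-V3-scale, ~671B total / ~37B active, 8-bit weights), not specs the comment cites.

```python
# Illustrative MoE footprint: full model vs. weights touched per token.
# Assumed numbers (not from the thread): ~671B total params, ~37B active, int8.
total_params = 671e9      # all experts, resident on disk or in host RAM
active_params = 37e9      # routed experts + shared layers used for one token
bytes_per_param = 1       # 8-bit quantized weights

total_gb = total_params * bytes_per_param / 1e9
active_gb = active_params * bytes_per_param / 1e9

print(f"full model:        {total_gb:.0f} GB")   # must exist somewhere cheap
print(f"active per token:  {active_gb:.0f} GB")  # must reach the GPU per token
```

The gap between those two numbers is the whole argument: VRAM only ever needs to hold (or receive) the active slice, so the bulk of the model can live in cheaper RAM or on disk.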


Replies

lambda · today at 4:44 PM

While you can run with weights in RAM or even on disk, it gets a lot slower: even though only a fraction of the weights is used for any given token, that fraction can change with each token, so there is constant traffic moving weights to the GPU, which is far slower than having them in GPU RAM to begin with, and slower still if you stream from disk. Possible, yes, and maybe OK for some purposes, but you may find it painfully slow.
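The slowdown can be sketched as a bandwidth bound: if every token needs its active weights moved across some link, the link's bandwidth caps tokens per second. The bandwidth figures below are ballpark assumptions for illustration, and this is the worst case where each token's experts must be re-fetched in full (in practice experts overlap between tokens and caching helps).

```python
# Worst-case tokens/sec when ~37 GB of active weights must cross a link per token.
# Bandwidths are rough, assumed figures, not measurements from the thread.
active_bytes = 37e9  # active weights per token, 8-bit (assumption from above)
links = {
    "GPU HBM":      3000e9,  # weights already resident on the GPU, ~3 TB/s
    "PCIe 5.0 x16":   63e9,  # streaming from host RAM, ~63 GB/s
    "NVMe SSD":        7e9,  # streaming from a fast SSD, ~7 GB/s
}
for name, bw in links.items():
    print(f"{name:>12}: {bw / active_bytes:5.1f} tok/s max")
```

Even these crude numbers show the two-orders-of-magnitude gap the comment describes: tens of tokens/sec from VRAM versus roughly one per second over PCIe and well under one from disk.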