Hi everyone, I'm kinda involved in retrogaming, and through some experiments I ran into the following question: "Would it be possible to run transformer models bypassing the CPU/RAM, connecting the GPU directly to the NVMe?"
This is the result of that question and some weekend vibecoding (the linked library repository is in the readme as well). It seems to work, even on consumer GPUs, though it should work better on professional ones.
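For anyone wondering what the NVMe → GPU path looks like in code, here's a minimal sketch of the general idea using kvikio, NVIDIA's Python bindings for the cuFile / GPUDirect Storage API. It's not the repo's actual code; the path and shape are placeholders, and on hardware without GDS support kvikio falls back to a bounce buffer through host RAM.

```python
# Minimal sketch: read a weight shard from NVMe directly into GPU memory
# via GPUDirect Storage (cuFile). Without GDS support, kvikio transparently
# falls back to a bounce buffer in host memory.
import cupy as cp
import kvikio

# Hypothetical layer shard dumped as raw fp16 on the NVMe drive.
path = "/mnt/nvme/llama-70b/layer_00.ffn_up.bin"
shape = (8192, 28672)

buf = cp.empty(shape, dtype=cp.float16)   # destination buffer lives in VRAM
with kvikio.CuFile(path, "r") as f:
    f.read(buf)                           # NVMe -> GPU, no CPU-side copy

# buf is now an ordinary CuPy array you can matmul against activations.
```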
0.2 tok/s is fine for experimentation, but it is not interactive in any meaningful sense. For many use cases, a well-quantized 8B or 13B that stays resident in VRAM will simply deliver a better latency-quality tradeoff.
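For context, that number is roughly what a purely bandwidth-bound estimate predicts: streaming a dense model from disk means every weight crosses the link once per token. Back-of-envelope with my own assumptions (something like a 70B model at 4-bit, ~7 GB/s sustained NVMe read), not measurements:

```python
# Back-of-envelope: tokens/sec when a dense model is streamed from NVMe.
# Assumptions (illustrative, not measured): 70B params, 4-bit weights,
# ~7 GB/s sustained read, every weight read once per generated token.
params = 70e9
bytes_per_param = 0.5                               # 4-bit quantization
nvme_read = 7e9                                     # bytes/s, Gen4 x4-ish

bytes_per_token = params * bytes_per_param          # ~35 GB per token
seconds_per_token = bytes_per_token / nvme_read     # ~5 s
print(f"{1 / seconds_per_token:.2f} tok/s")         # ~0.20 tok/s
```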
This is an interesting area for experiments. I suspect that in the longer term, model optimization (knowing which bits you can leave out without affecting the functioning of the model) will become the dominant area of research, just like it did with compression algorithms, because a model is effectively a lossy compression scheme.
And that's a good thing, because it helps democratize AI away from the silos that are being created.
Nice. I've been looking at doing something similar, more on the order of running a 1T model with less than half the available VRAM.
One rough workup suggested it should be theoretically possible to modify a piece of SGLang's routing layer to support JIT, predict-ahead expert swaps from Gen5 NVMe storage straight into GPU memory (roughly the idea in the sketch below).
I'm hoping that proves true. The setup relies on NVIDIA Dynamo, so NIXL primitives are available to support that.
Curious if anyone's tried this already.
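To make the "predict-ahead expert swap" idea concrete, here's a toy Python sketch; it is not SGLang's or NIXL's actual API, and the file layout, shapes, and `predicted_experts` helper are all hypothetical. The point is just that while layer N computes, you kick off async NVMe → GPU reads for the experts you expect layer N+1 to route to, so the transfer hides behind compute.

```python
# Toy predict-ahead expert cache; not SGLang's or NIXL's real API.
# Uses kvikio (cuFile/GPUDirect Storage) for async NVMe -> VRAM reads.
import cupy as cp
import kvikio

class ExpertCache:
    def __init__(self, expert_nbytes, root="/mnt/nvme/experts"):
        self.root = root
        self.expert_nbytes = expert_nbytes
        self.resident = {}   # (layer, expert_id) -> CuPy buffer in VRAM
        self.inflight = {}   # (layer, expert_id) -> (IOFuture, buffer, file)

    def prefetch(self, layer, expert_id):
        key = (layer, expert_id)
        if key in self.resident or key in self.inflight:
            return
        buf = cp.empty(self.expert_nbytes, dtype=cp.uint8)
        f = kvikio.CuFile(f"{self.root}/l{layer}_e{expert_id}.bin", "r")
        self.inflight[key] = (f.pread(buf), buf, f)   # async NVMe -> VRAM

    def get(self, layer, expert_id):
        key = (layer, expert_id)
        if key not in self.resident and key not in self.inflight:
            self.prefetch(layer, expert_id)           # cache miss: start now
        if key in self.inflight:
            future, buf, f = self.inflight.pop(key)
            future.get()                              # block until DMA is done
            f.close()
            self.resident[key] = buf
        return self.resident[key]

# Inside layer N's forward pass, with a hypothetical router guess for N+1:
#   for e in predicted_experts(layer + 1):
#       cache.prefetch(layer + 1, e)
```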
I wonder - could this be used for multi-tier MoE? E.g. active + most-used experts in VRAM, often-used in RAM, and less-used on NVMe?
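Something like this, maybe: a toy placement policy (no particular framework, slot counts purely illustrative) that buckets experts by how often the router has actually picked them:

```python
# Hypothetical tier assignment for MoE experts by observed routing frequency:
# hottest experts stay in VRAM, warm ones in pinned host RAM, cold ones stay
# on NVMe and are fetched on demand. Slot counts are illustrative only.
from collections import Counter

def assign_tiers(routing_history, vram_slots=16, ram_slots=64):
    """routing_history: iterable of expert ids chosen by the router."""
    by_freq = [expert for expert, _ in Counter(routing_history).most_common()]
    tiers = {}
    for rank, expert in enumerate(by_freq):
        if rank < vram_slots:
            tiers[expert] = "vram"   # resident on the GPU
        elif rank < vram_slots + ram_slots:
            tiers[expert] = "ram"    # pinned host memory, cheap H2D copy
        else:
            tiers[expert] = "nvme"   # read on demand (GPUDirect if available)
    return tiers
```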
I feel like we need an entirely new type of silicon for LLMs. Something completely focused on bandwidth and storage, probably at the sacrifice of raw computation power.
Didn't DirectX add an API (DirectStorage?) for loading assets directly into GPU memory? Would that work here?
Could be neat to see what happens if you give the 8B something like 6 GB instead of 10 GB. Something in between, where you still need the NVMe, but not at the ~3x ratio of the 70B model on 23 GB.
Nice work. PCIe P2P (GPUDirect (tm)) is such great stuff. Cool to see!
Yeah, GPUDirect (Storage) should let you DMA straight to and from a storage device.
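With the Python cuFile bindings the spill direction is basically one call (minimal sketch; the path and sizes are placeholders, and it falls back to a host bounce buffer without GDS):

```python
# Spill a GPU-resident buffer straight to NVMe via cuFile / GPUDirect Storage.
import cupy as cp
import kvikio

kv_cache_block = cp.zeros((1024, 8192), dtype=cp.float16)   # lives in VRAM
with kvikio.CuFile("/mnt/nvme/spill/block_000.bin", "w") as f:
    f.write(kv_cache_block)        # GPU -> NVMe, DMA when GDS is available
```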
I wonder... what if the M.2 storage were actually DRAM? You probably don't need persistence for spilling a model off the GPU. How would it fare vs just adding more host memory? The M.2 RAM would be less flexible, but it would keep the system RAM free for the CPU.
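Rough link-bandwidth arithmetic (theoretical peaks, my own numbers, Gen4 assumed; Gen5 roughly doubles the PCIe figures): whatever sits in the M.2 slot is capped by its x4 link, while the GPU reaches host RAM over its own x16 link, so plain extra host memory should win on raw throughput; the appeal of DRAM-on-M.2 would mostly be keeping system RAM free, as you say.

```python
# Theoretical peak bandwidths (Gen4; Gen5 doubles the PCIe numbers).
# An M.2 slot caps any device in it, DRAM or flash, at a x4 link.
pcie_gen4_per_lane = 1.97e9             # bytes/s after encoding overhead
m2_slot   = 4  * pcie_gen4_per_lane     # ~7.9  GB/s: DRAM-on-M.2 ceiling
gpu_x16   = 16 * pcie_gen4_per_lane     # ~31.5 GB/s: GPU <-> host RAM path
ddr5_dual = 2 * 44.8e9                  # ~89.6 GB/s: DDR5-5600, dual channel
print(f"M.2 x4: {m2_slot/1e9:.1f} GB/s | GPU x16: {gpu_x16/1e9:.1f} GB/s | "
      f"host DRAM: {ddr5_dual/1e9:.1f} GB/s")
```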