Did you see nano-vllm [1] yesterday, from a DeepSeek employee? ~1,200 LOC and reportedly faster than vanilla vLLM.
1. https://github.com/GeeeekExplorer/nano-vllm
Is it faster for large models, or are the optimizations more noticeable with small models? Seeing that the benchmark uses a 0.6B model made me wonder about that.
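One way to find out is to re-run the repo's benchmark with a larger checkpoint. A minimal sketch, assuming nano-vllm's offline API mirrors vLLM's (the `LLM`/`SamplingParams` pattern its README shows); the model path, batch size, and token accounting here are illustrative, not taken from the repo's own bench script:

```python
"""Rough throughput check: same prompt batch, bigger model.

Assumes nano-vllm's offline API mirrors vLLM's, per its README.
"""
import time

from nanovllm import LLM, SamplingParams

MODEL_PATH = "/path/to/model"  # hypothetical; try a 7B+ checkpoint here

llm = LLM(MODEL_PATH, tensor_parallel_size=1)
params = SamplingParams(temperature=0.6, max_tokens=256)
prompts = ["Write a short story about a robot."] * 64  # fixed batch

start = time.perf_counter()
outputs = llm.generate(prompts, params)
elapsed = time.perf_counter() - start

# Nominal throughput: assumes most sequences run to max_tokens;
# swap in real per-output token counts if the API exposes them.
print(f"~{len(prompts) * params.max_tokens / elapsed:.0f} tok/s ({elapsed:.1f}s)")
```

The intuition behind the question: at 0.6B the per-step GPU work is tiny, so Python and scheduler overhead dominate and a lean engine can win big; timing the same batch at 7B+ would show whether the gap persists once kernel time dominates.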