Regular dense models are very fast if you do batch inference. GPT-OSS 20B gets close to 2k tok/s aggregate on a single 3090 at bs=64 (might be misremembering details here).
Right but everyone else is talking about latency, not throughput.
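The throughput-vs-latency tension above can be made concrete with a bit of arithmetic. A minimal sketch, assuming the (possibly misremembered) numbers from the thread — 2k tok/s aggregate at batch size 64 — and that decode steps are shared evenly across the batch:

```python
# Illustrative figures only: 2k tok/s aggregate at bs=64 is the thread's
# (hedged) claim for GPT-OSS 20B on a 3090; real numbers depend on model,
# hardware, and serving stack.
aggregate_tok_s = 2000.0
batch_size = 64

# Each request in the batch advances one token per decode step, so the
# per-stream rate is the aggregate rate divided by the batch size.
per_stream_tok_s = aggregate_tok_s / batch_size   # 31.25 tok/s per request

# Per-token latency as seen by a single user.
per_token_latency_ms = 1000.0 / per_stream_tok_s  # 32 ms per token

print(per_stream_tok_s, per_token_latency_ms)
```

So high aggregate throughput at large batch sizes coexists with modest per-user speed: each individual stream only sees ~31 tok/s here, which is why the latency-focused framing leads to different conclusions.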