Kimi K2.6 is about on par with GPT 5.2, so I'd say open-weight models are about six months behind.
The Q4 quant needs roughly 600GB of RAM for the weights alone, before any context, so not exactly consumer-hardware friendly.
Has Kimi found a way to vastly reduce the amount of VRAM required without running at 3 tokens per second? That’s the real concern.
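For what it's worth, the ~600GB figure falls out of simple arithmetic: total parameter count times effective bits per weight. A minimal sketch, assuming roughly 1T total parameters (Kimi K2's published size; K2.6 presumed similar) and ~4.8 effective bits per weight for a Q4_K_M-style GGUF quant (4-bit quants carry some overhead above 4.0 bits):

```python
# Back-of-envelope weight-memory estimate for a quantized model.
# Assumptions (mine, not from the thread): ~1e12 parameters and
# ~4.8 effective bits/weight for a Q4_K_M-style quant.

def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight memory in decimal GB, ignoring KV cache."""
    return n_params * bits_per_weight / 8 / 1e9

print(quantized_size_gb(1e12, 4.8))  # ~600 GB, before any context
```

KV cache for long contexts comes on top of that, which is why "without context" matters in the 600GB number.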