Hacker News

armcat · yesterday at 7:27 PM · 4 replies

Out of curiosity, what kind of specs do you have (GPU / RAM)? I saw the requirements and it's beyond my budget, so I'm "stuck" with smaller Qwen coders.


Replies

zeroxfe · yesterday at 8:06 PM

I'm not running it locally (it's gigantic!); I'm using the API at https://platform.moonshot.ai
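
The API is OpenAI-compatible, so a minimal sketch looks like this. The base URL and model id below are assumptions from memory; check the platform docs for the current values.

    # Minimal sketch of calling Kimi through Moonshot's OpenAI-compatible API.
    # The base_url and model id are assumptions; see platform.moonshot.ai.
    from openai import OpenAI

    client = OpenAI(
        api_key="YOUR_MOONSHOT_API_KEY",          # issued on platform.moonshot.ai
        base_url="https://api.moonshot.ai/v1",    # assumed OpenAI-compatible endpoint
    )

    resp = client.chat.completions.create(
        model="kimi-k2.5",  # placeholder model id; use whatever the platform lists
        messages=[{"role": "user", "content": "Write a binary search in Python."}],
    )
    print(resp.choices[0].message.content)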

Carrok · yesterday at 7:31 PM

Not OP, but OpenCode with DeepInfra seems like an easy way.
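
Same OpenAI-compatible pattern as the snippet above, just pointed at DeepInfra; OpenCode only needs that base URL and key in its provider config. The endpoint is DeepInfra's OpenAI-compatible one, and the model id is a guess, so check their Kimi listing for the exact string.

    # Sketch: same client pattern, pointed at DeepInfra's OpenAI-compatible endpoint.
    from openai import OpenAI

    client = OpenAI(
        api_key="YOUR_DEEPINFRA_API_KEY",
        base_url="https://api.deepinfra.com/v1/openai",
    )

    resp = client.chat.completions.create(
        model="moonshotai/Kimi-K2-Instruct",  # assumed id; the K2.5 listing may differ
        messages=[{"role": "user", "content": "Hello from DeepInfra"}],
    )
    print(resp.choices[0].message.content)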

observationist · yesterday at 11:46 PM

API costs for these big models through third-party hosts tend to be a lot lower than API calls to the big 4 American platforms. You definitely get more bang for your buck.

tgrowazay · yesterday at 8:08 PM

Just pick up any >240GB VRAM GPU off your local BestBuy to run a quantized version.

> The full Kimi K2.5 model is 630GB and typically requires at least 4× H200 GPUs.
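
For what it's worth, that figure pencils out roughly as follows: an H200 has 141 GB of HBM, so four of them hold 564 GB, which is already short of the 630 GB of weights before any KV cache, hence the quants. A rough sketch of the arithmetic (the H200 capacity is a real spec; the 4-bit scaling is an assumption):

    # Back-of-envelope GPU memory math for the 630 GB checkpoint quoted above.
    # H200 HBM capacity is a real spec; the "int4 is about half the size"
    # scaling is a rough assumption, and KV cache / activations are ignored.
    import math

    H200_GB = 141
    CHECKPOINT_GB = 630

    def min_gpus(weight_gb: float, gpu_gb: float = H200_GB) -> int:
        """Minimum whole GPUs just to hold the weights."""
        return math.ceil(weight_gb / gpu_gb)

    for label, scale in [("full checkpoint", 1.0), ("~4-bit quant (assumed)", 0.5)]:
        gb = CHECKPOINT_GB * scale
        print(f"{label}: ~{gb:.0f} GB of weights -> at least {min_gpus(gb)}x H200")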
