Hacker News

Aurornis · today at 3:47 AM

A suggestion: Don't invest in any new hardware to run an LLM locally until you've tried the model for a while through OpenRouter.

The Qwen models are cool, but if you're coming from Opus you will be somewhere between mildly and very disappointed, depending on the complexity of your work.
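
For anyone who hasn't tried it: OpenRouter exposes an OpenAI-compatible endpoint, so a quick test is only a few lines of Python. A minimal sketch; the model slug and prompt here are placeholders, so check openrouter.ai/models for the current IDs:

    # Minimal OpenRouter smoke test via the OpenAI-compatible API.
    # Assumes OPENROUTER_API_KEY is set in the environment; the model
    # slug below is a placeholder -- look up current IDs at
    # https://openrouter.ai/models before running.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=os.environ["OPENROUTER_API_KEY"],
    )

    resp = client.chat.completions.create(
        model="qwen/qwen3-coder",  # placeholder slug; substitute the model you're eyeing
        messages=[{"role": "user", "content": "Refactor this function: ..."}],
    )
    print(resp.choices[0].message.content)

A few dollars of API credit against your real workload tells you more than any benchmark, and costs a lot less than a GPU.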


Replies

zozbot234 · today at 11:02 AM

OpenRouter-served models are often more heavily quantized than what you can run locally or spin up yourself on generic cloud infrastructure.
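
If I remember right, OpenRouter lets you constrain routing by quantization in the request's provider preferences, which makes the comparison fairer. A sketch under that assumption; the "provider"/"quantizations" fields are my recollection of their routing options, so verify against the OpenRouter docs before relying on them:

    # Sketch: ask OpenRouter to route only to providers serving
    # higher-precision weights, so you aren't judging an int4 quant.
    # The "provider" preference object and its "quantizations" key are
    # unverified assumptions about OpenRouter's routing API -- check
    # their documentation. extra_body just passes extra JSON through
    # the OpenAI SDK to the request body.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=os.environ["OPENROUTER_API_KEY"],
    )

    resp = client.chat.completions.create(
        model="qwen/qwen3-coder",  # placeholder slug
        messages=[{"role": "user", "content": "Hello"}],
        extra_body={"provider": {"quantizations": ["fp16", "bf16"]}},
    )
    print(resp.choices[0].message.content)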