Hacker News

jimberlage · today at 12:41 AM

Yeah this was me. I just got a message that I hit my limit and now I am looking into what it takes to run Qwen on local hardware.


Replies

Aurornis · today at 3:47 AM

A suggestion: Don't invest in any new hardware to run an LLM locally until you've tried the model for a while through OpenRouter.

The Qwen models are cool, but if you're coming from Opus you will be somewhere between mildly and very disappointed, depending on the complexity of your work.
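Trying a Qwen model through OpenRouter doesn't require any special SDK, since OpenRouter exposes an OpenAI-compatible chat-completions endpoint. A minimal sketch with the standard library (the model slug `qwen/qwen3-coder` and the `OPENROUTER_API_KEY` env var name here are illustrative, not from the thread):

```python
# Sketch: kick the tires on a Qwen model via OpenRouter before buying hardware.
import json
import os
import urllib.request

def build_request(prompt: str, model: str = "qwen/qwen3-coder") -> urllib.request.Request:
    """Build a chat-completion request against OpenRouter's OpenAI-compatible API."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            # Illustrative env var; set it to your OpenRouter API key.
            "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Write a Python function that reverses a linked list.")
print(req.full_url)
# Send with urllib.request.urlopen(req) once OPENROUTER_API_KEY is set.
```

If the responses hold up against your real workload for a week or two, that's a much cheaper signal than a GPU purchase.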

andrewjvb · today at 1:49 AM

Been having a ton of fun with copilot cli directed at local qwen 3.6. If you're willing to be more specific in your prompts, then delegating from GPT-5.4 or Opus to local qwen has been great so far.