Hacker News

bottlepalm · today at 2:15 AM · 2 replies

You're spreading FUD. There's nothing you can run locally that's on par with the speed/intelligence of a SOTA model.


Replies

3836293648 · today at 2:29 AM

You may be correct about the level of models you can actually run on consumer hardware, but it's not FUD, and you're being needlessly aggressive here.

CamperBob2 · today at 6:32 AM

Incorrect as of a couple of days ago, when Qwen 3.5 came out. It's a GPT-5-class model that you can run at full strength on a small DGX Spark or Mac cluster, and it still works pretty well after quantization.