Hacker News

quacker · today at 1:09 AM

I could have used this article before I spent the weekend arriving at the same conclusion!

Same laptop, and my contrived test was having it fix 50 or so lint errors in a small vibe-coded C++ repo. I wanted it to be able to handle a bunch of small tasks without getting stuck too often.
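
For anyone who wants to try something similar, here's roughly the shape of the setup (a sketch, assuming clang-tidy as the linter and a CMake project; substitute whatever your repo actually uses):

    # Generate compile_commands.json so clang-tidy sees the build flags,
    # then count the warnings the agent has to work through.
    cmake -S . -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
    clang-tidy -p build src/*.cpp 2>/dev/null | grep -c "warning:"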

GPT OSS 20B was usable but slow, and it frequently made mistakes: adding or duplicating statements unnecessarily, reporting warnings as fixed without actually editing the code, and so on.

Qwen 3.5 9B with Opencode was much faster and actually worked through a majority of the lint warnings without getting stuck, even through compaction, and it fixed every warning with a correct edit.

I first tried 4-bit MLX quants of Qwen 3.5 9B, but it would eventually crash due to insufficient memory. I switched to a GGUF, which I run with llama.cpp, and it runs without crashing.
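
For reference, this is the kind of invocation I mean (a minimal sketch; the GGUF filename is a placeholder, and on Apple Silicon -ngl 99 offloads all layers to Metal):

    # Serve the model through llama.cpp's OpenAI-compatible server,
    # which a coding agent like Opencode can then point at.
    llama-server -m qwen3.5-9b-q4_k_m.gguf -c 16384 -ngl 99 --port 8080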

It is absolutely not comparable to frontier models. It's way slower, gets basic info wrong, and really can't handle non-trivial tasks in one go. I asked it for an architecture summary of the project, and it claimed use of a library that isn't present anywhere in the repo. So YMMV, but it's still nice to have, and hopefully the local LLM story gets much better on modest hardware over time.


Replies

solenoid0937 · today at 1:27 AM

> It is absolutely not comparable to frontier models.

This is not said often enough.

Yes, local LLMs are great! But reading most HN posts on the subject, you'd think they're within reach of Opus 4.7.

There is a very small, very vocal, very passionate crowd that dramatically overstates the capabilities of local LLMs on HN.

layoric · today at 2:06 AM

Honestly surprised to hear that GPT OSS 20B runs slow on Mac hardware. It's absolutely one of the fastest models I've run on local GPUs for its size, though I've only tried Nvidia cards.

Edit: TIL it's a MoE with only 3.6B active parameters, which explains a lot.
