Hacker News

guilamu · yesterday at 8:30 PM

Yes, those two models were tested on my own PC (local inference using my own CPU/GPU), so something may be bugged in my setup. gemma4-26b should be far better than gemma4-e4b.


Replies

embedding-shape · yesterday at 9:08 PM

Sounds like you may be using a worse quantization on the bigger model? Quantization matters a lot for quality; basically anything below Q8 is borderline unusable. If the quantization level isn't already specified in a benchmark, it probably should be.
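One reason a heavier quantization often sneaks in on the bigger model is memory: the weight footprint scales roughly with parameters × bits per weight. A rough back-of-the-envelope sketch (the bits-per-weight figures are approximations; llama.cpp-style quants carry per-block scale overhead, so real files run slightly larger):

```python
# Approximate bits per weight for common quantization formats.
# These are ballpark figures, not exact file sizes.
BITS_PER_WEIGHT = {
    "F16": 16.0,
    "Q8_0": 8.5,   # ~8 bits plus per-block scale overhead
    "Q4_0": 4.5,   # ~4 bits plus per-block scale overhead
}

def weight_size_gib(n_params_billion: float, quant: str) -> float:
    """Rough in-memory/on-disk size of the weights in GiB."""
    total_bits = BITS_PER_WEIGHT[quant] * n_params_billion * 1e9
    return total_bits / 8 / 2**30

# A hypothetical ~27B-parameter model at different quant levels:
for q in BITS_PER_WEIGHT:
    print(f"{q}: ~{weight_size_gib(27, q):.1f} GiB")
```

This makes the trade-off concrete: at Q8 a ~27B model needs roughly 27 GiB for weights alone, which overflows most consumer GPUs, while a small ~4B model fits at Q8 easily. So the large model often ends up at Q4 or lower and can underperform expectations.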