
XCSme yesterday at 3:46 PM

Seems to be marginally better than gpt-20b, but this is 30b?


Replies

strangescript yesterday at 3:50 PM

I find gpt-oss-20b very benchmaxxed, and as soon as a solution isn't clear it will hallucinate.

lostmsu yesterday at 4:00 PM

It actually seems worse. gpt-oss-20b is only 11 GB because it ships pre-quantized in MXFP4, while GLM-4.7-Flash is 62 GB. In that sense GLM is closer to, and actually slightly larger than, gpt-oss-120b, which is 59 GB.
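
Rough sanity check on those sizes (a sketch only: ~4.25 effective bits/weight for MXFP4 with shared block scales, 16 bits/weight for BF16, and ballpark parameter counts, none of which are official figures):

    # Back-of-envelope checkpoint sizes; numbers below are approximations.
    def approx_size_gb(params_billion: float, bits_per_weight: float) -> float:
        """Rough on-disk size in GB for a dense checkpoint."""
        total_bytes = params_billion * 1e9 * bits_per_weight / 8
        return total_bytes / 1e9

    # gpt-oss-20b ships with most weights in MXFP4 (~4.25 bits/weight effective)
    print(approx_size_gb(20, 4.25))  # ~10.6 GB, close to the 11 GB cited
    # a ~30B model stored in BF16 (16 bits/weight)
    print(approx_size_gb(30, 16))    # ~60 GB, in line with the 62 GB cited
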

Also, on SWE-Bench Verified the gpt-oss model card reports 60.7 for 20b (GLM claims they measured 34 for that model) and 62.7 for 120b, versus the 59.7 GLM reports for GLM-4.7-Flash.