Hacker News

IceWreck | yesterday at 9:22 PM

> This is speculative, but I suspect that if we dropped one of the latest, most capable open-weight LLMs, such as GLM-5, into a similar harness, it could likely perform on par with GPT-5.4 in Codex or Claude Opus 4.6 in Claude Code.

People have been doing that for over a year already. GLM officially recommends plugging it into Claude Code (https://docs.z.ai/devpack/tool/claude), and any model can be plugged into Codex CLI (it's open source, and the model can be set via its config file).
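For context, the z.ai devpack approach linked above works by pointing Claude Code at GLM's Anthropic-compatible endpoint via environment variables. A minimal sketch, assuming the endpoint URL from z.ai's docs and a placeholder API key:

```shell
# Hedged sketch: route Claude Code to GLM's Anthropic-compatible API.
# The base URL is taken from z.ai's devpack docs; the token is a placeholder.
export ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic"
export ANTHROPIC_AUTH_TOKEN="your-zai-api-key"  # placeholder, not a real key

# With these set, launching Claude Code sends requests to GLM instead of
# Anthropic's API:
# claude
```

Codex CLI takes the equivalent settings in its config file rather than environment variables; see its repository for the provider/model config format.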


Replies

girvo | yesterday at 9:47 PM

And while it’s not Opus level, it is incredibly good. I use it (along with qwen3.5-plus) almost exclusively on my personal projects.