
manishsharan · yesterday at 3:55 PM

I am not following this obsession with SOTA and benchmark rankings.

I have been using DeepSeek and GLM models with OpenCode, and Codex and Claude, side by side.

I have not found the Chinese models lacking. I enjoy coding, like to maintain full control of my codebase, and deeply care about the GoF patterns, so I am very stringent about what I want the LLM to code and how to code it.

So from my perspective, they are all about the same.


Replies

amunozo · yesterday at 4:00 PM

I agree with that, but for more complex autonomous changes the differences are considerable. However, it seems most models will reach a saturation point at which they are useful for almost everything, and the differences will show up only in increasingly niche and specialized tasks.