I want to believe it's gonna be good, but after trying GPT-5.5 even the most advanced Chinese models seem depressing.
I am not following this obsession with SOTA and benchmark rankings.
I have been using DeepSeek and GLM models with OpenCode, Codex, and Claude side by side.
I have not found the Chinese models lacking. I enjoy coding, like to maintain full control of my codebase, and deeply care about the GoF patterns. So I am very stringent about what I want the LLM to code and how to code it.
So from my perspective, they are all about the same.
Honestly, it depends on the context in which this performance matters. Mistral is quite cheap.
This is a French model sir