Interesting, does that mean they're optimizing for cases where China is disconnected from the internet completely?
No. For example, Alibaba has huge proprietary Qwen models, like Qwen 3 Max. You just never hear about them because that space in western LLM discussions is occupied by the US labs.
No, there are datacenters in China they can still use if their internet is down.
Well, my answer was partially tongue-in-cheek. However, under the circumstances I mentioned, I did get better results with a local model. And even today I really like Deepseek for dev work: for me and my use cases it still outperforms Opus. The problem is that we don't have good, universal benchmarks; instead we have benchmaxxing. Chinese models may be perfect for my use cases, but for yours, American ones may outperform them by a huge margin. So I urge you to use YCI - Your Capability Index.
Let's see who's disconnected from the internet completely :)
Almost all Chinese models are open-weight research models.
My theory is that these models serve the purpose of being relatively easy to run and tweak for researchers, and mainly exist to demonstrate the effectiveness of new training and inference techniques, as well as the strength of the AI labs that created them.
They are not designed to be state-of-the-art commercial models.
By choosing bigger model sizes, running more training epochs, and drilling the models a bit harder on benchmark questions, I'm sure the Chinese labs could close the gap, but that would delay these models and make them more expensive and harder to run, without showing any tangible research benefit.
Also, my 2c: I was perfectly happy with Sonnet 3.7 a year ago, so if the Chinese labs have a model really as good as that (not just one that benchmarks as well), I'd definitely like to try it.