I don't want to criticise models for things they weren't trained to do, or for constraints their companies operate under. None of these companies has claimed that their models don't hallucinate or that they always get the facts right.
For example,
* I am not expecting Gemini 3 Flash to cure cancer, and I don't constantly criticise them for not doing so
* Nor am I expecting Mistral to outcompete OpenAI/Claude with each release, because the talent density and capital on OpenAI's side are obviously on a different level
* Nor am I expecting GPT 5.3 to say anytime soon: yes, Israel committed genocide and politicians covered it up
We should set expectations properly and not complain about Tiananmen every time a Chinese company releases a model. We should learn to appreciate that they are releasing these models at all, creating very good competition, and that they are very hard-working people.
I think most people feel differently about an emergent failure in a model vs one that's been deliberately engineered in for ideological reasons.
It's not that Chinese models just happen to refuse to talk about the topic; it trips guardrails that have been intentionally placed there, just as Claude has guardrails against telling you how to make sarin gas.
e.g. ChatGPT used to have an issue where it steadfastly refused to make any "political" judgments, which led it to genocide denial or minimization: asked "could genocide be justifiable," it would sometimes refuse to say "no." Maybe it still does this, I haven't checked, but it seemed very clearly a product of being strongly biased against being "political", which is itself an ideology and worth talking about.