Interesting, but for a country like China, where companies are partially owned by the CCP itself, I feel like most of these discussions would (should?) have happened in a way that doesn't leak outside.
If the government formally announces it, I believe they must have already taken appropriate action against it.
Personally, I believe we're going to see distillations of large language models, perhaps even open-weights Euro/American models with filtering applied.
I do feel like everybody understands the separation of concerns here: nobody really asks Chinese models about China. But I'm a bit worried, since I recently started wondering whether AI models can still push a Chinese narrative in less obvious contexts, say, when someone is building a website related to another nation, or anything similar. I don't think it would be that big of a deal, and I will still use Chinese models, but an article like this definitely reduces China's influence overall.
America and Europe, please treat creating open-source / open-weights models without censorship (like the gpt model) as a major concern. You already have intelligence on the level of Gemini Flash, so just open-source something similar that can beat Kimi/DeepSeek/GLM.
Edit: Although, thinking about it, I feel like the largest impact wouldn't be on us outsiders but rather on people in China, because they have access to Chinese models while even open-weights American models face very strict controls there. So if Chinese models carry propaganda, it would most likely work to convince the average Chinese citizen. I don't want to put on a conspiracy hat, but if we do: I could see the Chinese social credit system taking a look at people who ask questions suspicious of the CCP on Chinese chatbots.
Last time I checked, China's state-owned enterprises aren't all that invested in developing AI chatbots, so I imagine that the amount of control the central government has is about as much as their control over any tech company. If anything, China's AI industry has been described as under-regulated by people like Jensen Huang.
A technology created by a certain set of people will naturally come to reflect the views of those people, even in areas where people act like it's neutral (e.g., cameras that are biased towards people with lighter skin). This is the case for all AI models (Chinese, American, European, etc.), so I wouldn't single out a model that censors information its creators don't like and call it propaganda, since we naturally have our own version of that.
The actual chatbots themselves seem to be relatively useful.