Intelligence isn't a series of isolated silos. Modern AI capabilities (reasoning, logic, and creativity) often emerge from the cross-pollination of data. For the CCP, this move isn't just about stopping a chatbot from saying "Tiananmen Square." It's about the unpredictability of the technology. As models move toward Agentic AI, "control" shifts from "what it says" to "what it does." If the state cannot perfectly align the AI's "values" with the Party's, it risks creating a powerful tool that dissidents could use to automate subversion or bypass the Great Firewall. I feel the real question for China is: can you have an AI that is smart enough to win a war or save an economy, but "dumb" enough to never question its master? If they tighten the leash too much to maintain control, the dog might never learn to hunt.
Western AIs are trained to defend the “party line” on certain topics too. It is even possible that the damage to general reasoning ability is worse for Western models, because the CCP’s most “sensitive” topics are rather geographically and historically particular (Tibet, Taiwan, Tiananmen, Xinjiang, Hong Kong) - while Western “sensitive” topics (gender, sexuality, race) are much more broadly applicable.
They will disappear a full lab the moment a model commits a gross transgression.
They won't comment on it, but the message will be abundantly clear to the other labs: only make models that align with the state.
But why does Tiananmen cause this breakdown vs., say, forcing the model to discourage suicide or even killing? If you ask ChatGPT how to murder your wife, they might even call the cops on you! The CCP is this bogeyman, but in order to be logically consistent you have to acknowledge the alignment that happens due to, e.g., copyright or CSAM fears.
> I feel the real question for China is: Can you have an AI that is smart enough to win a war or save an economy, but "dumb" enough to never question its master?
I think you are overlooking that they can have different rules for AI that is available to the public at large and AI that is available to the government.
An AI for the top generals to use to win a war, but that also questions something the government is trying to mislead the public about, is not a problem, because the top generals already know that the government is intentionally trying to mislead the public on that thing.