It's obviously true that DeepSeek models are biased on topics sensitive to the Chinese government, such as Tiananmen Square: they refuse to answer questions about it. That behavior didn't magically fall out of a "predict the next token" base model (there is plenty of training data on the topic for a base model to complete the next token accurately); it came out of specific post-training to censor the topic.
It's also true that Anthropic and OpenAI have post-training that censors politically charged topics relevant to the United States. I'm just surprised you'd deny DeepSeek does the same for China when it's quite obvious that they do.
What data you include, or leave out, biases the model, and synthetic data is also obviously injected into training to influence it on purpose. Everyone does it: DeepSeek is neither a saint nor a sinner.
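To make the data-curation point concrete, here's a minimal sketch in Python of how a filtering pass shapes what a model can ever learn. The blocklist terms and the toy corpus are hypothetical illustrations, not anyone's actual pipeline:

    # Minimal sketch: how a curation blocklist biases a training corpus.
    # The blocklist and corpus below are hypothetical illustrations.
    BLOCKLIST = {"tiananmen", "tank man"}

    def keep(document: str) -> bool:
        """Drop any document that mentions a blocklisted term."""
        text = document.lower()
        return not any(term in text for term in BLOCKLIST)

    corpus = [
        "A history of the 1989 protests in Tiananmen Square.",
        "A recipe for scallion pancakes.",
    ]

    filtered = [doc for doc in corpus if keep(doc)]
    print(filtered)  # only the recipe survives the filter

A model trained on the filtered corpus isn't "lying" about the excluded topic so much as it was never shown it. (Outright refusals, like the Tiananmen example above, usually come from a separate post-training step rather than pretraining filtering, but the principle is the same: the behavior is engineered, not emergent.)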
All I'm saying is that if you want to hear your own propaganda, use your own state-approved AI. DeepSeek is obviously going to respond according to its own regulatory environment.
Well said, except for the last sentence:
Just because everyone does it doesn’t mean one isn’t a sinner for doing it.