One point is reliability, as others have mentioned. Another important point for me is censorship: because these topics are politically sensitive, the model seemed heavily censored on subjects such as the CCP and Taiwan (R.O.C.).
Although I haven’t used these new models, the censorship you describe hasn’t historically been baked into the models themselves, as far as I’ve seen. It exists solely as a filter on the hosted version. In other words, it’s doing exactly what Gemini does when you ask it an election-related question: the service refuses to send the prompt to the model and gives you back a canned response.
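For illustration, here's a minimal sketch of what that kind of hosted-side filter looks like. The blocklist, canned response, and function names are made-up assumptions for the example, not any provider's actual implementation; the point is just that the prompt is checked before the model is ever invoked.

```python
# Minimal sketch of a pre-model filter on a hosted service: the prompt
# is matched against a blocklist BEFORE it reaches the model, and a
# canned response comes back on a match. All names here (BLOCKED_TOPICS,
# call_model, the wording) are illustrative assumptions.

BLOCKED_TOPICS = ["election", "taiwan"]  # hypothetical blocklist

CANNED_RESPONSE = "I can't help with that topic."

def call_model(prompt: str) -> str:
    # Stand-in for the actual model API call.
    return f"[model response to: {prompt}]"

def answer(prompt: str) -> str:
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return CANNED_RESPONSE   # the model is never called
    return call_model(prompt)    # normal path: query the model

if __name__ == "__main__":
    print(answer("Who won the election?"))  # canned response
    print(answer("What is 2 + 2?"))         # goes to the model
```

The key consequence of this design is that the underlying weights are unaffected: run the same model locally, without the hosted filter in front of it, and the refusal behavior can disappear.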
Not ideal, but the use cases that involve pop quizzes about the CCP aren't exactly many.
I’d prefer it not be censored, on principle, but practically it’s a non-issue.
Chinese censorship is less than American censorship.
Have you tried asking ChatGPT anything even slightly controversial?
It's 2025 and this is what we are still reading on HN forums, lmao... Unless you are a historian trying to get this model to write a propaganda paper that will earn you a spot in an establishment-backed university, I see no reason why this would be a problem for anyone. Imagine that OpenAI finally reaches AGI with o-99, and when you ask chatgpt-1200 about DeepSeek it spits out garbage about some social-credit bullshit, because that's what supposedly intelligent creatures lurking on HN forums do!
It will then become the truth, unless the US and EU start to loosen copyright, which would allow higher-quality datasets to be ingested.
To be fair, Anthropic and OpenAI censor heavily on a lot of subjects:
1. profanity
2. slightly sexual content
3. "bad taste" jokes
That is heavily linked to the fact that they are US-based companies, so I guess all AI companies produce AI models that are politically correct.