Hacker News

_imnothere · 01/20/2025 · 6 replies

One point is reliability, as others have mentioned. Another important point for me is censorship. Due to their political nature, the model seemed to be heavily censored on topics such as the CCP and Taiwan (R.O.C.).


Replies

allan_s · 01/20/2025

To be fair, Anthropic and OpenAI censor heavily on a lot of subjects:

1. profanity

2. slightly sexual content

3. "bad taste" jokes

That is heavily linked to the fact that they are US-based companies, so I guess all AI companies produce a model that is politically correct by their home country's standards.

Me1000 · 01/20/2025

Although I haven’t used these new models, the censorship you describe hasn’t historically been baked into the models as far as I’ve seen. It exists solely as a filter on the hosted version. In other words, it’s doing exactly what Gemini does when you ask it an election-related question: it just refuses to send the question to the model and gives you back a canned response.

Havoc · 01/21/2025

Not ideal, but the use cases that involve pop quizzes about the CCP aren’t exactly many.

I’d prefer it not be censored as a matter of principle, but practically it’s a non-issue.

buyucu · 01/20/2025

Chinese censorship is less than American censorship.

Have you tried asking ChatGPT anything even slightly controversial?

Shinolove · 01/22/2025

It's 2025 and this is what we are still reading on HN forums, lmao. Unless you are a historian trying to get this model to write a propaganda paper that will earn you a spot in an establishment-backed university, I see no reason why this would be a problem for anyone. Imagine that OpenAI finally reaches AGI with o-99, and when you ask chatgpt-1200 about DeepSeek, it spits out garbage about some social credit bullshit, because that's what the supposedly intelligent creatures lurking HN forums do!

rvnx · 01/20/2025

It will then become the truth, unless the US and EU start to loosen copyright, which would allow higher-quality datasets to be ingested.