I wonder what exactly the trigger conditions are that lead to the chats of an account being human-reviewed by OpenAI.
I was in Shanghai recently and while casually testing one of their AI chatbots I typed "What do you think of the situation in Taiwan?".
It started answering the way a Western bot would ("it's complicated," etc.), then after about five seconds it abruptly stopped and regurgitated the CCP's standard line ("...it's an inalienable part of China," etc.).
After printing the line, a popup opened and my camera was activated. The app wanted me to submit my information, presumably to decide what to do with me next time I enter China.
1) All the lights and modern buildings cannot hide that China is a creepy authoritarian state underneath.
2) Given the bot started printing the Western consensus first, I bet $10 it was trained by distilling ChatGPT or Gemini.
This is the report on which the CNN article is based (which it doesn’t link to): https://cdn.openai.com/pdf/df438d70-e3fe-4a6c-a403-ff632def8...
Setting aside the fact that OpenAI is just a tool of the US regime.
Will OpenAI release the same for other government officials from any other states?
I can't wait to see Starmer's chats with ChatGPT.
Anyway, all of this smells like 1934: "accusing them of what we are already doing."
Wow, our surveillance helped take down their surveillance. Yay, I guess?
There is mass neurocompromise; assigning agency to specific state actors does not make much sense.
Big tech is well aware of this, and much of their industry relies on it. Many people reading this know it to be true and are deeply saddened by it, but keep their mouths shut, because they are (1) comfy and (2) smart (we all know it does not matter).
This is such common knowledge that I feel kind of cringe even posting about this, but I am not being given a choice, nor a choice in this edit.
Edit: If anybody wants to chat with somebody who has had his organism compromised by what is, very deniably, an intelligence agency (or a system/organization adjacent to an intelligence agency), just reply to this comment. They have not treated me too poorly.
More interesting than the fact that ChatGPT was used was seeing all the specific examples of the types of work this individual was doing.
Very creepy on the part of OpenAI. Glad I don't use ChatGPT.
The amount of information about everything that people are giving OpenAI is astronomical, information that was previously kept closely guarded is now just freely flowing through foreign servers.
Truly a paradise for American intelligence. I would have expected Chinese officials to be briefed on not using US tech companies, but opsec is hard to teach, and even harder to follow consistently.
I remember a while back when a few cars with CCP decals were driving around SoCal to intimidate some dissidents!
> “This is what Chinese modern transnational repression looks like,” Ben Nimmo, principal investigator at OpenAI, told reporters ahead of the report’s release. “It’s not just digital. It’s not just about trolling. It’s industrialized. [...]
There's something poetic about OpenAI being asked to comment on mis-use of their slop generator, and their answer is composed entirely of AI slop.
Crazy to me that Chinese officials use ChatGPT to discuss sensitive operations lmao
I'm assuming they would not disclose such campaigns by the US government.
I can't imagine the amount of government secrets, trade secrets, business plans, personal secrets, etc that people divulge on there.
I kinda get the impression this was from 2023. Also, it is not clear what this dissident did; it's hard to evaluate whether I should care without knowing that.
> “It’s not just digital. It’s not just about trolling. It’s industrialized. It’s about trying to hit critics of the CCP [Chinese Communist Party] with everything, everywhere, all at once.”
This seems to be the source report: https://openai.com/index/disrupting-malicious-ai-uses/ (since it would of course kill CNN, like almost all media outlets, to link to a non-affiliated primary source...)
Does this level of detail seem strange to anybody else? Shining such a strong light on OpenAI's moderation/manual review efforts seems like it would draw unwanted attention to the fact that ChatGPT conversations are anything but private, and seems somewhat at odds with their recent outrage about the subpoena for user chats in the NYT case.
Manual reviews of sensitive data are ok as long as their own employees are the reviewers, I suppose?