>behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user’s stated age.
Surely they're using "history of the user-inputted chat" as a signal and just choosing not to highlight that? Because that would make it so much easier to predict age.
Everyone's saying this is for advertising, but I don't think it is. It's so they can let ChatGPT sext with adults.
Hard no. It's so easy to get "flagged" by opaque systems for "Age verification" processes or account lockouts that require giving far too much PII to a company like this for my liking.
> Users who are incorrectly placed in the under-18 experience will always have a fast, simple way to confirm their age and restore their full access with a selfie through Persona, a secure identity-verification service.
Yeah, my 15-year-old LinkedIn account (I was a paid Pro user for several years) got flagged for verification with this same company as the backend provider. No reason was ever given; I rarely used it for anything other than interacting with recruiters. They wouldn't accept a (super-invasive-feeling) full facial scan plus a REAL ID - they also wanted a passport. So I opted out of the platform. There was no one to contact - it wasn't "fast" or "easy" at all. This kind of behavior feels like a data grab for more nefarious actors and data brokers further downstream of these services.
Pretty cool. Evidence that you can do whatever you want under the banner of 'protecting the kids.'
Do regulators really care about a predicted age? I feel like they require hard proof of being of age before showing explicit content. The only ones that care about a predicted age are advertisers.
This could have the unintended consequence of encouraging under-age users to ask more 'adult' questions to try to trick it into thinking they're adults. It's analogous to the city that wanted to get rid of rats, so it offered a bounty for every dead rat, and to the surprise of nobody except the policy makers, the city ended up with more rats, not fewer. (Lesson: they thought they were incentivising fewer rats, but unintentionally incentivised more.)
The padding in OpenAI's statement is easy to see through:
> The model looks at a combination of behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user’s stated age.
(The only real signal here is 'usage patterns' - AKA the content of conversations. The other variables are obfuscation to soften the idea that OpenAI will be poring over users' private conversations to figure out whether they're over or under age.)
Worth also noting that 'neutered' AI models tend to be less useful. Example: older Stable Diffusion models were preferred over newer, neutered ones: https://www.youtube.com/watch?v=oFtjKbXKqbg&t=1h16m
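To make that point concrete, a classifier over the signals OpenAI lists could be as simple as a weighted score. Everything below - the feature names, weights, and thresholds - is invented for illustration; OpenAI has published nothing about its actual model.

```python
# Toy sketch of a signal-based age classifier over the kinds of features
# OpenAI names (account age, active hours, usage patterns, stated age).
# All weights and feature definitions here are made up for illustration.
import math
from typing import Optional

def predict_minor_probability(
    account_age_days: float,
    late_night_activity: float,     # fraction of messages sent 22:00-06:00
    schoolwork_topic_ratio: float,  # fraction of chats that look like homework help
    stated_age: Optional[int],
) -> float:
    """Combine weak signals into a probability that the user is under 18."""
    # Invented weights: newer accounts, heavy late-night use, and lots of
    # homework-style chats nudge the score toward "minor".
    score = (
        -0.002 * account_age_days
        + 2.0 * late_night_activity
        + 3.0 * schoolwork_topic_ratio
    )
    # A stated age shifts the score but doesn't override behavior entirely.
    if stated_age is not None:
        score += -1.5 if stated_age >= 18 else 1.5
    return 1.0 / (1.0 + math.exp(-score))  # logistic squash to [0, 1]

# A long-lived account with daytime, work-style usage scores low,
# while a fresh account full of late-night homework chats scores high.
print(predict_minor_probability(5 * 365, 0.05, 0.0, stated_age=40))
print(predict_minor_probability(30, 0.6, 0.7, stated_age=None))
```

Note that in a model like this, "usage patterns" (the content-derived features) dominate the outcome, which is exactly the point: the innocuous-sounding account metadata contributes little on its own.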
> Viral challenges that could encourage risky or harmful behavior in minors
Why would it encourage this for anyone?
>young people deserve technology that both expands opportunity and protects their well-being.
Then maybe OpenAI should just close shop, since (SaaS) LLMs do neither in the mid to long term.
They're trying to make ChatGPT more attractive to advertisers.
Considering that OpenAI is having trouble getting its models to avoid recommending suicide (something it probably does not want for ANY user), I rather doubt this age prediction is going to be that helpful for curbing the tool's behavior.
How long before a phrase is found that causes a predicted birthdate of 1970/01/01 ?
When we look at how fast and coordinated the rollout of age verification has been around the globe, it's hard not to wonder if there was some impetus behind it.
There are dark sides to the rollout that EFF details in their resource hub: https://www.eff.org/issues/age-verification
There is a confluence of surveillance capitalism and a global shift towards authoritarianism that makes it particularly alarming right now.
It's nonsense and doesn't work. They "age predicted" my account a couple of months back, saying I'm under 18, while I'm a man in my 40s who uses ChatGPT mostly for work-related stuff - nothing that would indicate it's someone under 18. So now they are asking for a government ID to prove it. Yeah, no thanks.
Creepy people doing creepy things.
I think this is good.
I've been very aggressive toward OpenAI on here about parental controls and youth protection, and I have to say the recent work is definitely more than I expected out of them.
"it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail."
See it starts with gender, and if (user.gender === "Female") user.age = 29.
After that, the algorithm gets very complex and becomes a black box. I'm sure they spent billions training it.
Looks like an elegant solution. And yes, demographics are useful for advertising.
This title gave me a weird feeling as if they were going to predict my own age.
It feels like OpenAI is moving into the extraction phase far too soon. They are making their product less appealing to end users with ads and aggressive user-data gathering (which is what this really is). Usually you have to be very secure in your position as a market segment owner before you start with the anti-consumer moves, but they are rapidly losing market share, and they have essentially no moat. Is the goal just to speed-run an IPO before they lose their position?
The Minority Report vibes are getting stronger by the minute.
I imagine they're building this system with the goal of extracting user demographics (age, sex, income) from chat conversations to improve advertising monetization.
This seems to be a side project of that goal and a good way to calibrate the future ad system's predictions.
> typical times of day when someone is active, usage patterns over time,
> Users [...] will always have a [...] simple way to confirm their age and restore their full access with a selfie through Persona, a secure identity-verification service.
Nice, so now your most secret inner talk with the LLM can be directly associated with your face and ID. Get ready for the fun moment when Trump decides he needs to see your discussions with the AI when you cross the border or piss him off...
Age detection was already very effective with Leisure Suit Larry 3 age questions.
https://allowe.com/games/larry/tips-manuals/lsl3-age-quiz.ht...
Wait, I don't understand this. Does it mean they can erroneously predict I'm a minor and covertly restrict my account without my knowing? I guess it's time to cancel my subscription.
I just asked ChatGPT. "Based on everything I've asked, how old do you think I am?" It was dead-on with its answer. It guessed 30-35. I'm 32.
That was just a spur-of-the-moment question. I've been using ChatGPT for over six months now.
In case it wasn't clear, LLM conversations are being analyzed in much the same way as social media advertising profiles...
"Q: ipsum lorem
ChatGPT: response
Q: ipsum lorem"
OpenAI: take this user's conversation history and insert the optimal ad that they're susceptible to, to make us the most $
OpenAI are liars. I have all the privacy settings on, and it still assumes things about me that it would only do if it knew all my previous conversations.
Their teen content restrictions:
> ChatGPT automatically applies additional protections designed to reduce exposure to sensitive content, such as:
* Graphic violence or gory content
* Viral challenges that could encourage risky or harmful behavior in minors
* Sexual, romantic, or violent role play
* Depictions of self-harm
* Content that promotes extreme beauty standards, unhealthy dieting, or body shaming
That wording implies the list isn't comprehensive, but hate and disinformation are omitted.

Let's be honest: to protect the children, big tech will put everyone under suspicion of being one. And the issue is not how they use the technologies they have, because they have a moral responsibility to use them safely, but that the technologies are not ours.

What I wonder lately is how an adult can be empowered by tech to bear the consequences of their actions, and the answer usually is that we cannot. We don't have the means of production, in the literal Marxist sense of the phrase, and we are being shaped by outside forces that define what we can do with ourselves. And it does not matter whether those forces are benevolent or not; it matters that it is not us.

Winter is coming and we are short on thermal underwear.

The Chinese open models being a reason for hope is just a very sad joke.
Well, I hope it's better than Spotify's age prediction, which concluded that I'm 87 years old.

Seriously though, this is the most easily gameable thing imaginable; teens are surely clever enough to figure out how to pretend to be adults. If you've concluded that your product is unsuited for kids, implement actual age verification instead of these shoddy stochastic surveillance systems. There's a reason "his voice sounded deep" isn't going to work for the cashier who sold kids booze.
This is 100% for advertising, not user safety.
It's absolutely crucial for effective ad monetization to know a user's age: significant avenues are closed down by legislation like COPPA and similar laws around the world, which severely limit which users can even be shown ads, the kinds of ads they can be shown, and whether data can be collected for profiling and targeting.
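As a rough sketch of the gating an ad system has to implement under such rules (thresholds simplified - COPPA's actual cutoff is under 13 in the US, and other jurisdictions vary; this is an illustration of the general shape, not any real system's logic or legal advice):

```python
# Simplified sketch of age-based ad gating. The tiers loosely mirror
# COPPA (under 13: no behavioral profiling of children) and GDPR-style
# minor protections; real rules are jurisdiction-specific.
from dataclasses import dataclass

@dataclass
class AdPolicy:
    serve_ads: bool            # may any ads be shown at all?
    personalized: bool         # may ads be targeted to this user?
    collect_profile_data: bool # may behavioral data be gathered for profiling?

def ad_policy_for(predicted_age: int) -> AdPolicy:
    if predicted_age < 13:
        # COPPA territory: no profiling, and most networks refuse to serve.
        return AdPolicy(serve_ads=False, personalized=False,
                        collect_profile_data=False)
    if predicted_age < 18:
        # Minors: contextual ads only, no profiling in many jurisdictions.
        return AdPolicy(serve_ads=True, personalized=False,
                        collect_profile_data=False)
    # Adults: the full, most lucrative ad treatment.
    return AdPolicy(serve_ads=True, personalized=True,
                    collect_profile_data=True)
```

Seen this way, every user pushed above the 18 line is directly worth more ad revenue - which is the commenter's point about where the incentive sits.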
For some reason ChatGPT has suddenly started thinking I'm a teen. Every answer starts out "Since you are a teen, I will..." and it prompts me to upload an ID to prove my age. I'm 35.