> We’re continuing to make progress toward a version of ChatGPT designed for adults over 18, grounded in the principle of treating adults like adults, and expanding user choice and freedom within appropriate safeguards. To support this, we’ve rolled out age prediction for users under 18 in most markets. https://help.openai.com/en/articles/12652064-age-prediction-...
interesting
>We brought GPT‑4o back after hearing clear feedback from a subset of Plus and Pro users, who told us they needed more time to transition key use cases, like creative ideation, and that they preferred GPT‑4o’s conversational style and warmth.
This supports the idea that OpenAI doesn't make models sycophantic as a deliberate ploy to butter users up so that they use the product more; it's because people actually want AI to talk to them like that. To me, that's insane, but they have to play the market, I guess.
ChatGPT 5.2 has been a good motivator for me to try out other LLMs because of how bad it is. Both 5.1 and 5.2 have been downgrades in terms of instruction following and accuracy, but 5.2 especially so. The upside is that it's had me using Claude much more, and I like a lot of things about it, both in terms of UI and the answers. It's also gotten me more serious about running local models. So, thank you OpenAI, for forcing me to broaden my horizons!
After they raised the limits on the Thinking models to 3,000 per week, I haven't touched anything else. I'm really satisfied with their performance, and the 200k context window is quite nice.
I'd been using Gemini exclusively for the 1 million token context window, but went back to ChatGPT after they raised the limits, and created a Project system for myself that gives me much better organization: Projects + Thinking-only chats (big context) + project-only memory.
Also, it seems like Gemini is really averse to googling (which is ironic in itself), while ChatGPT, at least in the Thinking modes, loves to look up current, correct info. If I ask something a bit more involved in Extended Thinking mode, it will think for several minutes and look up more than 100 sources. It's really good, practically a Deep Research inside a normal chat.
I noticed how ChatGPT got progressively worse at helping me with my research. I gave up on ChatGPT 5 and just switched to Grok and Gemini. I couldn't be happier that I switched.
> [...] the vast majority of usage has shifted to GPT‑5.2, with only 0.1% of users still choosing GPT‑4o each day.
Would be cool if they'd release the weights for these models so users could keep using them locally.
will this nuke my old convos?
Opus 4.5 is better than GPT at everything except code execution (though with Pro you get a lot of Claude Code usage), and if they nuke all my old convos I'll probably downgrade from Pro to free.
There will be a lot of mentally unwell people unhappy with this, but this is a huge net positive decision, thank goodness.
Which one is the AI boyfriend model? Tumblr, Twitter, and Reddit will go crazy.
Sora + OpenAI voice Cloning + AdultGPT = Virtual Girlfriend/Boyfriend
(Upgrade for only 1999 per month)
Two weeks' notice to migrate to a different style of model (“normal” 4.1-mini to reasoning 5.1) is bad form.
OK, everyone is (rightly) bringing up that relatively small but really glaringly prominent AI boyfriend subreddit.
But I think a lot more people are using LLMs as relationship surrogates than that (pretty bonkers) subreddit would suggest. Character AI (https://en.wikipedia.org/wiki/Character.ai) seems quite popular, as do the weird fake-friend things in Meta products, and Grok’s various personality modes and very creepy AI girlfriends.
I find this utterly bizarre. LLMs are peer coders in a box for me. I care about Claude Code, and that’s about it. But I realize I am probably in the vast minority.
5.2 is back to being a sycophantic, hallucinating mess for most use cases. I've anecdotally caught it out in many of my sessions, where it apologizes with "You're absolutely right... that used to be the case, but as of the latest version, as you pointed out, it no longer is" about something that never existed in the first place. It's just not good.
On the other hand - 5.0-nano has been great for fast (and cheap) quick requests and there doesn't seem to be a viable alternative today if they're sunsetting 5.0 models.
I really don't know how they're measuring improvements in the model since things seem to have been getting progressively worse with each release since 4o/o4 - Gemini and Opus still show the occasional hallucination or lack of grounding but both readily spend time fact-checking/searching before making an educated guess.
I've had ChatGPT blatantly lie to me and say there were several community posts and Reddit threads about an issue. After I failed to find them, I asked it where it found those, and it flat-out said, "oh yeah, it looks like those don't exist."
I wish they would keep 4.1 around for a bit longer. One of the downsides of the current reasoning based training regimens is a significant decrease in creativity. And chat trained AIs were already quite "meh" at creative writing to begin with. 4.1 was the last of its breed.
So we'll have to wait until "creativity" is solved.
Side note: I've been wondering lately about a way to bring creativity back to these thinking models. For creative writing tasks you could add the original, pretrained model as a tool call. So the thinking model could ask for its completions and/or query it and get back N variations. The pretrained model's completions will be much more creative and wild, though often incoherent (think back to the GPT-3 days). The thinking model can then review these and use them to synthesize a coherent, useful result. Essentially giving us the best of both worlds. All the benefits of a thinking model, while still giving it access to "contained" creativity.
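The pipeline described above could be sketched roughly like this. Everything here is hypothetical scaffolding: `sample_base_model` stands in for a raw pretrained model's completion endpoint exposed as a tool, and `synthesize` stands in for the thinking model's review-and-rework step (reduced here to a toy scoring heuristic).

```python
# Hypothetical sketch: a thinking model calls a raw pretrained model as a
# tool to harvest N wild drafts, then reviews them and keeps the best one.
import itertools
from typing import Callable, List

def creative_variations(prompt: str,
                        sample_base_model: Callable[[str], str],
                        n: int = 4) -> List[str]:
    """Tool call: ask the (wilder, less coherent) base model for N drafts."""
    return [sample_base_model(prompt) for _ in range(n)]

def synthesize(variations: List[str],
               score: Callable[[str], float]) -> str:
    """Stand-in for the thinking model reviewing drafts and keeping the
    most promising one to rework into a coherent result."""
    return max(variations, key=score)

# Usage with stub components (no real model involved):
drafts = itertools.cycle(["a wild idea", "an incoherent ramble", "a spark"])
result = synthesize(
    creative_variations("write an opening line", lambda p: next(drafts), n=3),
    score=len,  # toy heuristic; a real system would let the thinking model judge
)
```

In a real system the scoring and synthesis would themselves be LLM calls; the point of the structure is just that the base model supplies entropy while the thinking model supplies coherence.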
Last time they tried to do this they got huge push back from the AI boyfriend people lol
Oh good, not in the API. 4o-mini is super cheap and useful for a bunch of things I do (evaluating results post-vector-search for relevancy).
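That kind of post-retrieval relevance check might look like the sketch below. This is an assumption about the workflow, not the commenter's actual code, and `ask_cheap_model` is a stub: in practice it would be a single chat-completion call to a small model like 4o-mini.

```python
# Hedged sketch: use a small, cheap model as a yes/no relevance judge over
# vector-search hits. `ask_cheap_model` is a hypothetical stand-in for an
# LLM API call, so this runs without any network access.
from typing import Callable, List

def filter_relevant(query: str,
                    hits: List[str],
                    ask_cheap_model: Callable[[str], str]) -> List[str]:
    """Keep only the hits the judge model answers 'yes' for."""
    kept = []
    for hit in hits:
        prompt = (f"Query: {query}\nDocument: {hit}\n"
                  "Is this document relevant to the query? Answer yes or no.")
        if ask_cheap_model(prompt).strip().lower().startswith("yes"):
            kept.append(hit)
    return kept

# Usage with a toy stub judge that just checks for keyword overlap:
stub = lambda p: "yes" if "rust" in p.lower().split("document:")[1] else "no"
docs = ["Rust borrow checker notes", "banana bread recipe"]
relevant = filter_relevant("rust lifetimes", docs, stub)  # keeps the first doc
```

The yes/no framing keeps per-hit cost down to a handful of output tokens, which is presumably why a nano/mini-tier model is attractive for this step.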
That’s really going to upset the crazies.
Despite 4o being one of the worst models on the market, they loved it. Probably because it was the most insane and delusional. You could get it to talk about really fucked up shit. It would happily tell you that you are the messiah.
I can't see o3 in my model selector either.
RIP
I still don’t know how OpenAI thought it was a good idea to have a model named "4o" AND a model named "o4", unless the goal was intentional confusion.
They will have to update the openai.com footer, I guess:
Latest Advancements
GPT-5
OpenAI o3
OpenAI o4-mini
GPT-4o
GPT-4o mini
Sora
They should open source GPT-4o.
If people want an AI as a boyfriend at least they should use one that is open source.
If you disagree on something, you can also train a LoRA.
Retiring the most popular model for relationship roleplay just one day before Valentine's Day is particularly ironic =) bravo, OpenAI!