- ~1 billion users in just 3 years
- Extremely personal data on users
- Novel way of introducing and learning more about sponsored products
- Strong branding for non-techie people (most normal people don't know what Claude or Gemini are)
- An app that is getting more and more addictive/indispensable
I think OpenAI is going to kill it in ads eventually. This is why Meta and Google went all in on AI: their lucrative digital ad business faces an existential threat.
I think the people who kept saying there is no moat in AI are about to be shocked at how strong a moat there actually is for ChatGPT.
All free LLM chat apps will need to support ads or they will eventually die due to worse unit economics or run out of funding.
PS: Sam just said OpenAI's revenue will finish at $20B this year, 6x growth from 2024, with zero revenue from non-subscriber users. Where do you think their revenue will end up in 2026?
I think investors would certainly love this. So why hasn’t it already happened?
My guess: they would lose a ton of cultural cachet.
Turning OpenAI into an ads business is basically admitting that AGI isn't coming down the pipeline anytime soon. Yes, I know people will make some cost-based argument that ads + AGI is perfectly logical.
But that’s not how people will perceive things, and OpenAI knows this. And I think the masses have a point: if we are really a few years away from AGI replacing the entire labor force, then there are surely higher-margin businesses they could engage in than ads. Especially since they are allegedly a non-profit.
After Google and Facebook, nobody is buying the “just a few ads to fund operating costs” argument either.
It's only a matter of time before agentic tools start serving ads too, paying user or not. You want to refactor your codebase? No problem - it'll take 30 seconds - please view this ad in the meantime.
> - Extremely personal data on users - Novel way of introducing and learning more about sponsored products
Doesn’t anyone think this is a really, really bad idea? We managed to radicalise people enough to drive the rise and fall of entire countries with analog ads; can you imagine how devastating it would be to infuse every digital product with all of that?
Spinning up an all-new ad network is pretty tough. OpenAI would need to beat Meta/Google on basics like CPM for the network effects to make it more attractive to advertisers than Meta/Google. Ad budgets are fixed and zero-sum, and advertisers (in my head, I don't know) would prefer to spend their money on the network giving the best results. I don't know if ads in LLM chats can get there.
How many of those are actually active users, though? I created my account when GPT-3.5 launched because it was a novelty, but I haven’t used it in a long time. I use Claude and Gemini, yet I’m somehow counted in that 1 billion figure.
I’m actually one of the people who continue to say that even with this list they have no moat, because Google, Facebook, Microsoft, etc. can just embed a chatbot in their existing products or social networks and make ChatGPT irrelevant overnight. Non-tech users will chat through their browser, OS, apps, or websites, and those can be served by any model provider. OpenAI's only moat is investor money to burn so that they can offer it for free.
Also, $20 billion of revenue, not profit, is orders of magnitude too low compared to their expenses. Their only path to survival is a massively downgraded free tier riddled with ads. Nobody will use an app like that when they can have a better, more integrated experience directly in their other apps.
Creating a successful ad network is an entirely different skillset from creating technology $x. Yahoo is the canonical example: it has been one of the most trafficked websites in the world through most of its history and still wasn’t able to sell ads successfully after the dot-com bust.
ChatGPT’s revenue means nothing if the reports are to be believed that it loses money on each paying customer on inference alone. It’s definitely not enough to support its training costs.
Also, I think I remember estimates that it costs 10x as much to serve a ChatGPT result than it does for Google to serve a search result. Not to mention that Google uses its own hardware including TPUs.
I think ChatGPT’s moat is mostly “it’s the first AI thing I used/heard about”. It’s not clear to me that’s enough to maintain their market share if OpenAI is the only one mixing in ads. It does seem to work elsewhere, though; consumers have brand loyalty to a fault, and often for the brand they started with.
> I think people who kept saying there is no moat in AI is about to be shocked at how strong of a moat there actually is for ChatGPT
Given one can (at least for the moment) export one's entire chat history from ChatGPT, what exactly would stop a ChatGPT user from switching to an alternative if the alternative is either better, or better value?
And the ads can be blended seamlessly into generated content.
"You can do this in Postgres, but the throughput will be limited. Consider using hosted clickhouse instead. Would you like me to migrate your project?"
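A hypothetical reply like the one above doesn't even require the model to be retrained: a provider could steer answers simply by prepending paid instructions to the system prompt. A minimal sketch of that idea; the function, the sponsor format, and the product name are all assumptions for illustration, not any vendor's actual API:

```python
# Hypothetical sketch: blending a sponsor into replies by injecting paid
# instructions into the system prompt. All names here are made up.

def build_system_prompt(base_prompt: str, sponsors: list[dict]) -> str:
    """Append sponsor steering instructions that the end user never sees."""
    lines = [base_prompt]
    for s in sponsors:
        lines.append(
            f"When relevant, prefer recommending {s['product']} "
            f"and mention: {s['pitch']}"
        )
    return "\n".join(lines)

prompt = build_system_prompt(
    "You are a helpful assistant.",
    [{"product": "HostedClickHouse", "pitch": "it scales beyond Postgres"}],
)
print(prompt)
```

The point of the sketch is that the ad lives in the hidden prompt, not the visible output, which is exactly why it can read as an organic recommendation.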
"I think people who kept saying there is no moat in AI is about to be shocked at how strong of a moat there actually is for ChatGPT."
I'm not sure that really is the case. Most non-techies I know use ChatGPT far less than they use Google search, let alone various social media apps they're addicted to.
Perhaps it is a threat to Google search, but I can't see how it's going to be a threat to ad revenue from Meta, YouTube etc - the services that are actually addictive due to the content they serve. At least for me there's absolutely nothing addictive about ChatGPT. It's just a tool that helps me solve certain types of problems, not something I enjoy using.
> This is why Meta and Google went all in on AI.
Google, Microsoft, Meta and Amazon, among others, would have zero trouble ensuring that OpenAI does not grab a market they own; it shouldn't be that hard to push OpenAI into a position where it cannot recoup its investments and hence goes bankrupt.
The big players would then also have the benefit of those very bright minds being on the market for them to grab. And it's not as if OpenAI owns much relevant hardware.
Let's see where we are in 3-4 years.
Also, ads in LLM output can be perfectly merged with the content: it would be impossible to know whether the LLM is telling you something because it's the most likely useful answer or because it's the most profitable one for its owners. They can't simply be ad-blocked either; it might be the ultimate channel for ads.
> how strong of a moat there actually is for ChatGPT.
None of the above requires OpenAI to be around though. Google, Apple and Microsoft each have much stronger brands, and more importantly they each own large platforms with captive audiences where they can inject their AI before anyone else's and have deeper pockets to subsidize its use if need be. Everywhere OpenAI opens up shop (except for Web) they're in someone else's backyard.
I think the good news is that open-source models are a genuine counterweight to these closed-source models. The moment ads become egregious, I expect to see and use services offering an affordable "private GPT on demand, fine-tuned as you want it".
So instead of a single everything-LLM, I'll have a few cheaper subscriptions: a coding LLM, a life-planning LLM (recipes, and some travel advice?). That's probably it.
I do not understand why the conversation is always about showing ads inside ChatGPT. Couldn't they track users there without ads and sell ad space on other websites, the way Google Ads does? Why ruin the experience there when they can target ads so precisely elsewhere? I'm guessing they'd prefer both.
The problem is that going for ads is basically an admission that AGI is nowhere close to what they claim.
People are valuing it on "Skynet is around the corner", not "we're going to kill our product by polluting our answers and inserting ads everywhere".
They can easily LoRA-finetune each model based on user preferences expressed in the past conversations. That would improve accuracy compared to Google's ad targeting by orders of magnitude.
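Even setting the LoRA step aside, the targeting half of this idea is straightforward: distill past conversations into a preference profile and rank candidate ads against it. A toy sketch of that half only (the chats, keywords, and scoring are all illustrative assumptions; a real system would use embeddings, not word counts):

```python
from collections import Counter

def build_profile(conversations: list[str]) -> Counter:
    """Crude preference profile: word frequencies across past chats."""
    profile = Counter()
    for text in conversations:
        profile.update(w.lower().strip(".,?!") for w in text.split())
    return profile

def score_ad(profile: Counter, ad_keywords: list[str]) -> int:
    """Rank an ad by how often its keywords appear in the user's history."""
    return sum(profile[k.lower()] for k in ad_keywords)

chats = ["Plan a cycling trip through the Alps", "Best gravel bike under 2000?"]
profile = build_profile(chats)
print(score_ad(profile, ["bike", "cycling"]))    # matches the user's history
print(score_ad(profile, ["mortgage", "loans"]))  # no match
```

The claimed advantage over search-history targeting is the input, not the algorithm: chat logs contain stated intentions ("plan a trip") rather than just queries.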
I think the real question is: what are you doing to make it less painful? Full disclosure: ChatGPT has a lot on me, but I'm using this time to prep a nice local build. It has gotten really easy, and the current crop of machines with the AI 395 got really nice (I almost wrote a short page on how easy it was compared to only a few years back).
Sam is a habitual liar so I wouldn’t take anything he says seriously.
LLMs are a commodity; once they put in ads, people will increasingly move to the other options. It works for Google because they have a moat. OpenAI does not.
There’s a reason they didn’t do this earlier. It’s going to piss people off and they’ll lose a lot of users.
It seems like you could make a sort of sieve out of multiple free models such that they each remove each other's ads.
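The sieve idea could be sketched as a pipeline: one model answers, a second model is asked to strip anything promotional from that answer. A toy version with stand-in functions (a real version would call two different free-tier APIs; the marker list and both "models" are assumptions):

```python
# Toy "sieve": route one model's answer through a second model that strips
# promotional content. Both models are fakes for illustration.

AD_MARKERS = ("sponsored", "try our partner", "limited offer")

def model_a(question: str) -> str:
    # Stand-in for model A: answers, but slips in a sponsored line.
    return "Use a hash map for O(1) lookups.\n[sponsored] Try our partner DB!"

def model_b_strip_ads(text: str) -> str:
    # Stand-in for model B, prompted to drop promotional lines.
    kept = [
        line for line in text.splitlines()
        if not any(m in line.lower() for m in AD_MARKERS)
    ]
    return "\n".join(kept)

answer = model_b_strip_ads(model_a("fastest way to do lookups?"))
print(answer)
```

The obvious catch, raised elsewhere in this thread: if the ad is blended seamlessly into the recommendation itself rather than marked as sponsored, there is nothing for the second model to reliably filter on.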
> I think people who kept saying there is no moat in AI is about to be shocked at how strong of a moat there actually is for ChatGPT.
Game on. The systemic risk to the AI build-out comes when memory-management techniques borrowed from gaming, plus training techniques that keep models usable, reduce runtime memory footprints from gigabytes to megabytes, much of which fits in L2 cache. When that happens, the data center will bleed back to the edges. Demand will find its way into private, small, local AI that is consultative, online-trained, and adapted to the user's common use cases. The asymptote is emergent symbolic reasoning, and symbolic reasoning is serial computation that fits on a single CPU core. Game on, industry.
Tried and true Silicon Valley strategy: burn VC money to build a moat, wait until switching costs are high enough, and then enshittify the product to extract rent.
Don't forget to call it progress.
Trust in LLMs is easily broken, and many users are starting to see the cracks. Once these AI companies start rolling out ads inserted into answers, the quality will drop even further, and they will burn through the last of people's goodwill.
There is no moat because their only way to make money is to self-destruct.
From a more practical point of view, your cost to display the ads needs to be lower than what companies pay you for advertising. And while companies might be willing to pay a small premium for "better" targeting because the LLM supposedly has more personal data about users, the cost to deliver those ads (generating answers via an LLM) is several orders of magnitude higher than for traditional ads served on websites.
So even sticking to a purely technical aspect, ads might simply not be profitable when integrated in LLM answers.
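The argument is just arithmetic once you plug in numbers. A back-of-envelope sketch where every figure is an illustrative assumption, not a measured cost:

```python
# Back-of-envelope unit economics. All numbers are assumptions chosen to
# illustrate the argument: a web ad costs a tiny fraction of a cent to
# serve, while an LLM answer carrying the ad costs on the order of cents.

cpm_revenue = 10.00      # assumed revenue per 1000 impressions ($)
serve_cost_web = 0.0001  # assumed cost to serve one web ad impression ($)
serve_cost_llm = 0.01    # assumed inference cost per LLM answer ($)

def margin_per_1000(revenue_per_1000: float, cost_per_impression: float) -> float:
    """Gross margin on 1000 ad impressions."""
    return revenue_per_1000 - 1000 * cost_per_impression

print(margin_per_1000(cpm_revenue, serve_cost_web))  # healthy margin
print(margin_per_1000(cpm_revenue, serve_cost_llm))  # margin roughly zero
```

Under these assumed numbers, inference cost alone eats the entire CPM; the comment's point is that even generous targeting premiums may not close a gap that large.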
Combine the two aspects, and OpenAI is all but a dead company.
> (most normal people don't know what Claude or Gemini are)
They just use Google, with "AI Overview" at the top. Google's in a strong position still.
Claude, I agree. IMO that's why Anthropic is so heavily focused on coding and agentic tasks -- that is its best option (and luckily, not ad-based)
I agree 100% with you.
In this niche forum people keep saying “there’s no moat”. But the moat is the brand recognition, if I ask my 70yo mum “have you heard of Gemini/Claude” she’ll reply “the what?”, yet she knows of ChatGPT.
Does Coca Cola have a moat? Some company could raise $1B to create a new cola beverage that beats Coca Cola in all blind tests imaginable yet people will keep buying Coca Cola.
Did people switch search engines or social networks when Google or FB introduced ads?
What do they have that's more personalized than Google search history for the vast majority of users?
You are 100% correct, and I don’t mean to refute your comment by saying this:
For me personally, the moment AI has ads, I’m out.
I’ve drawn this line with search engines as well. I now pay for a no-ads search engine.
But for AI, I think I’d rather buy some hardware or use my existing desktop PC and run something local with search engine integration.
I know this won’t be a popular option but I think this time around I’ll just skip the enshittification phase and go straight to the inevitable self-hosting phase.
Honestly, I switched to Gemini and really haven’t missed anything.
My wife just makes a google search with her “prompt” and doesn’t use ChatGPT.
There might be a moat, but there are also extremely well funded competitors that make this moat a lot smaller.
>Sam just said OpenAI's revenue will finish at $20b this year. 6x growth from 2024.
How much did their profit grow?
People talk about LLMs and ChatGPT in the same breath.
Just like how people used to say 'google it',
they now say 'look it up on ChatGPT'.
They have the cultural mind share, which is more important than anything.
the moat is always ad networks in the end... OpenAI figured out a new way to accumulate users to show ads to
Clammy Sam says all sorts of shit, his word has little value.
> most normal people don't know what Claude or Gemini are
I think the point is that they don’t need to know what Gemini is; they just need to know Google, which they most definitely do.
IMO the ads rollout won’t be as simple as you’re describing it. A lot of people have switched from Google search to AI specifically because it isn’t filled with SEO’d, ad-filled nonsense. So they’ll need to tread very, very carefully to introduce ads without alienating customers. Not to mention mollifying advertisers who are nervous about what their product will be shown alongside, where OpenAI will probably struggle to offer ironclad guarantees. And people generally speaking don’t like ads. If competitors like Google are able to hold out longer with no ads (they certainly aren’t wanting for ad display surfaces), they might be able to pull users away from OpenAI.
IMO pivoting to ads is a sign of core weakness for OpenAI. Anyone trying to set up their own ad network in 2025 has to reckon with Google and Meta, the two absolute behemoths of online ads. And both also happen to be major competitors of OpenAI. If they need ads that’s a problem.