Their moat in the consumer world is the branding and the fact that OpenAI has 'memory' which you can't migrate to another provider.
That means responses can be far more tailored: it knows what your job is, knows where you go with friends, knows that when you ask about 'dates' you mean romantic relationships (and which ones are going well or badly), not the fruit, etc.
Eventually, when they make it work better, OpenAI can be your friend and confidant, and you wouldn't dump a friend of many years to make a new friend without good reason.
What kind of a moat is that? I think it only works in abusive relationships, not consumer economies. Is OpenAI's model being an abusive, money-grubbing partner? I suppose it could be!
But Google has your Gmail inbox, your photos, your maps location history…
> Their moat in the consumer world is the branding and the fact that OpenAI has 'memory' which you can't migrate to another provider
This sounds like first-mover advantage more than a moat.
You can prompt the model to dump all of its memory into a text file and import that.
In the onboarding flow, I can ask you: "Do you use another LLM? If so, give it this prompt and then give me the memory file it outputs."
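That import step is trivial to prototype. Here's a minimal sketch, assuming the user has already pasted an export prompt into their old assistant and saved the reply to a text file; the file name, prompt wording, and model are illustrative, and the OpenAI Python SDK just stands in for whichever provider is doing the importing:

```python
# Sketch of a "memory import" onboarding step. Assumes the user pasted an
# export prompt into their previous assistant and saved the reply to a file.
# File name, prompt wording, and model choice are illustrative.
from pathlib import Path

from openai import OpenAI  # pip install openai

# Prompt the user gives to their old assistant (wording is an assumption):
EXPORT_PROMPT = (
    "Write down everything you remember about me from our conversations: "
    "my job, preferences, ongoing projects, and standing instructions. "
    "Plain text only."
)

def memory_as_system_message(memory_file: str = "memory_export.txt") -> dict:
    """Wrap a memory dump from another provider as a system message."""
    memory = Path(memory_file).read_text()
    return {
        "role": "system",
        "content": "Facts about the user, imported from a previous assistant:\n" + memory,
    }

client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        memory_as_system_message(),
        {"role": "user", "content": "Any dinner ideas for tonight?"},
    ],
)
print(reply.choices[0].message.content)
```

Nothing about the "memory" is provider-specific once it's plain text; it's just context you prepend.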
> Their moat in the consumer world is the branding and the fact that OpenAI has 'memory' which you can't migrate to another provider.
Branding isn't a moat when, as far as the mass market is concerned, you are 2 years old.
Branding is a moat when you're IBM, Microsoft, or (more recently) Google, Meta, etc.
It's certainly valuable, but you can ask Digg and MySpace how secure being the first mover is. I can already hear my dad telling me he is using Google's ChatGPT...
> Their moat in the consumer world is the branding and the fact that OpenAI has 'memory' which you can't migrate to another provider.
Their 'memory' is mostly unhelpful and gets in the way. At best it saves you from prompting some context, but more often than not it adds so much irrelevant context that it overfits responses so hard it makes them completely useless, especially in exploratory sessions.
I just learned Gemini has "memory" because it mixed its response to a new query with a completely unrelated query I had made beforehand, despite my creating separate chats for them. It responded as if they were the same chat. Garbage.
Couldn't you just ask it to write down what it knows about you and copy-paste that into another provider?
I really think this memory thing is overstated on Hacker News. This is not something that is hard to move at all. It's not a moat. I don't think most users even know memory exists outside of a single conversation.