> but people have hundreds and thousands of conversations on these apps that can't be easily moved elsewhere.
Except these aren't conversations in the traditional sense. Yes, there's the history of prompts and responses exchanged. But the threads don't build on each other - there's no cross-conversational memory, such as you'd have in a human relationship. Even within a conversation it's mostly stateless, sending the full context history each time as input.
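The statelessness being described can be sketched as code. This is a hedged illustration of the typical chat-completion pattern, not any vendor's actual SDK; `fake_model` is a hypothetical stand-in for the model endpoint.

```python
def fake_model(messages):
    # Stand-in for the model endpoint: it sees ONLY what's in `messages`.
    # Any "memory" must already be present in this list.
    return f"reply to {len(messages)} messages"

# The client keeps the transcript; the server keeps nothing between calls.
history = [{"role": "system", "content": "You are helpful."}]

def send(user_text):
    # Every turn appends to the transcript and resends ALL of it.
    history.append({"role": "user", "content": user_text})
    reply = fake_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

send("Hello")
send("What did I just say?")  # answerable only because "Hello" is resent
# history now holds 5 messages: 1 system + 2 user + 2 assistant
```

The second question is only answerable because the client resent the first exchange; nothing about the conversation lives on the server side.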
So there's no real data or network effect moat - the moat is all in model quality (which is an extremely competitive race) and harness quality (same). I just don't think there's any real switching cost here.
I see people who have conversations spanning months. Rather than starting new threads, they go back to existing ones to continue a topic, and they frequently reference the earlier discussion in those threads.
This would feel like a switching cost for people who use the system that way.
ChatGPT and Gemini both have cross-conversation personalization. I believe it's off by default in the former and on in the latter.
This is not the case.
I use OpenAI a lot on the paid plan via the UI. It now knows absolutely loads about me and seems to have a massive amount of cross conversational memory. It's really getting very close to what you'd expect from a human conversation in this regard.
Sure, the model itself is still stateless, and if you use the API directly then what you say is true.
But they are doing so much unseen summarisation and longer-context building behind the scenes in the web app that what you see in the current conversation history is just a fraction of what actually gets sent to the model.
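That "unseen" injection pattern can be sketched too. This is a speculative illustration of how a harness might prepend stored memory the user never sees; the names (`stored_memory`, `build_model_input`) are hypothetical, not OpenAI's actual pipeline.

```python
# Facts the harness has accumulated from earlier conversations.
stored_memory = "User's name is Sam; prefers metric units."

def build_model_input(visible_history):
    # What reaches the model = hidden memory block + visible transcript.
    hidden = [{"role": "system",
               "content": f"Known facts about the user: {stored_memory}"}]
    return hidden + visible_history

visible = [{"role": "user", "content": "How tall is Everest?"}]
payload = build_model_input(visible)
# The user sees 1 message in the UI; the model receives 2.
```

The gap between `visible` and `payload` is exactly the gap the comment describes: the conversation history shown in the UI understates what the model is actually given.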