Hacker News

keeda · last Thursday at 8:37 AM · 2 replies

The memory is definitely sort of a moat. As an example, I'm working on a relatively niche problem in computer vision (small, low-resolution images) and ChatGPT now "knows" this and tailors its responses accordingly. With other chatbots I need to provide this context every time, or else I get suggestions oriented towards the most common scenarios in the literature, which don't work at all for my use-case.

That may seem minor, but it compounds over time and it's surprising how much ChatGPT knows about me now. I asked ChatGPT to roast me again at the end of last year, and I was a bit taken aback that it had even figured out the broader problem I'm working on and the high level approach I'm taking, something I had never explicitly mentioned. In fact, it even nailed some aspects of my personality that were not obvious at all from the chats.

I'm not saying it's a deep moat, especially for the less frequent users, but it's there.


Replies

JumpCrisscross · last Thursday at 9:22 AM

> may seem minor, but it compounds over time and it's surprising how much ChatGPT knows about me now

I’m not saying it’s minor. And one could argue first-mover advantages are a form of moat.

But the advantage is limited to those who have used ChatGPT. For anyone else, it doesn’t apply. That’s different from a moat, which tends to be more fundamental.

irishcoffee · last Thursday at 1:59 PM

Sounds similar to how psychics work: observing obvious facts and pattern matching. Except in this case you made the job super easy for the psychic, because you gave it a _ton_ of information, instead of the psychic having to infer from the clothes you wear, your haircut, hygiene, demeanor, facial expressions, etc.
