The moat is people, data, and compute in that order.
It's not just compute; that has mostly plateaued. What matters now is the quality of the data, which experiments to run, and which environments to build.
This "moat" is actually constantly shifting (which is why it isn't really a moat to begin with). Originally, it was all about quality data sources. But that saturated quite some time ago (at least for text). Before RLHF/RLAIF it was primarily a race who could throw more compute at a model and train longer on the same data. Then it was who could come up with the best RL approach. Now we're back to who can throw more compute at it since everyone is once again doing pretty much the same thing. With reasoning we now also opened a second avenue where it's all about who can throw more compute at it during runtime and not just while training. So in the end, it's mostly about compute. The last years have taught us that any significant algorithmic improvement will soon permeate across the entire field, no matter who originally invented it. So people are important for finding this stuff, but not for making the most of it. On top of that, I think we are very close to the point where LLMs can compete with humans on their own algorithmic development. Then it will be even more about who can spend more compute, because there will be tons of ideas to evaluate.
This "moat" is actually constantly shifting (which is why it isn't really a moat to begin with). Originally, it was all about quality data sources. But that saturated quite some time ago (at least for text). Before RLHF/RLAIF it was primarily a race who could throw more compute at a model and train longer on the same data. Then it was who could come up with the best RL approach. Now we're back to who can throw more compute at it since everyone is once again doing pretty much the same thing. With reasoning we now also opened a second avenue where it's all about who can throw more compute at it during runtime and not just while training. So in the end, it's mostly about compute. The last years have taught us that any significant algorithmic improvement will soon permeate across the entire field, no matter who originally invented it. So people are important for finding this stuff, but not for making the most of it. On top of that, I think we are very close to the point where LLMs can compete with humans on their own algorithmic development. Then it will be even more about who can spend more compute, because there will be tons of ideas to evaluate.