It's a combined ignorance of how these frontier models actually get built and what consumers want.
Many pundits think it's just a matter of scraping the internet and having a few ML scientists run ablation experiments to tune hyperparameters. That hasn't been true for over a year. The current requirements are more org-scale, more payoff from scale, more moat. The main legitimate competitive threat is adversarial distillation.
Many pundits also think that consumers won't pay a premium for small differences on the margin. That is very wrong-headed. I pay $200/month to a frontier lab because, even though its models score only a few percent higher on benchmarks, they are 5x more useful on the margin.
It is the benchmark error rate, not the benchmark success rate, that we actually trip up on.
Going from 85% to 90% success means the error rate falls from 15% to 10%, i.e. 1/3 fewer errors, and the effective gain can be even larger depending on the distribution of work you're doing.
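A quick sketch of that arithmetic (the benchmark numbers here are illustrative, and the compounding example assumes independent per-step success, which real workloads may not satisfy):

```python
def error_reduction(old_success: float, new_success: float) -> float:
    """Fraction of errors eliminated when the success rate improves."""
    old_err = 1 - old_success
    new_err = 1 - new_success
    return (old_err - new_err) / old_err

def task_success(step_success: float, n_steps: int) -> float:
    """Chance an n-step task completes with no failed step,
    assuming independent per-step success rates."""
    return step_success ** n_steps

print(error_reduction(0.85, 0.90))  # 1/3 of errors gone
print(task_success(0.85, 10))       # ~0.20
print(task_success(0.90, 10))       # ~0.35
```

On a hypothetical 10-step task, that "few %" per-step gap nearly doubles the end-to-end completion rate, which is why marginal benchmark gains can feel like 5x in practice.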
Do you pay OpenAI, or which lab do you use? Do you switch regularly?
> The current requirements are more org-scale, more payoff from scale, more moat.
What moat? None of the AI providers has a moat at the moment, and the trend doesn't suggest that any of them will in the near future.