> The evidence shows that there is no methodological moat for LLMs.
Does it? Then how come Meta hasn't been able to release a SOTA model? It's not for a lack of trying. Or compute. And it's not like DeepSeek had access to vastly more compute than other Chinese AI companies. Alibaba and Baidu have been working on AI for a long time and have way more money and compute, but they haven't been able to do what DeepSeek did.
They may not be leading (as in, releasing a SOTA model), but they can clearly match others, and easily, as Llama 3/4 shows. That proves the point: there is no moat. With enough money and compute, you can catch up. Whether you can build a business without a SOTA model is a different question.