> Is it actually being sold at a steep discount? Anthropic CEO has stated they have high margins on inference, so training is the big cost center.
They're spending more than they're making. For the foreseeable future, saying "we could be profitable if we stopped training" is goofy, because they can't stop. If they do, no one will want to use their product, because it will be overtaken by competitors within three months.
I get it that in 10 years all of this might peak and we'll be content using old models, but that'll be a very different landscape, and Anthropic might not be a part of it anymore if they don't start making money before then.
That's a perfectly valid approach if you can balance capex and revenue. Why stop and try to be profitable when the economy is giving you the liquidity to push that down the road?
Models are already super useful, but if you can make them more useful by burning cash people are willing to hand you, why not?
> I get it that in 10 years all of this might peak and we're gonna be content using old models
I would personally be happy using gpt 5.3 codex for the foreseeable future, with just improvements in harnesses
IMO we're already at the point where even if these companies collapse and the models end up being sold at the cost of inference (no new training), we would be massively ahead