> Dario in his recent interview with Dwarkesh made it abundantly clear that they have substantial inference margins, and they use that to justify the financing for the next training run.
You're putting way too much faith in Dario's statements. It wasn't "abundantly clear" to me. In that interview, prior to explaining how inference profits work, he said, "These are stylized facts. These numbers are not exact. I'm just trying to make a toy model," followed shortly by "[this toy model's economics] are where we're projecting forward in a year or two."
So you think Dario was just straight-up lying when he said each model recoups its training costs and is profitable? For that to be true, inference just has to have good margins. If you do some basic math and compare against Chinese open-source models, there's just no way Sonnet is actually as expensive to serve as its API prices indicate.
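The "basic math" here can be made concrete with a toy per-model P&L, in the spirit of the "stylized facts" Dario described. Every number below is a hypothetical placeholder I made up for illustration, not a real Anthropic figure:

```python
# Toy model of per-model economics. All dollar amounts are hypothetical
# placeholders, chosen only to illustrate the shape of the argument.

def inference_margin(revenue: float, serving_cost: float) -> float:
    """Gross margin on inference: fraction of revenue left after serving costs."""
    return (revenue - serving_cost) / revenue

def recoups_training(training_cost: float, revenue: float, serving_cost: float) -> bool:
    """A model 'pays for itself' if inference profit exceeds its training cost."""
    return (revenue - serving_cost) > training_cost

# Hypothetical: a $100M training run, $400M lifetime API revenue,
# $150M lifetime serving (compute) cost.
training_cost = 100e6
revenue = 400e6
serving_cost = 150e6

print(f"inference margin: {inference_margin(revenue, serving_cost):.1%}")
print(f"recoups training: {recoups_training(training_cost, revenue, serving_cost)}")
```

Under these made-up numbers the margin is 62.5% and the model clears its training bill; the debate above is really about whether the real `serving_cost` is anywhere near that low.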