Hacker News

revolvingthrow | today at 5:42 AM

> pricing "Pro" $3.48 / 1M output tokens vs $4.40

I’d like somebody to explain to me how the endless comments of "bleeding edge labs are subsidizing the inference at an insane rate" make sense in light of a humongous model like v4 pro being $4 per 1M. I’d bet even the subscriptions are profitable, much less the API prices.

edit: $1.74/M input, $3.48/M output on OpenRouter
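
A quick back-of-the-envelope sketch of what those rates mean per request, using the OpenRouter prices quoted above; the token counts in the example are illustrative assumptions, not measured usage:

```python
# Per-token cost at the OpenRouter rates quoted above (USD per 1M tokens).
INPUT_RATE = 1.74 / 1_000_000   # USD per input token
OUTPUT_RATE = 3.48 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single API call at the quoted rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical heavy chat turn: 8k tokens in, 2k tokens out.
cost = request_cost(8_000, 2_000)
print(f"${cost:.4f} per request")  # ~$0.0209
```

At roughly two cents for a fairly large request, a subscription user would need thousands of such turns per month before inference cost alone exceeded a typical subscription price, which is the commenter's point.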


Replies

schneehertz | today at 6:06 AM

This price is high precisely because of DeepSeek's current shortage of inference cards; they claimed in their press release that once the Ascend 950 compute cards launch in the second half of the year, the price of the Pro version will drop significantly.

m00x | today at 6:08 AM

They are profitable relative to opex, but not once capex is included under current depreciation schedules, though those estimates are now edging higher than expected.

mirzap | today at 6:04 AM

My thoughts exactly. I also believe that subscription services are profitable, and the talk about subsidies is just a way to extract higher profit margins from the API prices businesses pay.

raincole | today at 6:19 AM

Insert the "always has been" meme.

But seriously, it just stems from the fact some people want AI to go away. If you set your conclusion first, you can very easily derive any premise. AI must go away -> AI must be a bad business -> AI must be losing money.

masafej536 | today at 6:06 AM

Point taken, but there aren't any Western providers there yet. Power is cheaper in China.
