Hacker News

stkdump · yesterday at 7:35 PM · 9 replies

It's interesting how quickly people buy the "abuse" line of thinking. We understood (and knew for a long time) that the large AI labs are not monetarily profiting from subscription users that make heavy use of their subscription. That is independent of which agent/harness is used. The fair/real price for profitable use is pay-per-use token pricing.

These labs play the game of trying to kill competition in the harness game (because third-party harnesses risk commoditizing the underlying LLMs once they are all good enough), while playing a game of chicken with each other over how long they can burn money that way before they have to give up.

At some point they have to price their product fairly, and their only hope is to have killed all the competition by then, which is of course a game that they seem to be losing. Useful models are getting smaller and cheaper to run every year, and we have passed the threshold at which third-party harnesses will keep being developed even without the user base of subscription users.

Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed. The secondary bet that they can lock users into their ecosystem (which requires them to subsidize their harness via unprofitable subscriptions burning their capital) and be able to monetize that later will also fail. They will have to compete on merit alone, and that is much less profitable.


Replies

mediaman · yesterday at 8:42 PM

It's a big leap to go from "some users may be using large quantities of tokens" to "the labs are burning money on subs in an attempt to kill the competition."

Lots of businesses have subscription programs in which a small number of users are money losers, but which in aggregate make money.

It's not even obvious that the labs are losing much money on even a minority of users: the usage caps are fairly aggressive for Anthropic, and a cursory analysis of the likely actual cost of serving tokens suggests these are high-margin products at the API level, and unlikely to be unprofitable within the usage constraints given to subscribers.
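That back-of-envelope margin math can be sketched as follows; every number below is a made-up placeholder for illustration, not an actual price, cost, or cap from any provider:

```python
# Rough subscription-margin sketch. All figures are hypothetical
# placeholders, not real numbers from any lab's pricing or disclosures.
api_price_per_mtok = 15.00    # assumed API price per 1M output tokens ($)
serving_cost_per_mtok = 3.00  # assumed marginal serving cost per 1M tokens ($)
sub_price = 20.00             # assumed monthly subscription price ($)
cap_mtok = 5.0                # assumed monthly usage cap (millions of tokens)

# Gross margin on API traffic under these assumptions.
api_margin = 1 - serving_cost_per_mtok / api_price_per_mtok

# Serving cost for a subscriber who maxes out the cap every month.
worst_case_cost = cap_mtok * serving_cost_per_mtok

print(f"API gross margin:           {api_margin:.0%}")        # 80%
print(f"Worst-case cost per sub:    ${worst_case_cost:.2f}")  # $15.00
print(f"Sub profitable even at cap: {worst_case_cost < sub_price}")
```

Under these assumed figures even a subscriber who hits the cap every month is served at a gross profit; with different assumptions the conclusion flips, which is exactly what the thread is arguing about.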

I do think subscription models make commercial sense: users want predictable costs, and a subscription is a club good in which the marginal cost per token to the subscriber is zero, which helps consolidate their customers' purchasing volume onto one provider. But that's a different claim than them serving it unprofitably to kill competition.

Also, they (Anthropic) are transitioning many of their enterprise customers to API consumption billing anyway.

ashdksnndck · yesterday at 7:42 PM

> Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed.

I thought the prime bet was that the winning lab, the one that reaches takeoff through recursive self-improvement, will create a galactic superintelligence. Not saying I believe this, but the people running the labs do. Under this scenario, if you are a few months behind at the pivotal time, you might as well not exist at all.

Anon1096 · yesterday at 8:34 PM

> We understood (and knew for a long time) that the large AI labs are not monetarily profiting from subscription users that make heavy use of their subscription.

I don't think this is "understood" or "known" by anyone except Ed Zitron. Subscription plans like Claude Code also have rolling usage limits, so they could be profitable. Inference is very cheap, and unless you're using OpenClaw, no one is actually maxing out the usage window at all times. I'm sure the subs in aggregate are not money furnaces.

nofriend · yesterday at 9:16 PM

> We understood (and knew for a long time) that the large AI labs are not monetarily profiting from subscription users that make heavy use of their subscription.

"profit" is a weird concept in the software business. it might be true that there is an opportunity cost to these users, either because they displace other potential users by using up capacity, or because they would be willing to pay more if forced. but I don't believe that anyone is losing money on inference costs on any of their plans.

> At some point they have to price their product fairly

they are competing in a market. if most of their costs were inference, this would be a good thing, because everyone would have roughly the same prices, so as long as they had the best model they would win. in fact model development costs eclipse the cost of inference, and are something that non-frontier labs get much cheaper by distilling from the frontier companies.

> They will have to compete on merit alone, and that is much less profitable.

that's not really true. google won search on merit alone, and was massively successful as a result. the trick is that everyone from the poorest shmuck to the richest businessman uses google, so they win through scale. in ai, google and openai are betting that they can do the same thing. there's only really room for one winner at this game, even two is stretching it, so anthropic has to win by being the smartest model that only high-end businesses use. that's a very risky bet.

solenoid0937 · today at 12:18 AM

If you were right, Anthropic's ARR would be going down, but it's not: they just surpassed $30B, up from $14B two months ago.

AussieWog93 · yesterday at 9:36 PM

> Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed.

Honestly, I don't think it's that cut and dried. Their bet is that the marginal utility of having a smarter model more than makes up for the cost of the additional high-end hardware.

And honestly, if you look at their frankly insane revenue growth since Opus 4.5 was released, they were right.

> The secondary bet that they can lock users into their ecosystem (which requires them to subsidize their harness via unprofitable subscriptions burning their capital) and be able to monetize that later will also fail.

I think we're already past this point, honestly. They lowered usage limits, blocked OpenClaw, then tried to remove Claude Code from the $20/mo plan. They have always had low market share in the consumer chatbot market and don't seem to care about catching up to OpenAI there.

zozbot234 · yesterday at 8:47 PM

> These labs play the game of trying to kill competition in the harness game

Anthropic and Google are arguably playing that game. OpenAI's Codex CLI is open source and entirely optional for use of the GPT Codex models.

mannanj · yesterday at 9:12 PM

What about the data they are accumulating for non-training purposes? That data isn't of negligible value; the "subscription cost" is really a data-harvesting opportunity. Don't be naive and assume our data is not incredibly valuable.

cyanydeez · yesterday at 8:59 PM

The thing is, the harness _is_ the model at the end of the day:

https://en.wikipedia.org/wiki/Turtles_all_the_way_down
