Hacker News

weevil · today at 2:46 AM

I think most people just assume it's magic, and are too awestruck by the hype to think critically.

Financially this feels similar to Uber's business plan in the 2010s: undercut the market with unsound pricing propped up by venture capital (PE was literally subsidising taxi fares; they admitted this, and their intention to readjust, but no one seemed to care), then stop manipulating the market and let fares even out at (gasp) what it cost to get a cab before Uber.

The difference here is that the LLM market is human productivity. Enormous subsidies are afforded to Anthropic, OpenAI, etc. in the form of VC money or compute credits, but eventually those debts will be called in. The free-to-use tier will vanish because it's simply not profitable, and we'll be left with several premium products that only a few people will actually pay for; even then, that may not be enough to cover their costs. That's when the bubble will burst.


Replies

great_psy · today at 4:07 AM

Actually I think there’s another option.

There’s the scenario where LLMs get more efficient for their size, and 2026 SOTA performance becomes achievable on a consumer-grade laptop.

Sure, with a 1000B-parameter model you will get better performance, but the average person will have it write some Python script, not derive new physics equations.

So in a sense the demand for LLM intelligence will reach a plateau (arguably we are there today for the avg person), so no subsidy will be required, because the avg person will not need the latest and greatest.

There’s not the same demand pattern for something like Uber.
