Hacker News

famouswaffles yesterday at 5:32 PM

>Folks say handwavy things like “oh they’ll just sell ads” but even a cursory analysis shows that math doesn’t ad up relative to the sums of money being invested at the moment.

Ok, so I think there are two things here that people get mixed up on.

First, inference for the current state of the art is cheap now. There's no two ways about it. Statements from Google and Altman, as well as the prices third parties charge for tokens of top-tier open source models, paint a pretty good picture. Ads would be enough to make OpenAI a profitable company selling current SOTA LLMs to consumers.

Here's the other thing that mixes people up. Right now, OpenAI is not just trying to be 'a profitable company'. They're not just trying to stay where they are and build a regular business off it. They are trying to build and serve 'AGI', or as they define it, 'highly autonomous systems that outperform humans at most economically valuable work'. They believe that building and serving this machine to hundreds of millions of people would require costs orders of magnitude greater.

That purpose is where all the 'insane' levels of money are going. They don't need hundreds of billions of dollars in data centers to stay afloat or be profitable.

If they manage to build this machine, then those costs don't matter, and if things aren't working out midway, they can just drop the quest. They will still have an insanely useful product that is already used by hundreds of millions every week, along with the margins and unit economics to actually make money off of it.


Replies

cmiles8 yesterday at 5:36 PM

If OpenAI were the only company doing this, that argument might sort of make sense.

The problem is they have real competition now, and that market looks like an expensive race to an undifferentiated bottom.

If someone truly invents AGI and it's not easily copied by others, then I agree it's a whole new ballgame.

The reality is that, years into this, we seem to be hitting a limit to what LLMs can do, with only marginal improvements in each release. On that path this gets ugly fast.
