
bgirard yesterday at 9:03 PM

It's a fun trope to repeat, but that's not what OpenAI is doing. I get a ton of value from ChatGPT and Codex through my subscription. As long as the inference is not done at a loss, this analogy doesn't hold. They're not paying me to use it. They are generating output that is very valuable to me, much more than my subscription costs.

I've been able to set up cross-app automation for my partner's business, remodel my house, plan a trip to Japan with help navigating the cultural barrier, vibe-code apps, get technical support, and so much more.


Replies

bloppe yesterday at 9:20 PM

To be fair, I would get a ton of value out of someone selling dollars for 20 cents apiece.

But ya, OAI is clearly making a ton of revenue. That doesn't mean it's a good business, though. Give them a 20-year horizon and shareholders will be very upset unless the firm can deliver about a trillion in profit, not revenue, to justify the $100B (so far) in investment, and even that would barely beat the long-term S&P 500 average return.

But Altman himself has said he'll need much more investment in the coming years. And even if OAI became profitable by jacking up prices and flooding GPT with ads, the underlying technology is so commodified that they'd never be able to achieve a high margin, assuming they can turn a profit at all.
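A rough back-of-the-envelope sketch of that compounding argument, in Python (the ~10% and ~12% return rates are assumptions for illustration; the $100B is the parent comment's figure):

    # Sketch: what $100B would have to grow into over 20 years just to
    # match an index fund, under an assumed long-run S&P 500 return.
    invested = 100e9       # approximate total invested in OpenAI so far
    years = 20

    for annual_return in (0.10, 0.12):   # assumed nominal annual returns
        benchmark = invested * (1 + annual_return) ** years
        print(f"At {annual_return:.0%}/yr: ${benchmark / 1e9:,.0f}B after {years} years")

    # ~$673B at 10%/yr and ~$965B at 12%/yr, i.e. roughly the
    # "about a trillion" bar the parent comment describes.

So merely matching the index over 20 years on ~$100B of investment works out to somewhere between roughly $0.7T and $1T, which seems to be where the trillion-dollar figure comes from.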

felixfurtak yesterday at 9:17 PM

All of which you will be able to do with your bundled assistant in the not-too-distant future.

OpenAI is a basket case:

- Too expensive and inconvenient to compete with commoditized, bundled assistants (from Google/Microsoft/Apple)

- Too closed to compete with cheap, customizable open-source models

- Too dependent on partners

- Too late to establish its own platform lock-in

It echoes what happened to:

- Netscape (squeezed by Microsoft bundling + open protocols)

- BlackBerry (squeezed by Apple ecosystem + open Android OS)

- Dropbox (squeezed by iCloud, Google Drive, OneDrive + open tools like rclone)

When you live between giants and open-source, your margin collapses from both sides.

munk-a yesterday at 9:13 PM

As a developer - ChatGPT doesn't hold a candle to Claude for coding-related tasks and underperforms for arbitrary-format document parsing[1]. It still has value and can handle a lot of tasks that would amaze someone in 2020 - but it is simply falling behind while spending much more to do so.

1. It actually underperforms Claude, Gemini, and even some of the Grok models on accuracy for our use case of parsing PDFs and other rather arbitrarily formatted files.

tartoran yesterday at 10:15 PM

There's no doubt you're getting a lot of value from OpenAI; I am too. And yes, the subscription delivers a lot more value than what you pay for. That's because they're burning investors' money, and it's not sustainable. Once the money runs out they'll have to jack up prices, and that's the moment of truth: we'll see what users are willing to pay for what. Google or another company may be able to provide all of that much cheaper.

rglullis yesterday at 9:16 PM

> They're not paying me to use it.

Of course they are.

> As long as the inference is not done at a loss.

If making money on inference alone were possible, there would be a dozen smaller providers taking the open-weights models and offering that as a service. But it seems that every provider is anchored at $20/month, so you can bet that none of them can go any lower.

jfb yesterday at 10:41 PM

That the product is useful does not mean the supplier of the product has a good business; and of course, vice versa. OpenAI has a terrible business at the moment, and the question is, do they have a plausible path to a good one?

mirthflat83 yesterday at 9:12 PM

Well, don't you think you're getting a ton of value because they're selling each of their dollars for 0.2 dollars?

steveBK123 yesterday at 9:18 PM

If the subscription cost 5x as much, would you still pay and feel you were getting such great value?

ReptileMan yesterday at 9:11 PM

> As long as the inference is not done at a loss, this analogy doesn't hold.

I think there was an article here that claimed even inference is done at a loss, on a per-subscriber basis. I think it was about their $200 subscription.

In a way, we'll soon be in a "deal with it" situation where they just impose metered pricing instead of subscriptions.