Hacker News

zozbot234 · yesterday at 10:00 PM

That AI will have to be significantly preferable to the baseline of open models running on cheap third-party inference providers, or even on-prem. This is a bit of a challenge for the big proprietary firms.


Replies

johnvanommen · yesterday at 10:45 PM

> the baseline of open models running on cheap third-party inference providers, or even on-prem. This is a bit of a challenge for the big proprietary firms.

It’s not a challenge at all.

To win, all you need to do is starve your competitors of RAM.

RAM is the lifeblood of AI: without RAM, AI doesn’t work.
