> China is pursuing these because they cannot compete on the frontier.
? Claude, ChatGPT, etc are heinously expensive for tiny benefits lmao. Local + efficient is clearly the future
AI boosters cling to this notion because it's the only way the massive data center buildouts make any sense at all. I guess you could say the US is winning the frontier AI race. Okay. I'm never going to grant a cloud service access to all the contents of my hard drive; that's just never going to happen. So if you expect me, and the many people who feel the same way, to get on this train, you'd better have a local, lightweight model too, or we're not even having a discussion. The answer is just no.
> ? Claude, ChatGPT, etc are heinously expensive for tiny benefits lmao. Local + efficient is clearly the future
Corporate America is where the money is, and corporate America will dictate which products succeed by virtue of spend. Individuals aren't going to be paying $100s or $1000s/month en masse for these models, but businesses will. Being local and efficient isn't that important at this stage, and even so, as American companies continue to scale and invest, they'll be able to make those models more local and efficient if the market wants it. Sort of like how you used to have a big, giant desktop computer and now you've got a supercomputer in your pocket. Going straight to "local and efficient" means going straight to being behind, because at some point, perhaps even now, the local and efficient model won't be able to keep up.
For some reason, people think they somehow know something that Google or Nvidia or whoever, with hundreds of billions of dollars of real money at stake, don't already know, and it's both amusing and bizarre to see this play out again and again in off-hand comments like "lol tiny benefits".
You buy an iPhone even though the cheap-o Wal-Mart Android phone "does the same thing" for $100. Except that in this case, the Android phone just puts you out of business while those spending big money for "tiny benefits" beat you in the market.
> ? Claude, ChatGPT, etc are heinously expensive for tiny benefits lmao
Unfortunately, local inference is inefficient, hundreds of times less efficient than cloud. When you answer one request at a time, you still have to fetch all active weights into the compute units once per token. When you run a batch of 300, you load the weights once and compute 300 tokens at a time.
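A rough back-of-the-envelope sketch of that batching effect (all numbers are assumptions for illustration: a 70B-parameter model with 8-bit weights and ~1 TB/s of memory bandwidth; KV-cache traffic and compute limits are ignored):

```python
# Decoding one token requires streaming every active weight from memory
# into the compute units; batching amortizes that stream across requests.
# Hypothetical numbers: 70B params at 8-bit (1 byte/param), 1 TB/s bandwidth.

WEIGHT_BYTES = 70e9      # assumed model size in bytes
BANDWIDTH_BPS = 1e12     # assumed memory bandwidth, bytes/second

def tokens_per_second(batch_size: int) -> float:
    """Aggregate tokens/s, assuming weight streaming is the bottleneck."""
    seconds_per_step = WEIGHT_BYTES / BANDWIDTH_BPS  # one full pass over weights
    return batch_size / seconds_per_step             # one token per request per pass

print(f"batch=1:   {tokens_per_second(1):,.0f} tok/s")    # ~14 tok/s
print(f"batch=300: {tokens_per_second(300):,.0f} tok/s")  # ~4,286 tok/s
```

Same memory traffic either way, but roughly 300x the aggregate throughput, which is where the 100x-plus efficiency gap comes from.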
Compared to cloud, local inference is also less flexible. You can't scale up 5x or 20x, you can't absorb spikes, and you pay for the hardware whether you use it or not, even though utilization is typically very low, maybe 5%. And to run a decent model, your system costs $2,000 or more.
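Putting hypothetical numbers on the utilization point (assumed: a $2,000 machine over a 3-year life at 5% utilization, generating at the ~14 tok/s single-request rate from the sketch above; electricity ignored):

```python
# Amortized cost per million tokens for an under-utilized local box.
# All figures are assumptions, not measurements.

SYSTEM_COST_USD = 2000.0
LIFETIME_HOURS = 3 * 365 * 24        # 3-year hardware life
UTILIZATION = 0.05                   # fraction of time actually generating
LOCAL_TOK_PER_SEC = 14               # single-request decode speed (assumed)

active_seconds = LIFETIME_HOURS * UTILIZATION * 3600
tokens = active_seconds * LOCAL_TOK_PER_SEC
print(f"${SYSTEM_COST_USD / (tokens / 1e6):.2f} per million tokens")  # ~$30
```

Under those assumptions, the local box comes out to roughly $30 per million tokens before electricity, above what cloud APIs typically charge for output tokens; the gap only closes if utilization goes up.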