Apple really dropped the ball here. They had every ability to make something competitive with Nvidia for AI training as well as inference, by selling high end multi GPU Mac Pro workstations as well as servers, but for some reason chose not to. They had the infrastructure and custom SoCs and everything. What a waste.
It really could have been a bigger market for them than even the iPhone.
If my Grandma had wheels she would be a bicycle. Apple would need to transition from being a consumer electronics company to being a B2B retailer for data centre hardware to take advantage of this.
Obviously Siri from WWDC two years ago was a disaster for Apple. Other than that they seem to have done pretty well navigating the new LLM world. I do think they would benefit from having their own SOTA LLM, but I don’t think it is necessary for them. My mental model for LLMs and Apple is that they are similar to GarageBand: “Now everyone can play an instrument” becomes “now anyone can make an app”. Apple owns the interface to the user (I don’t see anyone making nicer-to-use consumer hardware) and can use whatever stack in the background to deliver the technical features they decide to.
They didn’t drop the ball at all?
They want to be able to sell handsets, desktops and laptops to their customer base.
Pursuing a product line that would divert finite silicon manufacturing capacity away from that user base would be corporate suicide.
Even nvidia has all but dropped support for its traditional gaming customer base to satisfy its new strategy.
At any rate, the local inference capabilities are only going to get cheaper and more accessible over the coming years, and Apple are probably better placed than anyone to make it happen.
Don’t mistake stock market performance for revenue. NVIDIA makes ~$200B annually, about the same as Apple makes from iPhones. It’s a big market, but GPUs aren’t just for AI.
Nah, Apple made the right choice. Nobody except a niche market of hobbyists is interested in running tiny quantized models.
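For context on what “tiny quantized models” actually means in hardware terms, here’s a back-of-the-envelope sketch (the rule of thumb and example sizes are my own illustration, not from the thread): weight memory is roughly parameters × bits-per-weight ÷ 8, ignoring KV cache and activation overhead.

```python
def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight footprint in GB (1 GB = 1e9 bytes).

    Rough rule: bytes ~= parameters * bits_per_weight / 8.
    Ignores KV cache and activations, which add real overhead in practice.
    """
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 70B model at 4-bit quantization needs roughly 35 GB just for weights:
# more than most consumer GPUs have, but well within a high-RAM Mac.
print(model_size_gb(70, 4))  # -> 35.0
# An 8B model at 4 bits is ~4 GB, which is the "tiny" hobbyist territory.
print(model_size_gb(8, 4))   # -> 4.0
```

The point of the arithmetic: unified memory is the one axis where Apple’s hardware already competes, since mid-size quantized models fit where discrete GPUs run out of VRAM.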
If Apple doesn't offer a Linux product, they can't be taken seriously for headless computing tasks. They are adamant about controlling the whole stack, so unless they build some server version of macOS (and wait years for the community to get accustomed to it), they will remain a consumer/professional-oriented company.
This is what needs to come back, with modern hardware and a modern interconnect.
> They had the infrastructure and custom SoCs and everything. What a waste.
What are they wasting, exactly?
How is this dropping the ball? I think they dropped the ball a long time ago by waiting until M5 to add integrated tensor cores instead of relying on the separate ANE that was there before.
For multi-GPU, you can now network multiple Macs together at high speed. Their biggest disadvantage versus Nvidia right now is that no one wants to do kernel authoring in Metal. AMD learned that the hard way when they gave up on OpenCL and built HIP.
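The “network multiple Macs” approach lives or dies on interconnect latency. A rough sketch of the per-hop cost (the link speed and hidden-dimension figures are illustrative assumptions, not specs of any particular setup): a nominal 40 Gb/s link moving one fp16 activation vector per token between machines.

```python
def transfer_time_us(hidden_dim: int, bytes_per_elem: int, link_gbps: float) -> float:
    """Microseconds to send one activation vector over a network link.

    Assumes the link's full nominal bandwidth is usable (optimistic);
    real protocol overhead and latency make this a lower bound.
    """
    bits = hidden_dim * bytes_per_elem * 8
    return bits / (link_gbps * 1e9) * 1e6

# One 8192-dim fp16 activation over an assumed 40 Gb/s link:
# ~3.3 microseconds per hop. Small on its own, but it compounds
# across layers and pipeline stages, which is why fast interconnect
# matters far more for training than for token-at-a-time inference.
print(round(transfer_time_us(8192, 2, 40), 2))
```

This is the quantitative reason the multi-Mac story works better for inference than for training: inference only crosses the link once per token per split point, while training syncs gradients constantly.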
> something competitive with Nvidia for AI training
Apple is counting on something else: model shrink. Everyone is now looking at "how do we make these smaller?"
At some point a beefy Mac Studio and the "right-sized" model is going to be what people want. Apple dumped a four-pack of them into the hands of a lot of tech influencers a few months back, and the results were fairly interesting (expensive, tho).
Nothing is a bigger market than the iPhone, let alone expensive niche machines.
Just about everybody who isn't Nvidia dropped the ball, bigtime.
Intel should have shipped their GPUs with much more VRAM from day one. If they had done this, they'd have carved out a massive niche and much more market share, and it would have been trivially simple to do.
AMD should have improved their tools and software, etc.
Apple should have done as you say.
Google had nigh on a decade to boost TPU production, and they're still somehow behind the curve.
Such a lack of vision. And thus Nvidia is, now quite durably, the most valuable company in the world. Imagine telling that to a time traveler from 2018.