This is the classic Apple approach: wait to understand what the thing is actually capable of (i.e., let others make the sunk investments), envision a solution that is far better than the competition, and then architect a path to a leapfrog product that builds a large lead.
What I don't get about Apple is that when everyone else was giving up on yet another VR attempt and moving into AI, they decided AI wasn't worth it and that it was the right time for a me-too VR headset.
So: no VR, given the price and lack of developer support, and a late arrival into AI.
I've had it turned off since Sequoia, and this I truly appreciate. It hasn't nagged me once to turn it or Siri on, and it isn't mandatory.
In comparison, when I open up JIRA or Slack I am always greeted with multiple new dialogs pointing at some new AI bullshit. We hates it, precious.
Apple aren’t in the business of building chatbots to impress investors (other than some WWDC2024 vaporware they’d rather not talk about any more). They’re in the business of consumer hardware.
Consumers want iPhones and (if Apple are right) some form of AR glasses in the next decade. That’s their focus. There’s a huge amount of machine learning and inference that’s required to get those to work. But it’s under the hood and computed locally. Hence their chips. I don’t see what Apple have to gain by building a competitor to what OpenAI has to offer.
Nvidia restricts gamer cards in data centers through licensing; if they feel threatened enough by Apple, they will probably release a cheaper consumer AI card to corner the local AI market, licensed so it can't be used in data centers.
Imagine a future where Nvidia sells the exact same product at completely different prices, cheap for those using local models, and expensive for those deploying proprietary models in data centers.
There are always three elements in the equations of a business model: 1. marginal cost, 2. marginal revenue, 3. value created.
For LLM providers, I believe the key is to focus on high-value problems such as coding or knowledge work, because of the high marginal cost of serving new customers (the tokens burnt) and the low marginal revenue if the problem is not valuable enough. In this sense, no LLM provider can scale the way earlier social media platforms did without taking huge losses. No meaningful user stickiness can be built unless you have users' data, and there is no meaningful business model unless people are willing to pay a high price for the problem you solve, the same way they pay for SaaS (a toy sketch of this arithmetic follows below).
I am really not optimistic about any LLM provider other than Anthropic. It seems the rest are just burning money, and for what? There is no clear path to monetization.
And once local LLMs are powerful enough, those providers will quickly become obsolete because of their costs and unsustainable business models. At the end of the day, I do agree that it is the consumer hardware makers that can win this game.
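A toy sketch of that unit-economics argument in Python (every number here is a made-up placeholder for illustration, not a real provider figure):

```python
# Toy unit economics for a flat-rate LLM subscription.
# All numbers are hypothetical placeholders, not real provider figures.

def monthly_margin(subscription_price: float,
                   tokens_per_month: float,
                   cost_per_million_tokens: float) -> float:
    """Marginal revenue minus marginal cost for one subscriber."""
    marginal_cost = tokens_per_month / 1_000_000 * cost_per_million_tokens
    return subscription_price - marginal_cost

# A light chat user vs. a heavy coding user on the same $20/month plan.
print(monthly_margin(20.0, 2_000_000, 3.0))    #  14.0 -> profitable
print(monthly_margin(20.0, 40_000_000, 3.0))   # -100.0 -> each heavy user deepens the loss
```

The point of the sketch: unlike a social network, where the marginal cost of one more user is near zero, an LLM provider's marginal cost scales with usage, so a flat subscription only works if the value solved (and the price charged) is high relative to the tokens burnt.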
It's the same everywhere: great fundamentals pay off. It's true of martial arts, dance, and absolutely about software platforms. You just have to trust that process and invest in it, which Apple does (although frustratingly not enough!).
Maybe they thought an investment in a product with lots of substitutes & high capital requirements wasn't very attractive.
My capex is even less than Apple's, I can ship to users' Apple hardware, and I can't access iPhone users' photos either... so really I'm the winner.
Honestly, I think Apple hasn't jumped deep into AI for two big reasons:
1) Apple is not a data company.
2) Apple hasn't found a compelling, intuitive, and most of all, consistent, user experience for AI yet.
Regarding point 2: I haven't seen anyone share a hands-down improved UX for a user-driven product outside of some variation of a chat bot. Even the main AI players can't advertise anything more than "have AI plan your vacation".
> I am actually of the opinion that without some kind of bailout, OpenAI could be bankrupt in the next 18-24 months, but I am horrible at predictions
I find this intriguing. Does anyone here have enough insight to speculate further?
So Apple’s AI acceleration and memory architecture is accidental, but nvidia’s is not?
That’s actually by design. Apple never jumps on the tech hype bandwagon.
They wait until the dust settles before making their well-thought-out moves.
Every time they've jumped on the hype train too quickly it hasn't worked out; Siri, for example.
What I think was a wasted opportunity was not bringing the Xserve back; it could have been one of the few end-to-end solutions out there at scale.
Apple is almost two years out from announcing Apple Intelligence, and it has barely delivered on any of the hype. The new Siri was delayed and barely mentioned at the last WWDC, and none of the features have been released in China.
In other news, people keep buying iPhones, and Apple just had its best quarter ever in China. AAPL is up 24% from last year.
I just realized that next year Apple's Neural Engine will be 10 years old, just like the "NPUs will change AI forever!" puff pieces.
Here's to another 10 years of scuffed Metal Compute Shaders, I guess.
> Pure strategy, luck, or a bit of both? I keep going back and forth on this, honestly, and I still don’t know if this was Apple’s strategy all along, or they didn’t feel in the position to make a bet and are just flowing as the events unfold maximising their optionality.
Maximizing the available options is in fact a "strategy", and often a winning one when it comes to technology. I would love to be reminded of a list of tech innovators who were first and still the best.
Anyway, hasn't this always been Apple's strategy?
Apple is just waiting for all the slop to inevitably crash to see what actually works
This seems mistaken to me. The core idea is that LLMs are commoditizing and that the UI (Siri in this case) is what users will stick with.
But... what's the argument that the bulk of "AI value" in the coming decade is going to be... Siri Queries?! That seems ridiculous on its face.
You don't code with Siri, you don't coordinate automated workforces with Siri, you don't use Siri to replace your customer service department, you don't use Siri to build your documentation collation system. You don't implement your auto-kill weaponry system in Siri. And Siri isn't going to be the face of SkyNet and the death of human society.
Siri is what you use to get your iPhone to do random stuff. And it's great. But ... the world is a whole lot bigger than that.
Apple never competed in the "AI race" in the first place, because they already knew they were at the finish line.
This was really unsurprising [0].
> Then Stargate Texas was cancelled, OpenAI and Oracle couldn’t agree terms, and the demand that had justified Micron’s entire strategic pivot simply vanished. Micron’s stock crashed.
Well... no. The Stargate expansion was cancelled, but the originally planned 1.2 GW (!) datacenter is going ahead:
> The main site is located in Abilene, Texas, where an initial expansion phase with a capacity of 1.2 GW is being built on a campus spanning over 1,000 acres (approximately 400 hectares). Construction costs for this phase amount to around $15 billion. While two buildings have already been completed and put into operation, work is underway on further construction phases, the so-called Longhorn and Hamby sections. Satellite data confirms active construction activity, and completion of the last planned building is projected to take until 2029.
> The Stargate story, however, is also a story of fading ambitions. In March 2026, Bloomberg reported that Oracle and OpenAI had abandoned their original expansion plans for the Abilene campus. Instead of expanding to 2 GW, they would stick with the planned 1.2 GW for this location. OpenAI stated that it preferred to build the additional capacity at other locations. Microsoft then took over the planning of two additional AI factory buildings in the immediate vicinity of the OpenAI campus, which the data center provider Crusoe will build for Microsoft. This effectively creates two adjacent AI megacampus locations in Abilene, sharing an industrial infrastructure. The original partnership dynamics between OpenAI and SoftBank proved problematic: media reports described disagreements over site selection and energy sources as points of contention.
https://xpert.digital/en/digitale-ruestungsspirale/
> Micron’s stock crashed. [the linked article included a chart of the stock dropping to $320]
Micron’s stock is back to $420 today.
> One analysis found a max-plan subscriber consuming $27,000 worth of compute with their $200 Max subscription.
Actually, no. They'd miscalculated; the subscriber had consumed $2700 worth of tokens (a quick sanity check follows below).
The same place that checked that claim also points out:
> In fact, Anthropic’s own data suggests the average Claude Code developer uses about $6 per day in API-equivalent compute.
https://www.financialexpress.com/life/technology-why-is-clau...
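A quick sanity check of those corrected figures against the $200/month price (a minimal sketch; the 30-day month is my assumption, and note both figures are API-list-price equivalents, so Anthropic's actual marginal cost is presumably lower):

```python
# Sanity-check the corrected figures from the thread against a $200/month plan.
SUBSCRIPTION = 200.0        # $/month, Max plan price from the comment

heavy_user_burn = 2700.0    # corrected figure: $2700/month, not $27,000
avg_daily_burn = 6.0        # Anthropic's cited average for Claude Code developers
avg_monthly_burn = avg_daily_burn * 30   # assuming a 30-day month

print(heavy_user_burn / SUBSCRIPTION)    # 13.5  -> the heaviest users still burn ~13x what they pay
print(avg_monthly_burn)                  # 180.0 -> the *average* user sits just under the $200 price
```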
I like Apple's chips, but why do we put up with crappy analysis like this?
But why do I feel like the quality of Apple's software has declined sharply in recent years? The Liquid Glass design feels unpolished and not well thought out almost everywhere… it seems like even Apple can't resist falling victim to AI slop.
Don't worry, when Apple introduces it, it'll be revolutionary and 10% thinner.
I like how we are acting like this market is so novel and emergent revering the luck of some while lamenting the failures of others when it was all "roadmapped" a decade ago. It's like watching a Shaanxi shadow puppet show with artificial folk lore about the origins of the industry. I hate reality television!
Gemma4, in my view, is good enough to do things similar to Gemini 2.5 Flash: if I point it at code and ask for help and there is a problem with the code, it'll answer correctly in terms of suggestions, but it's not great at using all the tools or one-shotting things that require a lot of context or "expert knowledge".
If, a couple more iterations from now, say Gemma6 is as good as the current Opus and runs completely locally on a Mac, I won't really bother with the cloud models (a sketch of the local setup is below).
That’s a problem.
For the others anyway.
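For what it's worth, the local workflow is already easy to wire up. Here is a minimal sketch using the `ollama` Python client; the `gemma3` tag and the toy prompt are my stand-ins, since the future Gemma versions the parent mentions don't map to a pullable model yet:

```python
# Minimal local code-review loop via Ollama (https://ollama.com).
# Assumes the Ollama server is running and the model has been pulled,
# e.g. `ollama pull gemma3`. Swap the tag for whatever local model you prefer.
import ollama

def review(code: str) -> str:
    """Ask a locally running model to spot problems in a code snippet."""
    response = ollama.chat(
        model="gemma3",  # stand-in tag; the thread's "Gemma6" doesn't exist yet
        messages=[
            {"role": "system", "content": "You are a concise code reviewer."},
            {"role": "user", "content": f"What is wrong with this code?\n\n{code}"},
        ],
    )
    return response["message"]["content"]

if __name__ == "__main__":
    print(review("def avg(xs): return sum(xs) / len(xs)  # fails on an empty list"))
```

Everything stays on the machine: no tokens metered, no per-request cost, which is exactly why "good enough locally" is a problem for the cloud providers' business model.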