FWIW, AI is not entirely locked down in the Apple ecosystem. Sure, they control it, but they've already built the foundation of a major opportunity for developers.
There's an on-device LLM packaged into iOS, iPadOS, and macOS 26 (Tahoe) [1]. They even have a HIG on the use of generative AI [2].
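For anyone who hasn't looked yet: calling it from Swift is only a few lines via the Foundation Models framework covered in the WWDC session [1]. A rough sketch below, with the availability handling simplified; treat the exact names as approximate rather than authoritative:

    import FoundationModels

    enum SummaryError: Error { case modelUnavailable }

    // Ask the on-device system model for a short summary.
    func summarize(_ text: String) async throws -> String {
        // The model can be unavailable (unsupported hardware, Apple
        // Intelligence turned off, model assets still downloading, ...).
        guard case .available = SystemLanguageModel.default.availability else {
            throw SummaryError.modelUnavailable
        }

        // A session keeps its own context; instructions steer behavior.
        let session = LanguageModelSession(
            instructions: "Summarize the user's text in two or three sentences."
        )
        let response = try await session.respond(to: text)
        return response.content
    }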
Something like half of all Macs are already running macOS 26 [3], so this could be the most widely distributed on-device LLM on the planet.
I think people are sleeping on this, partly because the model is seen as underpowered. But we can presume it won't always be.
I've just posted a Show HN of an app I created for macOS 26 that uses Apple's local LLM to summarize conversations you've had with Claude Code and Codex. [3]
I've been somewhat surprised at the quality and reliability of Apple's built-in LLM and have only been limited by the logic I've built around it.
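In case it's useful: the pattern I'd point people at is guided generation against a @Generable type, which gets you structured summaries back instead of free-form text. This is just a sketch of the shape of it, not my actual implementation; ConversationSummary and its fields are made up for illustration:

    import FoundationModels

    // Made-up output type; @Generable lets the framework constrain
    // the model's output to this shape.
    @Generable
    struct ConversationSummary {
        @Guide(description: "One-sentence overview of the conversation")
        var overview: String

        @Guide(description: "Key decisions or follow-up items, if any")
        var actionItems: [String]
    }

    func summarizeConversation(_ transcript: String) async throws -> ConversationSummary {
        let session = LanguageModelSession(
            instructions: "Summarize coding-assistant transcripts concisely and accurately."
        )
        // Guided generation: the response content is a ConversationSummary,
        // not free-form text.
        let response = try await session.respond(
            to: transcript,
            generating: ConversationSummary.self
        )
        return response.content
    }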
I think Apple's packaging of an LLM into its core operating systems is actually a fast move on AI, and it even has the potential to pose an existential threat to Windows.
[1] https://developer.apple.com/videos/play/wwdc2025/286/
[2] https://developer.apple.com/design/human-interface-guideline...
Don’t a lot of Android devices come with Gemini Nano on the device?
Probably not as many out there as there are Apple devices, because it's only on the high-end ones at the moment. I don't think they're that far behind in numbers, though.
There is no major opportunity for developers on Apple's platforms when they can just rug-pull you whenever they please.
I can second this. I'm nearing launch on an app that uses both the new SpeechAnalyzer and the on-device LLM, and both have met or exceeded my expectations. A longer context window would always be nice, but then I remember it's running on a phone.
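One way to cope with the small context window is to chunk the input and then summarize the partial summaries. A rough sketch of that idea (the chunk size is a made-up placeholder, not a documented limit):

    import FoundationModels

    // Summarize a long transcript in pieces, then summarize the pieces.
    func summarizeLongTranscript(_ transcript: String) async throws -> String {
        let chunkSize = 6_000  // characters; tune to stay under the model's context limit

        // Split the transcript into roughly chunk-sized pieces.
        var chunks: [String] = []
        var current = transcript[...]
        while !current.isEmpty {
            let end = current.index(current.startIndex,
                                    offsetBy: chunkSize,
                                    limitedBy: current.endIndex) ?? current.endIndex
            chunks.append(String(current[..<end]))
            current = current[end...]
        }

        // Summarize each chunk in its own session to keep context small.
        var partials: [String] = []
        for chunk in chunks {
            let session = LanguageModelSession(
                instructions: "Summarize this transcript excerpt in a few sentences."
            )
            partials.append(try await session.respond(to: chunk).content)
        }
        if partials.count == 1 { return partials[0] }

        // Combine the partial summaries into one final summary.
        let finalSession = LanguageModelSession(
            instructions: "Combine these partial summaries into one concise summary."
        )
        return try await finalSession.respond(to: partials.joined(separator: "\n")).content
    }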