It has much more utility on a phone (access to the camera and photos, various sensors, contacts, chats, smart home, payment methods...) than on a PC. I can imagine an AI that's more proactive: I don't go to it with questions, but it helps me manage my day effectively and surfaces information where it's useful.
Okay, but does it need to be deeply integrated into the OS or can it just interact with programs through their normal interfaces?
The most effective way to get an LLM to control a computer right now is to just give it a Unix terminal, because that's already a text-based environment where programs are expected to be highly interoperable.
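To make that concrete, here's a rough sketch of the "give it a terminal" loop: the model proposes a shell command, we run it, and the output goes back into the transcript as plain text. This is my illustration, not an existing tool; in particular `ask_model` is a hypothetical placeholder for whatever LLM API you actually call.

```python
# Minimal sketch of the "give the model a terminal" approach: the LLM emits
# shell commands, we run them, and the output is fed back as plain text.
import subprocess

def ask_model(transcript: str) -> str:
    """Hypothetical placeholder: send the transcript to an LLM and get back
    the next shell command (or the literal string DONE)."""
    raise NotImplementedError("wire this up to the model of your choice")

def run_shell(command: str, timeout: int = 30) -> str:
    """Run one command in a shell and capture stdout/stderr as text."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return result.stdout + result.stderr

def agent_loop(task: str, max_steps: int = 10) -> str:
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        command = ask_model(transcript).strip()
        if command == "DONE":  # convention: the model signals it's finished
            break
        output = run_shell(command)
        # Everything stays text, so the model sees exactly what a user at a
        # terminal would see, and any installed program is reachable.
        transcript += f"$ {command}\n{output}\n"
    return transcript
```

The point isn't the loop itself; it's that the terminal already gives you the interoperability layer for free.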
What I'm saying is that you don't need to stop everything and redesign around AI; just allow for a decent level of interoperability, which iOS (and to a large extent Android) doesn't currently have.
The mobile app development model is oriented around packaging somewhat useful software (which could usually just be a web app) with malware and selling it for $0.99, which necessitates a ton of sandboxing and prevents this type of interoperability in the first place. I'd say focus on the semantic HTML side of the web instead, and design some way for LLMs to interact with websites in an open-ended way.
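As a rough illustration of what "lean on semantic HTML" could mean (my sketch, not an existing spec): boil a page down to its labeled structure, i.e. headings, links, and form fields, so a model can read and act on it as text. This uses only Python's standard-library HTMLParser and is obviously far short of a real interaction protocol.

```python
# Sketch: reduce a page to its semantic outline (headings, links, form fields)
# so an LLM can work with the page as structured text instead of pixels.
from html.parser import HTMLParser

class SemanticOutline(HTMLParser):
    def __init__(self):
        super().__init__()
        self.items = []
        self._capture = None  # tag whose text content we're waiting for
        self._href = ""

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("h1", "h2", "h3", "a", "button", "label"):
            self._capture = tag
            self._href = attrs.get("href", "")
        elif tag == "input":
            self.items.append(
                f"input[name={attrs.get('name', '?')} type={attrs.get('type', 'text')}]"
            )

    def handle_data(self, data):
        if self._capture and data.strip():
            text = data.strip()
            if self._capture == "a":
                self.items.append(f"link: {text} -> {self._href}")
            else:
                self.items.append(f"{self._capture}: {text}")
            self._capture = None

def outline(html: str) -> str:
    parser = SemanticOutline()
    parser.feed(html)
    return "\n".join(parser.items)

print(outline("<h1>Checkout</h1><label>Card number</label><input name='cc'>"
              "<a href='/help'>Need help?</a>"))
# h1: Checkout
# label: Card number
# input[name=cc type=text]
# link: Need help? -> /help
```

If pages were written with that structure in mind, the model-facing "API" of a website mostly falls out of the markup you already have.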