Interesting! I wonder how UI will evolve in the long term. If browser-use/computer-use agents and click-automation clones can drive pointer actions for us, do we really need complex UIs anymore? If yes, for how long?
I've been playing with writing a visionOS app that lets an AI agent be aware of where you're looking at any given time.
At some point I fully expect eye tracking (or attention tracking) to be common enough to be a first-class input method.