But do you (or MSFT) trust it to do that correctly and consistently, and to handle the failure modes (what happens when the meaning of that button/screen changes)?
I agree, an assistant would be fantastic in my life, but LLMs aren't AGI. They cannot reason about my intentions, don't ask clarifying questions (bring back ELIZA), and handle state in an interesting way (are there designs out there that automatically prune/compress context?).
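On the pruning question, the naive version I can picture is a rolling summary once the history blows a token budget. A minimal sketch below, with everything hypothetical: the names, the 4-chars-per-token estimate, and the "summary" step, which is just truncation standing in for a real summarizer call.

```python
from typing import Dict, List

MAX_TOKENS = 3000      # hypothetical context budget
CHARS_PER_TOKEN = 4    # crude token estimate, not a real tokenizer

def estimate_tokens(messages: List[Dict[str, str]]) -> int:
    # Rough proxy: total characters divided by an assumed chars-per-token ratio.
    return sum(len(m["content"]) for m in messages) // CHARS_PER_TOKEN

def prune_context(messages: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """Drop the oldest turns until the history fits the budget, then prepend
    a compressed gist of what was dropped. A real system would summarize with
    the model itself and re-check the budget after adding the summary."""
    pruned = list(messages)
    dropped_gists = []
    while estimate_tokens(pruned) > MAX_TOKENS and len(pruned) > 2:
        oldest = pruned.pop(0)
        dropped_gists.append(oldest["content"][:120])  # truncation as a stand-in summary
    if dropped_gists:
        summary = {
            "role": "system",
            "content": "Earlier conversation (compressed): " + " | ".join(dropped_gists),
        }
        pruned.insert(0, summary)
    return pruned

if __name__ == "__main__":
    history = [{"role": "user", "content": "x" * 2000} for _ in range(10)]
    print(estimate_tokens(history), "->", estimate_tokens(prune_context(history)))
```

Even that toy version shows the trade-off: you stay under budget, but the assistant's "memory" of early turns degrades to whatever the summarizer kept.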