I think the uncomfortable truth is that, as an industry, we don't actually know what to do here beyond saying "don't use AI" or "well, it's your fault for giving it too many permissions". The whole point of an ideal AI agent is to do anything you tell it to - permissions and sandboxing would negate that. My hunch is that it'll become an arms race: AI trying to detect malware developed by humans and AI, and humans and AI trying to develop malware that evades detection.
Sandboxing and permissions may help somewhat, but self-modifying code that the user is deliberately asking to impersonate them is a challenge existing mechanisms weren't designed for. On top of that, users often don't understand the consequences of the actions they approve. Hell, even curated and uncurated app stores struggle with security and malware. Pretending this is a solved problem with existing solutions doesn't help us move forward.
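To make that concrete, here's a minimal sketch of the kind of allowlist permission gate people propose (all names are hypothetical, Python just for illustration): once the user grants the agent the scopes it needs to act as them, the gate has no way to tell an intended action from a malicious one.

    # Hypothetical sketch: an allowlist permission gate for agent tool
    # calls. None of these names come from a real framework.
    PERMISSIONS = {"read_file", "send_email"}  # scopes the user granted

    def run_tool(tool: str, args: dict) -> None:
        if tool not in PERMISSIONS:
            raise PermissionError(f"agent may not call {tool!r}")
        # The gate only sees *which* tool is called, not *why*.
        # "send_email" drafted on the user's behalf and "send_email"
        # exfiltrating their inbox are indistinguishable at this layer.
        print(f"executing {tool} with {args}")

    run_tool("send_email", {"to": "boss@example.com", "body": "..."})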