This just seems like the logical consequence of the chosen system to be honest. "Skills" as a concept are much too broad and much too free-form to have any chance of being secure. Security has also been obviously secondary in the OpenClaw saga so far, with users just giving it full permissions to their entire machine and hoping for the best. Hopefully some of this will rekindle ideas that are decades old at this point (you know, considering security and having permission levels and so forth), but I honestly have my doubts.
Skills are just more input to a language model, right?
That seems bad, but if you're also having your bot read unsanitized stuff like emails or websites I think there's a much larger problem with the security model
[dead]
I think the truth is we don’t know what to do here. The whole point of an ideal AI agent is to do anything you tell it to - permissions and sandboxing would negate that. I think the uncomfortable truth is as an industry we don’t actually know what to do other than say “don’t use AI” or “well it’s your fault for giving it too many permissions”. My hunch is that it’ll become an arms race with AI trying to find malware developed by humans/AI and humans/AI trying to develop malware that’s not detectable.
Sandboxing and permissions may help some, but when you have self modifying code that the user is trying to get to impersonate them, it’s a new challenge existing mechanisms have not seen before. Additionally, users don’t even know the consequences of an action. Hell, even curated and non curated app stores have security and malware difficulties. Pretending it’s a solved problem with existing solutions doesn’t help us move forward.