I’m not convinced that automated checks will be able to reliably assess whether a plugin is malicious.
I think the best (only?) way to solve the plugin security problem would be to properly sandbox them with an explicit API and permission system.
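To sketch what I mean by "explicit API and permission system": plugins would declare capabilities up front in their manifest, and the host would only hand them API methods for the capabilities they declared. None of these names (`permissions`, `vault:read`, `net:fetch`, `makeApi`) are real Obsidian APIs - purely illustrative:

```javascript
// Hypothetical capability-gated plugin API. A plugin declares the
// permissions it needs in its manifest; every API call checks the grant.
const manifest = {
  id: "example-plugin",
  permissions: ["vault:read"], // declared up front, shown to the user at install
};

function makeApi(manifest) {
  const granted = new Set(manifest.permissions);
  const demand = (perm) => {
    if (!granted.has(perm)) {
      throw new Error(`Permission not granted: ${perm}`);
    }
  };
  return {
    readNote(path) {
      demand("vault:read");
      return `contents of ${path}`; // stand-in for a real vault read
    },
    fetchUrl(url) {
      demand("net:fetch"); // not declared above, so this throws
      return fetch(url);
    },
  };
}

const api = makeApi(manifest);
console.log(api.readNote("daily.md")); // "contents of daily.md"
try {
  api.fetchUrl("https://example.com");
} catch (e) {
  console.log(e.message); // "Permission not granted: net:fetch"
}
```

The point is that the user sees the declared set at install time, and anything the plugin didn't declare simply isn't reachable.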
It doesn't do anything about first-party malware, but it can help a lot in gauging whether dependencies are kept up to date and whether they contain known CVEs, the same way that e.g. Trivy scans for them and Artifacthub highlights them.
I am curious how well this works out in practice for the ecosystem, though. In my experience blanket scans have a good chance of producing false positives (a CVE exists but doesn't apply to the context the dependency is used in), so the results need some know-how to interpret correctly, which can cause a lot of maintainer churn.
Read through the blog post. A permissions system is planned in addition to the automated scans and more controls for teams.
All are necessary because permissions alone can't stop certain malicious behaviors. Look at some scorecards on the Community site and you'll quickly see that some of the warnings aren't things a permission system or sandboxing could catch.
The blog post contains details about the rollout, but it will be a phased approach because it requires changes to the plugin API.
Obviously this wouldn’t be compatible with existing plugins, so I’d separate legacy plugins and new plugins, and add a lot of friction to install the legacy plugins, which will be deprecated at some point.
Podman/Linux has an API with a permission system and we still got Copy Fail: https://garrido.io/notes/podman-rootless-containers-copy-fai...
Security and authorization are just hard, and at some point, if you're designing a platform, you have to ask yourself whether the flexibility is worth the risk. Planning for a perfectly safe system is a hopeless proposition.
IMO this is an outdated view. Existing developer platforms have had to rely on static heuristics and capability-based permission systems, but now AI review can run at scale and surface user-hostile intent in ways that weren't possible before.
Permission systems are definitely useful for hard limits - but AI review can surface way more detail (what kinds of things are actually sent over the network, etc.).
They don't have to reliably assess whether a plugin is malicious.
The checks are a filter so they can apply manual review only to those plugins which pass the baseline (and automatable) requirements.
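In pipeline terms it's just triage: cheap automated checks run over every submission, and only plugins that pass every baseline check reach a human. The specific checks below are made up for illustration, not the real pipeline:

```javascript
// Triage sketch: automated baseline checks filter submissions so manual
// review only sees plugins that already pass the automatable requirements.
const baselineChecks = [
  { name: "no-eval", pass: (src) => !src.includes("eval(") },
  { name: "no-child-process", pass: (src) => !src.includes("child_process") },
];

function triage(plugins) {
  const needsHumanReview = [];
  const autoRejected = [];
  for (const p of plugins) {
    const failed = baselineChecks.filter((c) => !c.pass(p.source));
    if (failed.length === 0) {
      needsHumanReview.push(p.id); // passed the baseline, worth a human's time
    } else {
      autoRejected.push({ id: p.id, failed: failed.map((c) => c.name) });
    }
  }
  return { needsHumanReview, autoRejected };
}
```

The checks don't need to catch everything - they just need to keep the manual-review queue small and flag the obvious stuff with a reason attached.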
Sandbox? Cool, now the plugin that reads your private notes runs inside a sandbox and sends the notes back home from there.
>I think the best (only?) way to solve the plugin security problem would be to properly sandbox them with an explicit API and permission system.
I want to say "and especially prevent them from touching my private data (i.e. the whole point of Obsidian plugins being to read/write the documents)".
But if it can't talk to the internet, I kind of don't see the issue.
EDIT: Apparently, due to how JS and Electron work, Obsidian plugins are just JS blobs that run in the global scope, and can read/write the whole filesystem (limited by user permissions) and make HTTP requests? Can someone confirm/deny this pls?