> In its current form, however, trusted publishing applies to a limited set of use cases. Support is restricted to a small number of CI providers, it cannot be used for the first publish of a new package, and it does not yet offer enforcement mechanisms such as mandatory 2FA at publish time. Those constraints have led maintainer groups to caution against treating trusted publishing as a universal upgrade, particularly for high-impact or critical packages.
This isn't strictly accurate: when we designed Trusted Publishing for PyPI, we made it generic across OIDC IdPs (typically CI providers), and explicitly included an accommodation for creating new projects via Trusted Publishing (we called these "pending" publishers[1]). The latter is something not all subsequent adopters of the Trusted Publishing technique have picked up, which is IMO both unfortunate and understandable (since it's a complication over the data model/assumptions around package existence).
I think a lot of the pain here is self-inflicted on GitHub's part: deciding to remove normal API credentials entirely strikes me as extremely aggressive, and is completely unrelated to implementing Trusted Publishing. Combining the two in the same campaign has made things unnecessarily confusing for users and integrators, it seems.
[1]: https://docs.pypi.org/trusted-publishers/creating-a-project-...
I maintain some very highly used npm packages, and this situation just has me on edge. In our last release of dozens of packages, I was manually reading through our package-lock and package.json changes and reviewing every dependency change. Luckily our core libraries have no external dependencies, but our tooling has a ton.
We were left with a tough choice of moving to Trusted Publishers or allowing a few team members to publish locally with 2FA. We decided on Trusted Publishers because we've had an automated process with review steps for years, but we understand there's still a chance of a hack, so we're just extremely cautious with any PRs right now. Turning on Trusted Publishers was a huge pain with so many packages.
The real thing we want for publishing is to be able to continue using our CI-based publishing setup, with Trusted Publishers, but with a human-in-the-loop 2FA step.
But that's only part of a complete solution. HITL is only guaranteed to slow down malicious code propagating. It doesn't actually protect our project against compromised dependencies, and doesn't really help prevent us from spreading it. All of that is still a manual responsibility of the humans. We need tools to lock down and analyze our dependencies better, and tools to analyze our packages before publishing. I also want better tools for analyzing and sandboxing 3rd party PRs before running CI. Right now we have HITL there, but we have to manually investigate each PR before running tests.
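For the "lock down" part, one primitive that already exists is npm's `overrides` field in package.json, which forces a transitive dependency to a known-good version even when an intermediate package declares a broader range. A minimal sketch (the package name and version here are illustrative, not a recommendation):

```json
{
  "overrides": {
    "minimist": "1.2.8"
  }
}
```

It's a blunt instrument - every override has to be maintained by hand - but it does turn a silent transitive resolution into an explicit, reviewable line in the diff.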
The comment about maintainers not getting paid resonates. I'm a solo founder and these security changes, while necessary, add real friction to shipping.
The irony is that trusted publishing pushes everyone toward GitHub Actions, which centralizes risk. If GHA gets compromised, the blast radius is enormous. Meanwhile solo devs who publish from their local machine with 2FA are arguably more secure (smaller attack surface, human in the loop) but are being pushed toward automation.
What I'd like to see: a middle ground where trusted publishing works but requires a 2FA confirmation before the publish actually goes through. Keep the automation, keep the human gate. Best of both worlds.
The staged publishing approach mentioned in the article is a good step - at least you'd catch malicious code before it hits everyone's node_modules.
I really think that the main issue is that NPM itself will execute any script in the "postinstall" section of a package without asking the user for permission. This is a solved problem in other package managers: e.g. PNPM will only run scripts if the user allows them, and stores the allowlist in the package.json file for future reference.
In this scenario, if a dependency added a "postinstall" script because it was compromised, the script would not execute, and the user could review whether it should, greatly reducing the attack surface.
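For reference, pnpm stores that allowlist under the `pnpm` key in package.json via `onlyBuiltDependencies`; install scripts for anything not listed simply don't run. A minimal sketch (the two package names are illustrative examples of dependencies that legitimately need build scripts):

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild", "sharp"]
  }
}
```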
The shift wouldn't have been so turbulent if npm had updated their CLI in tandem. I still can't use 2FA to publish because the CLI simply cannot handle it.
I really don't think this should be a registry-level issue. As in, the friction shouldn't be introduced into _publishing_ workflows; it should be introduced into _subscription_ workflows, where there is an easy fix. Just stop supporting auto-update (through wildcard patch or minor versions) by default... Make the default behaviour to install whatever version you locked at install time (like `npm ci` does).
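Concretely, the only difference is the version specifier: a caret range like "^4.18.0" lets any newer compatible release install itself, while an exact pin only changes when a human edits it. A sketch of the pinned form (versions illustrative; `npm config set save-exact true` makes this the default for new installs):

```json
{
  "dependencies": {
    "express": "4.18.2",
    "lodash": "4.17.21"
  }
}
```

Paired with `npm ci`, which installs exactly what the committed lockfile says, a compromised upstream release can't reach you until you deliberately update.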
2FA publishing still doesn't work for me. I just use legacy tokens at this point; I gave up trying to figure out what's wrong.
Just to be clear, "trusted publishing" means a type of reverse vendor lock-in? Only some CI systems are allowed to be used for it.
Seems like requiring 2FA to publish, or trusted publishing, should prevent the vast majority of these issues.
The only tricky bit would be disallowing approving your own pull request when using trusted publishing. That case should fall back to requiring 2FA.
I think this turbulent shift is going to push a lot of node devs elsewhere.
I understand things need to be safe, but this is a recklessly fast transition.
I try to make this point when the subject comes up, but IMHO this is a lipstick-on-a-pig solution. The problem with npm security isn't stability or attestation; it's the impossibility of auditing all that garbage.
The software is too fine-grained. Too many (way too many) packages from small projects or obscure single authors doing way too many things that are being picked up for one trivial feature. That's just never going to work. If you don't know who's writing your software the answer will always end up being "Your Enemies" at some point.
And the solution is to stop the madness. Conglomerate the development. No more tiny things. Use big packages[1] from projects with recognized governance. Audit their releases and inclusion in the repository from a separate project with its own validation and testing. No more letting the bad guys push a button to publish.
Which is to say: this needs to be Debian. Or some other Linux distro. But really the best thing is for the JS community (PyPI and Cargo are dancing on the edge of madness too) to abandon its mistake and move everything into a bunch of Debian packages. Won't happen, but it's the solution nonetheless.
[1] cf. the stuff done under the Apache banner, or C++ Boost, etc...
I've been thinking: Java doesn't have many supply-chain issues, and their model is based on namespacing with the DNS system. If I want a library from vendor.com, the library to import is somewhere under com.vendor.*
Simple enough. Things like npm and pip reinvent a naming authority with no cost associated (so it's weak to Sybil attacks), all for not much. What do you get in exchange? You create equality by letting everyone contribute their wonderful packages, even those that don't have $15/yr? I'm sorry, was the previous leading internet mechanism not good and decentralized enough for you?
Java's package naming system is great in design. The biggest vuln in dependencies that I can think of in Java was not a supply-chain-specific vuln, but rather a general weakness of a library (log4j). But maybe someone with more Java experience can point to some disadvantage of the Java system that explains why we are not all copying it.
Can we finally declare this (and other incomplete language-specific package managers) a failed experiment and go back to a robust and secure distro-based package management workflow, with maintainers separate from upstream developers?
Why is JS, in particular, so deeply afflicted with these issues? Why hasn't there been an effort to create a more robust standard library? Or at least first-party libraries maintained by the JavaScript team? That way folks can pull in trusted deps instead of all the hodgepodge.
Go did a lot wrong. It was just awful before they added Go modules. But it's puzzling to me why, as a community and ecosystem, its 3rd party dependencies seem so much less bloated. Part of it, I think, is that the standard library is pretty expansive. Part of it is things like golang.org/x. But there are also a lot of corporate maintainers - and I feel like part of that is because packages are namespaced to the repository - which itself is namespaced to ownership. Technically that isn't even a requirement - but the community adopted it pretty evenly - and it makes me wonder why others haven't.
In all of this, people forget that NPM packages are largely maintained by volunteers. If you are going to put up hurdles and give us extra jobs, you need to start paying us. Open source licenses explicitly state some variation of "use at your own risk". A big motivation for most maintainers is that we can create without being told what to do.
I had 25 million downloads on NPM last year. Not a huge amount compared to the big libs, but OTOH, people actually use my stuff. For this I have received exactly $0 (if they were Spotify or YouTube streams I would realistically be looking at ~$100,000).
I propose that we have two NPMs. A non-commercial NPM that is 100% use at your own risk, and a commercial NPM that has various guarantees that authors and maintainers are paid to uphold.