The most surprising part of uv's success to me isn't Rust at all, it's how much speed we "unlocked" just by finally treating Python packaging as a well-specified systems problem instead of a pile of historical accidents. If uv had been written in Go or even highly optimized CPython, but with the same design decisions (PEP 517/518/621/658 focus, HTTP range tricks, aggressive wheel-first strategy, ignoring obviously defensive upper bounds, etc.), I strongly suspect we'd be debating a 1.3× vs 1.5× speedup instead of a 10× headline — but the conversation here keeps collapsing back to "Rust rewrite good/bad." That feels like cargo-culting the toolchain instead of asking the uncomfortable question: why did it take a greenfield project to give Python the package manager behavior people clearly wanted for the last decade?
“Why did it take a greenfield project…?”
By definition, a greenfield project means being free from constraints.
So the answer is in your question: Why did it take a team unbound by constraints to try something new, as compared to a project with millions of existing stakeholders?
Single vision. Smaller team. What they landed on is a hit (no guarantee of that in advance!)
Conversely, with so many stakeholders, getting everyone to rally around a change (in advance) is hard.
In my experience this is about human nature and organisation, and it spans all types of organisations, not just Python or open source.
It also looks like Python would have got there eventually, given the foundations put in place, as noted in the article.
I largely agree but don't want to entirely discount the effect that using a compiled language had.
At least in my limited experience, the selling point with the most traction is that you don't already need a working Python install to get uv. And once you have uv, you can just go!
If I had a dollar for every time I've helped somebody untangle the mess of Python environments and libraries created by an undocumented mix of Python delivered through the distribution's package manager, versus native pip, versus manual installs...
At least on paper, Poetry and uv have a pretty similar feature set. You do, however, need a working Python environment to install and use Poetry.
Note that the advantages of Rust are not just execution speed: it's also a good language for expressing one's thoughts, and thus makes it easier to find and unlock the algorithmic improvements that deliver the real speedups.
But yeah. Python packaging has been dumb for decades and successive Python package managers recapitulated the same idiocies over and over. Anyone who had used both Python and a serious programming language knew it; the problem was getting anyone to do anything about it. I can't help thinking that maybe the main reason using Rust worked is that it forced anyone who wanted to contribute to it to experience what using a language with a non-awful package manager is like.
> the conversation here keeps collapsing back to "Rust rewrite good/bad." That feels like cargo-culting the toolchain instead of asking the uncomfortable question: why did it take a greenfield project to give Python the package manager behavior people clearly wanted for the last decade?
I think there are a few things going on here:
- If you're going to have a project that's obsessed with speed, you might as well use Rust/C/C++/Zig/etc. to develop it; otherwise you're always going to have Python and the Python ecosystem as a speed bottleneck. The Rust/C/C++/Zig ecosystems generally care a lot about speed, so you can pull in a library and know that it's probably going to be fast.
- For example, the Python ecosystem generally does not put much emphasis on startup time. I know there's been some recent work here on the interpreter itself, but even modules in the standard library, like "email", will pre-compile regular expressions at import time whether or not they're ever used (see the sketch after this list).
- Because the Python ecosystem doesn't generally optimize for speed (especially startup), the slowdowns end up being contagious. If you import a library that doesn't care about startup time, why should your library care about startup time? The same could maybe be said for memory usage.
- The bootstrapping problem is also mostly solved by using a compiled language like C/Rust/Go. If the package manager is written in Python (or even Node/JavaScript), you first have to have Python plus the tool's own dependencies installed before you can install Python and your dependencies. With uv, you copy/install a single binary file which can then install Python and dependencies and automatically do the right thing.
- I think it's possible to write a pretty fast implementation in Python, but you'd need to "greenfield" it by rewriting all of the dependencies yourself so you can optimize startup time and bootstrapping.
- Also, as the article mentions, there are _some_ improvements that have happened in the standards/PEPs that should eventually make their way into pip, though it probably won't be quite the gamechanger that uv is.
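To make the startup-time point concrete, here is a minimal sketch (nothing uv- or pip-specific, just the standard library) that times a cold import of the stdlib "email" machinery in a fresh interpreter; CPython's `-X importtime` flag gives the per-module breakdown.

```python
# Time a cold "import email.parser" in a fresh interpreter; this includes
# interpreter startup, which is roughly what a Python CLI tool pays on every
# invocation. Numbers vary by machine and Python version.
import subprocess
import sys
import time

start = time.perf_counter()
subprocess.run([sys.executable, "-c", "import email.parser"], check=True)
print(f"cold import took ~{(time.perf_counter() - start) * 1000:.0f} ms")

# For the per-module breakdown, CPython also supports:
#   python -X importtime -c "import email.parser"
```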
> That feels like cargo-culting the toolchain [...]
Pun intended?
Jokes aside, what you describe is a common pattern. It's also why, for a while, Google internally used to get decent speedups from rewriting old C++ projects in Go: the magic was mostly in the rewrite-with-hindsight.
If you put effort into it, you can also get there via an incremental refactoring of an existing system. But the rewrite is probably easier to find motivation for, I guess.
Consensus building and figuring out what was actually needed?
Someone on this site said most tech problems are people problems - this feels like one.
Greenfield mostly solves the problem because it's all new people.
I don't know the problem space and I'm sure that the language-agnostic algorithmic improvements are massive. But to me, there's just something about rust that promotes fast code. It's easy to avoid copies and pointer-chasing, for example. In python, you never have any idea when you're copying, when you're chasing a pointer, when you're allocating, and so on. (Or maybe you do, but I certainly don't.) You're so far from hardware that you start thinking more abstractly and not worrying about performance. For some things, that's probably perfect. But for writing fast code, it's not the right mindset.
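A trivial, hedged illustration of the "you never know when you're copying" point (my own example, nothing to do with uv's internals): two slices that look alike but have very different costs.

```python
# The list slice allocates a new list and copies a million element references;
# the memoryview slice is a zero-copy view over the same underlying buffer.
data = list(range(1_000_000))
copied = data[:]                  # new list, copies every reference

buf = bytes(1_000_000)
view = memoryview(buf)[:1000]     # no copy of the underlying bytes
```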
> That feels like cargo-culting the toolchain instead of asking the uncomfortable question: why did it take a greenfield project to give Python the package manager behavior people clearly wanted for the last decade?
This feels like a very unfair take to me. Uv didn’t happen in isolation, and wasn’t the first alternative to pip. It’s built on a lot of hard work by the community to put the standards in place, through the PEP process, that make it possible.
What uv did was to bring it all together.
It just has to do with values. If you value perf, you aren't going to write it in Python. And if you value perf, then everything else becomes a no-brainer as well.
It's the same way in JS land. You can make a game in a few kilobytes, but most web pages are still many megabytes for what should have been no JS at all.
I suspect that the non-Rust improvements are vastly more important than you’re giving credit for. I think the Go version would be 5x or 8x compared to the 10x, maybe closer. It’s not that the Rust parts are insignificant, but the algorithmic changes eliminate huge bottlenecks.
Poetry largely accomplished the same thing first with most of the speedups (except managing your Python installations), and had the disadvantage of starting before the PEPs you mentioned were standardized.
Because any change to the existing tools breaks backwards compatibility? It's worth noting that setuptools is in a similar situation to pip, where any change has a high chance of breaking things (as can be seen by perusing the setuptools and pip bug trackers). PEP 517/518 removed the implementation-defined nature of the ecosystem (which had caused issues for at least a decade; see e.g. the failures of distutils2 and bento), replacing it with a system where users merely complain about which backend to use (which is at least an improvement on the previous situation)...
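For anyone who hasn't looked at PEP 517/518 directly: the frontend reads the backend's name from `pyproject.toml` and calls a small set of standardized hooks. A stripped-down sketch of that handshake (real frontends like pip or uv run the backend in an isolated environment with its build requirements installed; this skips all of that):

```python
# Minimal, non-isolated sketch of a PEP 517 frontend invoking the build backend
# declared in pyproject.toml. Real tools add build isolation, dependency
# installation, and error handling on top of this.
import importlib
import os
import tomllib  # Python 3.11+

with open("pyproject.toml", "rb") as f:
    build_system = tomllib.load(f)["build-system"]

# PEP 518: [build-system] names the backend (e.g. "setuptools.build_meta").
module_name, _, attr = build_system["build-backend"].partition(":")
backend = importlib.import_module(module_name)
for part in (attr.split(".") if attr else []):
    backend = getattr(backend, part)

# PEP 517: every backend exposes build_wheel(); it returns the wheel filename.
os.makedirs("dist", exist_ok=True)
print("built", backend.build_wheel("dist"))
```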
I have been a big Astral and uv booster for a long time. But specifications like this one: https://gist.github.com/b7r6/47fea3c139e901cd512e15f42355f26... have me re-evaluating everything.
That's TensorRT-LLM in its entirety at 1.2.0rc6, locked to run on Ubuntu or NixOS with full MPI and `nvshmem`, the DGX container Jensen's Desk edition (I know because I also rip apart and `autopatchelf` NGC containers for repackaging on Grace/SBSA).
It's... arduous. And the benefit is what, exactly? A very mixed collection of maintainers have asserted that software behavior is monotonic along a single version axis, most of which they can't see, and we ran a solver over those guesses?
I think the future is collections of wheels that have been through a process the consumer regards as credible.
> it's how much speed we "unlocked" just by finally treating Python packaging as a well-specified systems problem instead of a pile of historical accidents.
A lot of that, in turn, boils down to realizing that it could be fast, and then expecting that and caring enough about it.
> but with the same design decisions (PEP 517/518/621/658 focus, HTTP range tricks, aggressive wheel-first strategy, ignoring obviously defensive upper bounds, etc.), I strongly suspect we'd be debating a 1.3× vs 1.5× speedup instead of a 10× headline
I'm doing a project of this sort (although I'm hoping not to reinvent the wheel (heh) for the actual resolution algorithm). I fully expect that some things will be barely improved or even slower, but many things will be nearly as fast as with uv.
For example, installing from cache (the focus for the first round) mainly relies on tools in the standard library that are written in C and have to make system calls and interact with the filesystem; Rust can't do a whole lot to improve on that. On the other hand, a new project can improve by storing unpacked files in the cache (like uv) instead of just the artifact (I'm storing both; pip stores the artifact, but with a msgpack header) and hard-linking them instead of copying them (so that the system calls do less I/O). It can also improve by actually making the cached data accessible without a network call (pip's cache is an HTTP cache; contacting PyPI tells it what the original download URL is for the file it downloaded, which is then hashed to determine its path).
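A rough sketch of the hard-link-from-cache idea (my own illustration, not pip's or uv's actual code; `cache_dir` and `site_packages` stand in for whatever layout a real tool uses):

```python
# Install files from an unpacked-wheel cache by hard-linking instead of
# copying, falling back to a copy when linking isn't possible (e.g. the cache
# and the environment live on different filesystems).
import os
import shutil
from pathlib import Path

def install_from_cache(cache_dir: Path, site_packages: Path) -> None:
    for src in cache_dir.rglob("*"):
        if src.is_dir():
            continue
        dest = site_packages / src.relative_to(cache_dir)
        dest.parent.mkdir(parents=True, exist_ok=True)
        try:
            os.link(src, dest)       # hard link: no file data is copied
        except OSError:
            shutil.copy2(src, dest)  # cross-device or unsupported: real copy
```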
For another example, pre-compiling bytecode can be parallelized; there's even already code in the standard library for it. Pip hasn't been taking advantage of that all this time, but to my understanding it will soon feature its own logic (like uv does) to assign files to compile to worker processes. But Rust can't really help with the actual logic being parallelized, because that, too, is written purely in C (at least for CPython), within the interpreter.
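The standard-library machinery in question is `compileall`, which already supports worker processes; a minimal illustration (the site-packages path here is hypothetical):

```python
# Pre-compile an installed tree to .pyc in parallel using the standard library.
# workers=0 means "one worker per CPU"; quiet=1 only reports errors.
import compileall

compileall.compile_dir(
    ".venv/lib/python3.12/site-packages",  # hypothetical install location
    workers=0,
    quiet=1,
)
```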
> why did it take a greenfield project to give Python the package manager behavior people clearly wanted for the last decade?
(Zeroth, pip has been doing HTTP range tricks, or at least trying, for quite a while. And the exact point of PEP 658 is to obsolete them. It just doesn't really work for sdists with the current level of metadata expressive power, as in other PEPs like 440 and 508. Which is why we have more PEPs in the pipeline trying to fix that, like 725. And discussions and summaries like https://pypackaging-native.github.io/.)
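To show what PEP 658 (plus the PEP 714 key rename and the PEP 691 JSON index) buys in practice, here is a hedged sketch of fetching just a wheel's METADATA from PyPI without downloading the wheel; the exact keys (`core-metadata`, the `.metadata` URL suffix) reflect my reading of those PEPs:

```python
# Fetch only the METADATA of the first wheel that advertises PEP 658 metadata
# for a project, via PyPI's JSON Simple API (PEP 691). No wheel download needed.
import json
import urllib.request

def fetch_wheel_metadata(project: str) -> str:
    req = urllib.request.Request(
        f"https://pypi.org/simple/{project}/",
        headers={"Accept": "application/vnd.pypi.simple.v1+json"},
    )
    with urllib.request.urlopen(req) as resp:
        files = json.load(resp)["files"]
    for f in files:
        if f["filename"].endswith(".whl") and f.get("core-metadata"):
            # PEP 658: the metadata is served at the file URL plus ".metadata"
            with urllib.request.urlopen(f["url"] + ".metadata") as meta:
                return meta.read().decode()
    raise LookupError(f"no wheel with served metadata for {project!r}")
```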
First, you have to write the standards. People in the community expect interoperability. PEP 518 exists specifically so that people could start working on alternatives to Setuptools as a build backend, and PEP 517 exists so that such alternatives could have the option of providing just the build backend functionality. (But the people making things like Poetry and Hatch had grander ideas anyway.)
But also, consider the alternative: the only other viable way would have been for pip to totally rip apart established code paths and possibly break compatibility. And, well, if you used and talked about Python at any point between 2006 and 2020, you should have the first-hand experience required to complete that thought.
Specifically regarding the "aggressive wheel-first strategy", I strongly encourage you to read the discussion on https://github.com/pypa/pip/issues/9140.
It's not just greenfield-ness but the fact it's a commercial endeavor (even if the code is open-source).
Building a commercial product means you pay money (or something they equally value) to people to do your bidding. You don't have to worry about politics, licensing, and all the usual FOSS-related drama. You pay them to set their opinions aside and build what you want, not what they want (and if that doesn't work, it just means you need to offer more money).
In this case it's a company that believes they can make a "good" package manager they can sell/monetize somehow and so built that "good" package manager. Turns out it's at least good enough that other people now like it too.
This would never work in a FOSS world, because the project would be stuck in endless planning: everyone would have an opinion on how it should be done and nothing would actually get done.
Similar story with systemd: all the bitching you hear about it (to this day!) is the stuff that would've happened during its development phase had it been developed as a typical FOSS project, and that ultimately would have made it go nowhere. Instead it's one guy who just did what he wanted and shared it with the world, and enough other people liked it and started building on it.