>> Ignoring requires-python upper bounds. When a package says it requires python<4.0, uv ignores the upper bound and only checks the lower. This reduces resolver backtracking dramatically since upper bounds are almost always wrong. Packages declare python<4.0 because they haven't tested on Python 4, not because they'll actually break. The constraint is defensive, not predictive.

Man, it's easy to be fast when you're wrong. But of course it's fast because it's written in Rust, not because it skips the hard parts of dependency constraint solving and hopes nobody notices.
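To make the quoted behavior concrete, here's a toy model of "only checks the lower bound" using the `packaging` library. The `lower_bounds_only` helper is my own illustrative stand-in, not anything from uv's code:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# What a package's requires-python metadata might declare.
declared = SpecifierSet(">=3.8,<4.0")

def lower_bounds_only(spec: SpecifierSet) -> SpecifierSet:
    """Illustrative helper: keep only the >= / > clauses, dropping upper bounds."""
    return SpecifierSet(",".join(str(s) for s in spec if s.operator in (">=", ">")))

future = Version("4.1")
print(declared.contains(future))                     # False: rejected by <4.0
print(lower_bounds_only(declared).contains(future))  # True: only >=3.8 is checked
```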
Version selection with both lower and upper bounds is NP-complete, but it becomes tractable once you drop the upper bounds. Russ Cox worked through this in 2016 in his "Version SAT" blog post (https://research.swtch.com/version-sat), showing that general dependency constraints can encode boolean satisfiability, and that research informed Go's Minimal Version Selection (https://research.swtch.com/vgo-mvs) for modules.
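A toy illustration of why the upper bounds are what make the search hard: with only `>=` constraints, "pick the newest version of everything" always works, while a single `<` can reject that choice and force the resolver to backtrack. The package and version numbers here are made up for the example:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Made-up index of available versions of package A.
available_A = [Version("1.0"), Version("2.0")]

# Two dependents constrain A. With only lower bounds, "newest of everything"
# is always a valid answer; the single <2.0 is what forces a search.
constraints_on_A = [SpecifierSet(">=1.0"), SpecifierSet(">=1.0,<2.0")]

newest = max(available_A)
print(all(spec.contains(newest) for spec in constraints_on_A))          # False: <2.0 rejects 2.0
print(all(spec.contains(Version("1.0")) for spec in constraints_on_A))  # True: found after backtracking
```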
It appears to me that uv is walking the same path. If most developers don't care about upper bounds, and ignoring them avoids a resolution that can backtrack for an effectively unbounded time, then dropping upper-bound support is a reasonable trade. And if uv becomes popular, it'll be a sign that the Python ecosystem as a whole may eventually drop package version upper bounds.
Perhaps so, although I'm more algorithmically optimistic. If ignoring upper bounds makes the problem tractable, you can (sketched in code after this list):
1. solve dependency constraints as if upper bounds were absent,
2. check that your solution actually satisfies all the constraints, upper bounds included (an O(N) check that passes almost every time), and then
3. only if that check fails, fall back to the slower but complete resolver.
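A minimal sketch of that fast-path-plus-verification idea, assuming you already have two resolvers to hand. The `fast_resolve` and `full_resolve` callables are hypothetical stand-ins, not uv's or pip's API:

```python
from typing import Callable, Mapping
from packaging.specifiers import SpecifierSet
from packaging.version import Version

Resolution = Mapping[str, Version]        # package name -> chosen version
Constraints = Mapping[str, SpecifierSet]  # package name -> full specifier set, upper bounds included

def satisfies_all(solution: Resolution, constraints: Constraints) -> bool:
    """Step 2: O(N) check that every chosen version sits inside its full specifier set."""
    return all(constraints[name].contains(version) for name, version in solution.items())

def resolve(
    constraints: Constraints,
    fast_resolve: Callable[[Constraints], Resolution],  # hypothetical: resolves ignoring upper bounds
    full_resolve: Callable[[Constraints], Resolution],  # hypothetical: honors every bound
) -> Resolution:
    candidate = fast_resolve(constraints)      # 1. solve as if upper bounds were absent
    if satisfies_all(candidate, constraints):  # 2. cheap check against the real constraints
        return candidate
    return full_resolve(constraints)           # 3. only on failure, pay for the complete resolver
```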
This approach would be clever, efficient, and correct. What you don't get to do is just ignore the fucking rules to which another system studiously adheres, and then claim you're faster than that system.
That's called cheating.