There are two fallacious arguments encoded here. The first is obvious: that we should prioritize hypothetical future vulnerabilities and fixes over the ones we know exist today. The second is subtler and more insidious: it's the idea that the goal of software is to ensure every package and project remains viable, that everyone who wants to deploy it should be able to do so. The risks this attitude poses to users, ordinary people who have no agency over which software packages you use to serve their needs, are a pure externality. The idea that a project serving real human users might opt to compromise availability rather than put people at risk is never even broached.