That's not YAGNI backfiring.
The point of YAGNI is that you shouldn't over-engineer up front until you've proven that you need the added complexity.
If you need vector search against 100,000 vectors and you already have PostgreSQL, then pgvector is a great YAGNI solution.
10 million vectors that are changing constantly? Do a bit more research into alternative solutions.
But don't go integrating a separate vector database for 100,000 vectors on the assumption that you'll need it later.
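For a sense of scale, the pgvector version of this is genuinely small. Here's a minimal sketch, assuming PostgreSQL with the pgvector extension installed and psycopg2 available; the table and column names (items, category, feature, embedding) are made up for illustration:

```python
import psycopg2

conn = psycopg2.connect("dbname=app")  # assumed connection string
cur = conn.cursor()

# One-time setup: enable the extension and add a vector column.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
cur.execute("""
    CREATE TABLE IF NOT EXISTS items (
        id        bigserial PRIMARY KEY,
        category  text,
        feature   text,
        embedding vector(384)  -- dimension depends on your embedding model
    )
""")
conn.commit()

# Plain nearest-neighbour query: at ~100,000 rows even an exact
# sequential scan over the vectors answers this quickly.
query_vec = "[" + ",".join("0.0" for _ in range(384)) + "]"  # placeholder embedding
cur.execute(
    "SELECT id FROM items ORDER BY embedding <=> %s::vector LIMIT 5",
    (query_vec,),
)
print(cur.fetchall())
```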
I think the tricky thing here is that the specific things I referred to (real-time writes and pushing SQL predicates into your similarity search) work fine at small scale, in such a way that you might not notice they're going to stop working at scale. When you have 100,000 vectors, you can write these SQL predicates (return the top 5 hits where category = x and feature = y) and they'll work fine, right up until the day they don't, because the vector space has gotten large.

So I suppose it's fair to say this isn't YAGNI backfiring; it's me not recognizing the shape of the problem to come, and not recognizing that I do, in fact, need it. (To me that still feels a lot like YAGNI backfiring, because I didn't think I needed it, and then suddenly I did.)
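To make that concrete, the pattern I'm describing looks roughly like the sketch below. It's not my actual code: the table and column names are hypothetical (same shape as the sketch above), and it assumes PostgreSQL with pgvector and psycopg2:

```python
import psycopg2

conn = psycopg2.connect("dbname=app")  # assumed connection string
cur = conn.cursor()

query_vec = "[" + ",".join("0.0" for _ in range(384)) + "]"  # placeholder embedding

# "Return the top 5 hits where category = x and feature = y":
cur.execute(
    """
    SELECT id
    FROM items
    WHERE category = %s AND feature = %s
    ORDER BY embedding <=> %s::vector   -- cosine distance
    LIMIT 5
    """,
    ("x", "y", query_vec),
)
print(cur.fetchall())

# At 100,000 rows an exact scan answers this correctly and quickly. Once the
# table is big enough that you add an approximate index, e.g.
#   CREATE INDEX ON items USING hnsw (embedding vector_cosine_ops);
# the WHERE clause can end up being applied *after* the approximate index
# scan, so a selective predicate may come back with fewer than 5 rows or get
# slow -- the "works fine until it doesn't" behaviour described above.
```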