This is a good point.
However, the same point could also apply to tenure decisions and academic hiring... in other words, it is difficult or impossible to tell who is the most promising young researcher. Therefore, rather than doing real assessments (and being OK with getting it wrong sometimes), we use citations, journal impact factor, and h-index as proxies. These proxies are alluring because they feel quantitative and objective, and they somewhat remove blame from decision makers. However, as has often been pointed out, these metrics would have passed over some of the most prominent researchers in science (whose names we all know) and very possibly would have relegated them to obscurity.
And along these lines, I would also point out that it is likely far easier to judge whether an idea is promising than whether a person is, especially a person who may not yet have had an opportunity to shine. In the case of a person, past performance is certainly not always indicative of future results. And this is especially skewed for a person without prior strong performance who may nonetheless be fully capable of great work in the right environment. In the case of ideas, however, one can at least assess whether an idea makes sense given the laws of nature, and one has some ability to assess its potential impact.
Now, none of that is to say that assessing promising ideas is easy. It is really hard, and the point is well taken: I don't think there is an easy solution here, because assessment is difficult, yet we have to do some kind of assessment.
All I'm saying is that we are currently missing a lot of good people/ideas. Although, to be fair, it could also be the case that modern academia is missing fewer good people/ideas than ever before; it is difficult to know.