The problem with an endorsement scheme is citation rings, i.e. groups of people who artificially inflate the perceived value of some line of work by citing each other. This is a problem even now, but it is kept in check by the fact that authors do not usually have any control over who reviews their paper. Indeed, in my area, reviews are double blind, and despite claims that "you can tell who wrote this anyway," research done by several chairs in our SIG suggests that this is very much not the case.
Fundamentally, we want research that offers something new (“what did we learn?”) and presents it in a way that at least plausibly has a chance of becoming generalizable knowledge. You call it gate-keeping, but I call it keeping published science high-quality.
I would have thought that those participants who are published in peer-reviewed journals could be used as a trust anchor - see the Advogato algorithm for an example of a somewhat bad-faith-resistant metric for this purpose: https://web.archive.org/web/20170628063224/http://www.advoga...
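For context, Advogato treats the community as a graph: members are nodes, endorsements are edges, and trust flows outward from a few seed accounts, with each account only able to vouch for a limited number of others at each distance from the seed. The attack resistance comes from the fact that a colluding ring can only capture as much trust as the edges pointing *into* it carry, no matter how densely its members endorse each other. A minimal sketch of that idea (a simplification, not the real Advogato implementation, which uses max-flow with node capacities; all names and capacity values here are made up):

```python
from collections import deque

def accepted(edges, seeds, level_capacity):
    """Simplified Advogato-style trust propagation (illustrative only).

    edges: dict node -> list of endorsed nodes
    seeds: trust-anchor accounts (e.g. authors with peer-reviewed papers)
    level_capacity: how many accounts a node may vouch for, indexed by
                    its BFS distance from the seeds (capacity shrinks
                    with distance in the real metric)
    """
    trusted = set(seeds)
    queue = deque((s, 0) for s in seeds)
    while queue:
        node, depth = queue.popleft()
        if depth >= len(level_capacity):
            continue  # too far from any seed to vouch for anyone
        cap = level_capacity[depth]
        for peer in edges.get(node, [])[:cap]:
            if peer not in trusted:
                trusted.add(peer)
                queue.append((peer, depth + 1))
    return trusted

graph = {
    "seed": ["alice", "bob"],
    "alice": ["carol"],
    # a ring: dave, eve and frank all endorse each other,
    # but only carol links into the ring from outside
    "carol": ["dave"],
    "dave": ["eve"], "eve": ["frank"], "frank": ["dave"],
}
print(accepted(graph, {"seed"}, level_capacity=[2, 1, 1]))
# dave gets in through carol's single endorsement, but the
# ring's internal endorsements buy eve and frank nothing
```

The point of the example: the ring's mutual endorsements are worthless because trust can only enter it through the one legitimate inbound edge.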
But if you have a citation ring and one of its papers is exposed as fraudulent, it reflects extremely badly on everyone who endorsed it. So taking part in such a ring is a bad strategy, game-theoretically speaking.
But you can choose not to trust people who are part of citation rings.