I agree. OpenReview is a good initiative, and while it has its own flaws, it's definitely a step in the right direction.
The arXiv and the derivative preprint repositories (e.g., bioRxiv) are other good initiatives.
However, I don't think it's enough to leave content review completely to the community. There are known issues with researchers using arXiv, for example, to stake claims on novelty, or with readers jumping on claims made in preprints from well-known institutions, which may turn out to be overconfident or bogus.
I believe that a number of checks (beyond plagiarism) need to happen before a paper is endorsed by a journal or a conference. Some of these can and should be done in a peer review-like format, but that process needs to be heavily redesigned to support review quality without sacrificing speed. There are also things we already have good tools for (e.g., checking citation formatting), so that part should be integrated into the pipeline.
Plus, time may be one of the bottlenecks, but that's partly because publishers take money from academic institutions yet expect reviewing to be voluntary service. There's no reason for this asymmetry, IMO.
A problem that the current system actually perpetuates. When authors plagiarize, the papers get silently desk-rejected. Other researchers never learn of this and so cannot take extra precautions with other works by those authors. IMO, fraud is one of the greatest sins you can commit in science. Science depends heavily on trust in authors (even more so because our so-called peer-review system places all the emphasis on novelty and completely dismisses replication).
The truth is that no reviewer can validate claims by reading a paper. I can tell you I can't do that even for papers in my direct niche. But what a reviewer can do is invalidate. We need to be clear about that difference and the bias it introduces, because we should never read a paper as "this is the truth" but as "this is likely true under these specific conditions". Those are very different things.
I agree that checking is better, but I don't believe it's absolutely necessary. The bigger problem I have right now is that we are publishing so much that it is difficult to get a reviewer who is a niche or sub-domain expert. More generalist reviewers can't properly interpret papers. It is too easy to over-generalize results and assume a paper is just doing the same thing as another work (I've seen this way too often), or to dismiss something as too incremental (almost everything is incremental... and it is going to stay that way as long as we have a publish-or-perish system). BUT the people who are niche experts will tend to find the papers anyway, because they are actively seeking them out.
But what I think still needs to be solved is the search problem. It's getting harder, and frankly we shouldn't force scientists to also be marketers. It's a waste of time and creates perverse incentives, as you've mentioned.
And the government. Honestly, I hate how shady this shit is. I understand conferences, where there's a physical event, but paid-access journals are a fucking scam (I'd be okay with a small fee for server costs and such, but judging from arXiv and OpenReview, I suspect that isn't very costly). They are double dipping: taking money from governments and from academics paying for access, while the literal "product" they are selling is given to them for free and the "quality control" of that "product" is also done for free. And by "for free" I mean on the dime of academic institutions and government tax dollars.