> Even minor papers by the most eminent scientists are cited much more than papers by relatively unknown scientists
I wonder if this is because a citation to such a paper is likely to be taken more seriously than a citation to a lesser-known paper that might actually be more relevant.
There's definitely a "rich get richer" effect for academic papers. A highly cited paper becomes a "landmark paper" that people are more likely to read, and hence cite - but at a certain point it can also become the "safe" default paper to cite in a literature review for a given topic or technique, so out of expediency people may cite it just to cover that base, even if a more relevant paper exists. This applies especially when researchers don't know an area very well, since it's easy to assume a highly cited paper is a relevant one. At least for conferences, there's a deadline and researchers might just copy-paste what they already have in their BibTeX file, and unfortunately the literature review is often an afterthought, at least in my experience in CV/ML.
Another related "rich get richer" effect is that a famous author or institution is a noisy but easy "quality" signal. If a researcher doesn't know much about an area and isn't well equipped to judge a paper on its own merits, they might heuristically assume the paper is relevant or interesting because of the author's or institution's renown. You can see this easily at conferences - posters from well-known authors or institutions pretty much automatically attract far more visitors, even if those visitors have no idea what they're looking at.
It's primarily a status game - people want credibility by association. Erdős numbers and games of that sort carry real weight in academia, and they're part of the underlying dysfunction in peer review. Biases like "I know that name, it must be serious research" and assumptions like "well, if it's based on a Schmidhuber paper, it must be legitimate research" make peer review a psychological and social game rather than a dispassionate, neutral assessment of hypotheses and results.
There's also a monkey-see, monkey-do aspect, where "that's just the way things are properly done" comes into play.
Peer review as it is practiced today is a perfect example of Goodhart's law. It was already common practice in academia, but it wasn't formalized and institutionalized until the late 1960s, and by the 1990s it had become a thoroughly corrupted and gamed system. Journals and academic institutions created byzantine practices and rules, and, just as with SEO, people became incentivized to hack those rules without honoring the underlying intent.
Now research across all fields routinely meets every technical criterion for publication, yet significant double-digit percentages of it - up to half in some fields - cannot be reproduced, and there's a whole lot of outright fraud used to swindle research dollars and grants.
Informal, good-faith communication seemed to be working just fine - as soon as referees and journals got a profit incentive, things started going haywire.