It's a status game, primarily: they want credibility by association. Erdős numbers and games of that sort carry real weight in academia, and they're part of the underlying dysfunction in peer review. Biases like "I know that name, it must be serious research" and assumptions like "well, if it's based on a Schmidhuber paper, it must be legitimate" make peer review a psychological and social game rather than a dispassionate, neutral assessment of hypotheses and results.
There's also a monkey-see, monkey-do aspect, where "that's just the way things are properly done" comes into play.
Peer review as practiced is a perfect example of Goodhart's law. It was a common practice in academia, but it wasn't formalized and institutionalized until the late 60s, and by the 90s it had become a thoroughly corrupted and gamed system. Journals and academic institutions created byzantine rules and practices, and, just as with SEO, people became incentivized to hack those rules without honoring the underlying intent.
Now a significant double-digit percentage of research across all fields meets every technical criterion for publication yet cannot be reproduced (up to half in some fields), and there's a whole lot of outright fraud used to swindle research dollars and grants.
Informal, good-faith communication seemed to be working just fine; as soon as referees and journals got a profit incentive, things went haywire.
I'm sure status is part of it, but I think it's almost certainly driven by "availability," in the availability-heuristic sense.
Big names give more talks in more places, and people follow their output specifically (e.g., via author-based alerts on PubMed or Google Scholar), so people are simply more aware of their work. There are often many papers one could cite to make the same point, and people tend to go with the ones they've already got in mind...