
grumbelbart2 today at 10:24 AM

This goes deeper than the institutions, actually. The KPI for many (non-industrial) researchers is the number of publications and citations. That's what careers and funding depend on.

Goodhart's law states "When a measure becomes a target, it ceases to be a good measure", and that's what we see here. There is a strong incentive to publish more instead of better. Ideas are spread across multiple papers, people push to be listed as authors, citations are fought over, and some turn dishonest and resort to citation cartels, "hidden" citations in papers (printed in small white-on-white text, so they are indexed by citation crawlers but invisible to reviewers), and so forth.

This also destroys the peer review system upon which many venues depend. Peer review was never meant to catch cheaters. The huge number of low-to-medium quality papers in some fields (ML, CV) overloads reviewers, leading to measures like CVPR forcing authors to serve as reviewers or face desk rejection. AI-generated papers and AI reviews of dubious quality add even more strain.

Ultimately the only real fix is to remove the incentives: funding and careers should no longer depend on the sheer number of papers and citations. The problem is that we have not yet found anything better.


Replies

ahartmetz today at 1:11 PM

As for an alternative, how about using the social fabric of researchers and institutes instead? A few centuries of science ran on it before somebody had the great idea of introducing "objective" metrics, which made things worse. Reintroducing it today would probably cause a larger spread in the quality of research, which is good: research is something of a "hit-driven industry" - higher highs matter most. The best researchers will do the best research, probably better without carrot and stick than with.

BrenBarn today at 11:00 AM

What you describe is still a problem with the institutions, because it is ultimately the institutions that provide the incentives (in the form of jobs). You're right that the metrics are bad, but it is the institutions making bad decisions based on those bad metrics.

There are plenty of better approaches, like making hiring and firing decisions based on an evaluation of the content of papers committee members have actually read, instead of just a number. If someone is publishing so many papers that a hiring committee can't read even a meaningful fraction of them, that should be a red flag in itself, not a green one.

khafra today at 10:49 AM

There are imperfect ways to work with goodhartable metrics. https://www.lesswrong.com/posts/fuSaKr6t6Zuh6GKaQ/when-is-go... discusses some of them (in the context of when they go bad).

newsclues today at 11:33 AM

The incentive to disprove bad science ought to be greater.
