Hacker News

mnw21cam · yesterday at 5:17 PM · 3 replies

A while back I wrote a piece of (academic) software. A couple of years ago I was asked to review a paper prior to publication. It described a piece of software that did the same-ish thing as mine, benchmarked against a set of older tools, including mine, and of course the authors found that theirs was the best. However, their testing methodology was fundamentally flawed, not least because there is no "true" answer that the software's output can be compared to. So they had used a different process to produce a "truth", then trained their software (machine learning, of course) to produce results matching this very flawed "truth". Naturally their software came out best, because it was the one producing results closest to that "truth", while the other software may well have been closer to the actual truth.
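
To make the circularity concrete, here's a toy sketch (entirely made up; the numbers and setup are hypothetical, not from the paper in question). A new tool trained to fit a flawed proxy "truth" will always win a benchmark scored against that same proxy, even when an older tool is closer to reality:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # The real answer, which in this domain is unobservable in practice.
    actual_truth = rng.normal(size=n)
    # The "truth" produced by a different (flawed) process: biased and noisy.
    flawed_proxy = actual_truth + rng.normal(scale=2.0, size=n)

    # An older tool that tracks reality reasonably well.
    old_tool = actual_truth + rng.normal(scale=0.5, size=n)
    # The new tool, trained to reproduce the proxy, so it sits close to it.
    new_tool = flawed_proxy + rng.normal(scale=0.1, size=n)

    def rmse(a, b):
        return float(np.sqrt(np.mean((a - b) ** 2)))

    # Benchmarked against the proxy, the new tool "wins"...
    print(rmse(new_tool, flawed_proxy))   # ~0.1
    print(rmse(old_tool, flawed_proxy))   # ~2.1
    # ...but against the actual truth, the old tool is far closer.
    print(rmse(old_tool, actual_truth))   # ~0.5
    print(rmse(new_tool, actual_truth))   # ~2.0

The ordering of the two tools flips depending on which "truth" you score against, which is exactly why the paper's benchmark proved nothing.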

I recommended that the journal not publish the paper, and gave them a list of improvements to pass on to the authors, to be made before resubmission. The journal agreed with me and rejected the paper.

A couple of months later, I saw it had been published unchanged in a different journal. It wasn't even a lower-quality journal; if I recall correctly, the impact factor was actually higher than the original journal's.

I despair of the scientific process.


Replies

timr · yesterday at 5:53 PM

If it makes you feel any better, the problem you're describing is as old as peer review. The authors of a paper only have to get it accepted once, and they have far more incentive to do so than an editor or reviewer has to reject it.

This is one of the reasons you should never take a single publication at face value. But this isn't a bug; it's part of the algorithm. It's just that most muggles don't know how science actually works. Once you've read enough papers in an area, you have a good sense of what falls within the normal distribution of knowledge, and if some flashy new result comes over the transom, you might be curious, but you're not going to accept it without a lot more evidence.

This situation is different: it's a case where an extremely popular bit of accepted wisdom is wrong, and the system itself appears unwilling to acknowledge the error.

BLKNSLVR · yesterday at 8:48 PM

It seems that the failure mode of the scientific process is 'profit'.

Schools should be using these kinds of examples to teach critical thinking. Unfortunately, the other side of the lesson is how easy it is to push an agenda when you've got a little bit of private backing.

a123b456c · yesterday at 7:28 PM

Many people do not know that the Impact Factor is gameable, and unethical publishers have gamed it, so a higher IF may or may not indicate higher prominence. Use the Scimago journal rankings instead; they are much harder to game.
