Hacker News

SeanLuke · yesterday at 3:31 PM · 10 replies

I developed and maintain a large and very widely used open source agent-based modeling toolkit. It's designed to be highly efficient: that's its calling card. But it's old: I released its first version around 2003 and have been updating it ever since.

Recently I was made aware by colleagues of a publication by the authors of a new agent-based modeling toolkit in a different, hipper programming language. They compared their system to others, including mine, and made kind of a big checklist of who's better at what, and, no surprise, theirs came out on top. But digging deeper, it quickly became clear that they didn't understand how to run my software correctly; in many other places they bent over backwards to cherry-pick, and they made a lot of bold and completely wrong claims. Correcting the record would place their software far below mine.

Mind you, I'm VERY happy to see newer toolkits which are better than mine -- I wrote this thing over 20 years ago, after all, and have since moved on. But several colleagues demanded I correct the record, so I raised the issue with the journal. After a lot of back-and-forth, however, it became clear that the journal's editor was too embarrassed and didn't want to require a retraction or revision. And the authors kept coming up with excuses for their errors. So the journal quietly dropped the complaint.

I'm afraid that this is very common.


Replies

mnw21cam · yesterday at 5:17 PM

A while back I wrote a piece of (academic) software. A couple of years ago I was asked to review a paper prior to publication. It was about a piece of software that did the same-ish thing as mine, and they had benchmarked it against a set of older software, including mine, and of course found that theirs was the best. However, their testing methodology was fundamentally flawed, not least because there is no "true" answer that the software's output can be compared to. So they had used a different process to produce a "truth", then trained their software (machine learning, of course) to produce results matching this (very flawed) "truth", and then of course their software came out best, because it was the one that produced results closest to that "truth" -- whereas the other software might have been closer to the actual truth.

I recommended that the journal not publish the paper, and gave them a good list of improvements to pass on to the authors before re-submission. The journal agreed with me and rejected the paper.

A couple of months later, I saw it had been published unchanged in a different journal. It wasn't even a lower-quality journal; if I recall correctly, the impact factor was actually higher than that of the original one.

I despair of the scientific process.

bargle0 · yesterday at 3:50 PM

If you’re the same Sean Luke I’m thinking of:

I was an undergraduate at the University of Maryland when you were a graduate student there in the mid-nineties. A lot of what you had to say shaped the way I think about computer science. Thank you.

consp · yesterday at 8:52 PM

This reminds me of a former colleague who asked me to check some code from a study (I did not know it had already been published); I told him I hoped he had not written it, since it likely produced the wrong results. They claimed some process was too complicated to run because it was worse than O(2^n) in complexity, so they made a major simplification of the problem and took that as the truth in their answer. In the end, the original algorithm was just quadratic, not worse; the data set was easily doable in minutes (not days, as claimed); and the end result did not support their conclusions one tiny bit.

Our conclusion was never to trust psychology majors with computer code. As with any other field of expertise, they should at the very least have shown their idea and/or code to some CS majors before publishing.

oawiejrlij · yesterday at 5:49 PM

When I was a grad student I contacted a journal to tell them my PI had falsified their data. The journal never responded. I also contacted my university's legal department. They invited me in for an hour, said they would talk to me again soon, and never spoke to me or responded to my calls again after that. This was in a Top-10-in-the-USA CS program. I have close to zero trust in academia. This is why we have a "reproducibility crisis".

trogdor · yesterday at 5:34 PM

> it became clear that the journal's editor was too embarrassed

How sad. Admitting and correcting a mistake may feel difficult, but it makes you credible.

As a reader, I would have much greater trust in a journal that solicited criticism and readily published corrections and retractions when warranted.

orochimaaru · yesterday at 8:33 PM

I think the publish-or-perish academic culture makes the field extremely susceptible to glossing over things like this, especially for statistical analysis. Sharing data, algorithms, code, and methods for scientific publications will help. For papers above a certain citation count, which makes them seem "significant", I'm hoping Google Scholar can provide an annotation of whether the paper is reproducible and to what degree. While it won't prevent situations like the one the author is talking about, it may force journal editors to take rebuttals and revisions more seriously.

From the perspective of the academic community, there will be less incentive to publish incorrect results if data and code are shared.

ameligrana · yesterday at 9:26 PM

I'll take the occasion to say that I helped make/rewrite a comparison between various agent-based modelling frameworks at https://github.com/JuliaDynamics/ABMFrameworksComparison. I'm not sure it represents all of them fairly enough, but if anyone wants to chime in to improve the code of any of the frameworks involved, I would be really happy to accept improvements.

contrarian1234 · yesterday at 11:22 PM

Maybe naive, but isn't this what "comments" in journals are for?

They're usually published with a response from the authors.

cannonpalms · yesterday at 7:41 PM

Is this the kind of thing that retractions are typically issued for, or would it simply be your responsibility to submit a new paper correcting the record? I don't know how these things work. Thanks.
