this seems strange to me, shouldn’t we expect a high quality journal to retract often as we gather more information?
obviously this is hyperbole of two extremes, but i certainly trust a journal far more if it actively and loudly looks to correct mistakes over one that never corrects anything or buries its retractions.
a rather important piece of science is correcting mistakes by gathering and testing new information. we should absolutely be applauding when a journal loudly and proactively says “oh, it turns out we were wrong when we declared burying a chestnut under the oak tree on the third thursday of a full moon would cure your brother’s infected toenail.”
> shouldn’t we expect a high quality journal to retract often as we gather more information?
This is complicated, and kinda sad tbh. But no. You need to carefully think about what "high quality journal" means. Typically it's based on something called Impact Factor[0], which is judged by the number of citations a journal's articles received in the last 2 years. It sounds good on paper, but if you think about it for a second you'll notice there's a positive feedback loop. There's also no incentive for it to actually be correct.
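For concreteness, the two-year Impact Factor is just a ratio of citations to citable items. A minimal sketch (the journal numbers below are made up for illustration):

```python
def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year Impact Factor: citations received this year to the
    journal's items from the previous two years, divided by the number
    of citable items it published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 800 citations in 2024 to its 2022-2023 papers,
# of which there were 200 citable items.
print(impact_factor(800, 200))  # 4.0
```

Note the feedback loop: a higher score attracts more submissions and more readers, which generates more citations, which raises the score, with no term anywhere in the formula that measures whether the cited work was correct.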
For example, a false paper can often get cited far more than a true paper. This is because when you write the academic version of "XYZ is a fucking idiot, and here's why" you cite their paper. It's good to put their bullshit down, but it can also just end up being Streisand effect-like. The journal is happy with its citations, both papers were published in it, so it benefits from both directions. You keep the bad paper up for the record, and because as long as the authors were actually acting in good faith, you don't actually want to take it down. The problem is... how do you know?
Another weird factor used is Acceptance Rates. This again sounds nice at first. You don't want a journal publishing just anything, right?[1] The problem comes when these actually become targets (which they are). Many of the ML conferences target about a 25% acceptance rate[2]. It fluctuates year to year. It should, right? Some years are just better science than other years. A good paper hits that changes things, and the next year should see a boom! But that's not the level of fluctuation we're talking about. If you look at the actual number of papers accepted in that repo, you'll see a disproportionate number of accepted-paper counts ending in a 0 or 5. Then you see the 1 and 6, which is a paper being squeezed in, often for political reasons. Here, I did the first 2 tables for you. You'll see a very disproportionate number ending in 1 and 6, and CV loves 0, 1, 3. These numbers should convince you that this is not a random process, though they should not convince you it is all funny business (much harder to prove). But it is at least enough to be suspicious and encourage you to dig in more.
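The check above is easy to run yourself. A sketch of the last-digit tally (the counts below are hypothetical, not real conference data; pull actual numbers from the repo in [2]):

```python
from collections import Counter

def last_digit_distribution(accepted_counts):
    """Tally the final digit of each accepted-paper total. If acceptance
    were driven purely by paper quality, last digits should be roughly
    uniform; a pile-up on 0 and 5 suggests totals chosen to hit a target
    acceptance rate."""
    return Counter(n % 10 for n in accepted_counts)

# Hypothetical accepted-paper totals, for illustration only.
counts = [500, 1250, 680, 955, 720, 1000, 845, 760]
print(last_digit_distribution(counts))  # heavy on 0 and 5
```

Under a uniform assumption each digit should appear about 10% of the time, so a strong skew toward round numbers is at least worth a second look, even if it doesn't prove intent.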
There's a lot that's fucked up about the publishing system and academia. Lots of politics, lots of restricted research directions, lots of stupid. But also don't confuse this for people acting in bad faith or lying. Sure, that happens. But most people are trying to do good, and very few people in academia are blatantly publishing bullshit. It's just that everything gets political. And by political I don't mean government politics, I mean the same bullshit office politics. We're not immune from that bullshit and it happens for exactly the same reasons. It just gets messier, because if you think it is hard to measure the output of an employee, try to measure the output of people whose entire job is to create things that no one has ever thought of before. It's sure going to look like they're doing a whole lot of nothing.
So I'll just leave you with this (it'll explain [1])
As a working scientist, Mervin Kelly (Director of Bell Labs, 1925-1959) understood the golden rule,
"How do you manage genius? You don't."
https://1517.substack.com/p/why-bell-labs-worked
There's more complexity, like how we aren't good at pushing out frauds and stuff, but if you want that I'll save it for another comment.

[0] https://en.wikipedia.org/wiki/Impact_factor
[1] Actually I do. As long as it isn't obviously wrong, plagiarized, or falsified, then I want that published. You did work, you communicated it, now I want it to get out into the public so that it can be peer reviewed. I don't mean a journal's laughable version of peer review (3-4 unpaid people who don't study your niche, are more concerned with whether it is "novel" or "impactful", and are quickly reading your paper as one of 4 on their desk they need to get through this week; it's incredibly subjective, and high impact papers (like Nobel Prize winning papers) routinely get rejected). Peer review is the process of other researchers replicating your work, building on it, and/or countering it. Those are just new papers...
[2] https://github.com/lixin4ever/Conference-Acceptance-Rate
I think it might come down to understanding what a "high quality" journal even is. Maybe such a journal should focus on much more proven and mature results. There would be a lot fewer retractions, because the information is more mature and thus better proven.
But I think the problem is that "high quality" is seen as "high impact", which means prestige and visibility are what matter. That likely lowers the threshold quite a lot, since being first to publish something possibly valid is seen as important.