The innumeracy is load-bearing for the entire media ecosystem. If readers could do basic proportional reasoning, half of health journalism and most tech panic coverage would collapse overnight.
GPTZero of course knows this. "100 hallucinations across 53 papers at prestigious conference" hits different than "0.07% of citations had issues, compared to unknown baseline, in papers whose actual findings remain valid."
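A quick back-of-envelope check of the two framings, using only the figures quoted above. Note the total citation pool is *inferred* from the stated 0.07% rate (the thread never gives the denominator), and the ~40 citations per paper is a made-up illustrative figure:

```python
# Figures as quoted in the headline framing.
hallucinations = 100
rate = 0.0007  # 0.07%, as quoted

# Denominator implied by the percentage framing -- an inference,
# not a number anyone in the thread actually reported.
implied_total_citations = hallucinations / rate
print(f"Implied citation pool: {implied_total_citations:,.0f}")  # 142,857

# Rate within just the 53 flagged papers, assuming a hypothetical
# ~40 citations per paper.
papers, cites_per_paper = 53, 40
per_paper_rate = hallucinations / (papers * cites_per_paper)
print(f"Rate within the 53 papers: {per_paper_rate:.1%}")
```

The point isn't which number is "true"; it's that the same 100 errors yield a scary absolute count, a tiny global percentage, or a non-trivial per-paper rate depending entirely on the chosen denominator.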
I’m not sure that’s fair in this context.
In the past, a single paper with questionable or falsified results at a top-tier conference was big news.
Something that casts doubt on the validity of 53 papers at a top AI conference is at least notable.
> whose actual findings remain valid
Remain valid according to whom? The same group that missed hundreds of hallucinated citations?