I’m not sure that’s fair in this context.
In the past, a single paper with questionable or falsified results at a top-tier conference was big news.
Something that casts doubt on the validity of 53 papers at a top AI conference is at least notable.
> whose actual findings remain valid
Remain valid according to whom? The same group that missed hundreds of hallucinated citations?
Which of these papers actually had falsified results, as opposed to merely bad citations?
What is the base rate of bad citations pre-AI?
And finally, yes. Peer review does not mean clicking every link in the footnotes to make sure the citations actually point to the papers they claim to, though I'm sure after this brouhaha that too will be automated.