Hacker News

aardvark92 · yesterday at 2:18 AM

Saw the same thing firsthand with pathology data. Image analysis is a far more straightforward problem than fMRI, but sorry, I don’t trust your AI model that matches our pathologists’ scoring with 98.5% accuracy. Our pathologists are literally guesstimating these numbers and can vary by 10-20% just based on the phase of the moon, whether the pathologist has eaten lunch yet, what slides he looked at earlier that day… and that’s not even accounting for inter-pathologist variation…
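The noise ceiling is easy to see in a toy simulation: if each pathologist's score is the true value plus a ~10-point guesstimate error, two pathologists agree with each other far less than 98.5% of the time, so a model that "matches" one rater that closely is mostly reproducing that rater's noise. All numbers here (the 10-point spread, the 5-point agreement tolerance) are illustrative assumptions, not real pathology data:

```python
import numpy as np

rng = np.random.default_rng(1)
true = rng.uniform(0, 100, size=500)  # hypothetical "true" tumor % per slide

# Two pathologists, each guesstimating with ~10-point noise (assumed spread)
rater_a = true + rng.normal(0, 10, size=500)
rater_b = true + rng.normal(0, 10, size=500)

# Call two scores "in agreement" if they land within 5 points of each other
inter_rater = np.mean(np.abs(rater_a - rater_b) < 5)
print(f"pathologist-vs-pathologist agreement: {inter_rater:.0%}")

# Under these assumptions, a model agreeing with rater A 98.5% of the time
# isn't tracking the true score; it has learned rater A's noise.
```

The exact percentage depends on the assumed noise and tolerance, but the point stands for any choice: inter-rater agreement bounds how much model-vs-rater agreement can possibly mean.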

Also saw this IRL with a particular NGS diagnostic. The model was initially 99% accurate; the P.I. smelled BS and had the grad student crunch the numbers again, getting 96% accuracy. They published it and built a company around the product -> boom, two years later it was retracted because the data was largely amplified noise: spurious hits, overfitting.
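That failure mode is reproducible in a few lines: with far more features than samples (typical of NGS panels), a linear model can fit completely random labels perfectly, so headline accuracy on the training data tells you nothing. A minimal numpy sketch; the sample/feature counts and the least-squares classifier are illustrative, not the retracted model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 1000  # few samples, many features: the classic NGS-style regime
X = rng.normal(size=(n, p))      # pure-noise "measurements"
y = rng.integers(0, 2, size=n)   # random labels: no real signal to find

# Fit a linear classifier by minimum-norm least squares on +/-1 targets
w = np.linalg.pinv(X) @ (2 * y - 1)
train_acc = np.mean((X @ w > 0) == (y == 1))

# Fresh noise drawn the same way: true accuracy can only be chance level
X_test = rng.normal(size=(n, p))
y_test = rng.integers(0, 2, size=n)
test_acc = np.mean((X_test @ w > 0) == (y_test == 1))

print(f"train accuracy: {train_acc:.2f}")  # near-perfect, looks publishable
print(f"test accuracy:  {test_acc:.2f}")   # chance level: it learned nothing
```

With p > n the model interpolates the training labels exactly, which is why only held-out (ideally independently collected) data can distinguish signal from amplified noise.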

I don’t know jack compared to the average HN contributor, but even I can smell the BS from a mile away in some of these biomedical AI models. Peer review is broken for highly-interdisciplinary research like this.