94.5% is actually terrible.
If you have a prevalence of 10 in 1000, how do the numbers shake out?
Well, you test all 1,000. Assume 95% accuracy both ways, i.e. a 5% false-positive rate and a 5% false-negative rate.
Of the 990 you test who don't have the disease, the test will falsely flag about 50 as having it. Yikes!
And of the 10 who do have the disease? Odds are you'll miss one of them.
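To make the arithmetic above concrete, here's a quick sketch (using the same assumed 95%/95% test and 10-in-1,000 prevalence) of what a positive result actually tells you:

```python
# Base-rate arithmetic for the example above: 95% sensitivity,
# 95% specificity, prevalence of 10 in 1,000 (all assumed numbers).
population = 1_000
sick = 10
healthy = population - sick

sensitivity = 0.95  # P(test positive | sick)
specificity = 0.95  # P(test negative | healthy)

true_positives = sick * sensitivity              # 9.5 expected
false_positives = healthy * (1 - specificity)    # 49.5 expected
missed = sick * (1 - sensitivity)                # 0.5 expected

# Positive predictive value: chance you're actually sick given a positive test.
ppv = true_positives / (true_positives + false_positives)
print(f"false positives: {false_positives:.1f}")
print(f"PPV: {ppv:.1%}")
```

The false positives swamp the true positives, so a positive result here only means roughly a 16% chance of actually having the disease.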
I've got bad news about the specificity for most tests this serious. I think the only area we absolutely nail is infectious disease detection.
Spoilers: false-positive rates run anywhere from 1-15% up to 5-30%, and false-negative rates from 1-15% up to 5-40%. That's imaging, biomarkers, cancer screenings, etc.
Like, where do you think the concept of "second opinions" came from? Whimsy? Let's go ask a second doctor if I actually have cancer, it'll be fun!
> 94.5% is actually terrible.
This statement is quite broad and misses several important factors.
First of all, there's a test's sensitivity versus its specificity. The math in your example assumes a balanced test, but on what basis? The math comes out quite differently for high-sensitivity or high-specificity tests. (Unfortunately, I could not find the numbers for the test in the linked article.)
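To illustrate how much the asymmetry matters, compare a balanced test against a high-specificity one at the same 1% prevalence (both test profiles below are made-up numbers, since the article doesn't give them):

```python
# Compare predictive values for two hypothetical test profiles
# at the same 1% prevalence. Profiles are illustrative assumptions.
def predictive_values(prev, sens, spec):
    tp, fn = prev * sens, prev * (1 - sens)
    fp, tn = (1 - prev) * (1 - spec), (1 - prev) * spec
    ppv = tp / (tp + fp)  # P(sick | positive result)
    npv = tn / (tn + fn)  # P(healthy | negative result)
    return ppv, npv

balanced = predictive_values(0.01, sens=0.95, spec=0.95)
high_spec = predictive_values(0.01, sens=0.80, spec=0.995)
print(f"balanced 95/95:     PPV = {balanced[0]:.1%}, NPV = {balanced[1]:.1%}")
print(f"high-spec 80/99.5:  PPV = {high_spec[0]:.1%}, NPV = {high_spec[1]:.1%}")
```

Trading some sensitivity for specificity roughly quadruples the PPV here while barely moving the NPV, which is exactly why "95% accurate" alone tells you very little.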
Secondly, whom are we testing? The prevalence rate in your example (1%) is unrealistically low even for the general population. But would we screen the general population? No, we'd screen high-risk groups: the elderly, those with certain APOE genotypes, etc. The predictive values of a test depend hugely on the prevalence rate.
Lastly, it depends on how the results are used. If it's a high-sensitivity test used to decide whom to send to the next tier in a multi-tier diagnostic system, it could actually be quite effective at that (very rarely missing the disease while greatly reducing the need for more expensive or more invasive testing).
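A rough sketch of that multi-tier idea, with entirely made-up numbers for the cohort and both tiers, just to show the shape of the trade-off:

```python
# Hypothetical two-tier pipeline: a cheap high-sensitivity screen,
# then an expensive confirmatory test only for screen-positives.
# All rates below are illustrative assumptions.
population = 100_000
prevalence = 0.10                 # assumed high-risk cohort
sick = population * prevalence
healthy = population - sick

screen_sens, screen_spec = 0.98, 0.85   # tier 1: rarely misses disease
screen_positive = sick * screen_sens + healthy * (1 - screen_spec)

confirm_sens = 0.95                      # tier 2, run on screen-positives only
caught = sick * screen_sens * confirm_sens
missed = sick - caught

print(f"expensive tier-2 tests needed: {screen_positive:,.0f} of {population:,}")
print(f"cases missed overall: {missed:,.0f} of {sick:,.0f}")
```

Under these assumptions the expensive test is only needed for about a quarter of the cohort, which is the point: the screen's job is to cheaply shrink the pool, not to be the final word.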
> This improves the diagnostic accuracy from around 75% to 95%.
It's not terrible. This is a relatively good number. Diagnostics is just terribly difficult.