
dahart · yesterday at 5:51 PM

You’re still bringing up different issues than this article we are commenting on.

> There’s no such thing as “overestimating in baseline samples”

What do you mean? Contamination and mis-measurement of control samples is a thing that actually happens all the time, and invalidates experiments when discovered.

> What you’re trying to say is that if there’s a chemical everywhere, the prevalence makes it harder to claim that small measurement differences in the “treatment” arm are significant.

No. What I was trying to say is that if the control is either mis-measured (for example, by accidentally counting stearates as microplastics) or contaminated, then the summary outcome may understate the prevalence of microplastics in the test sample, even though the control measurement itself was an overestimate.
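
To put made-up numbers on it: suppose the test sample truly contains 100 particles and the control truly contains 10. If the control reading alone gets inflated to 40 (stearates miscounted as plastic, say, or contamination introduced while handling the control), the reported difference is 100 - 40 = 60, which understates the true gap of 90 even though the control number itself was an over-count.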


Replies

timr · today at 4:40 AM

> What do you mean? Contamination and mis-measurement of control samples is a thing that actually happens all the time, and invalidates experiments when discovered.

The entire point of a control is to test for that sort of contamination (or more generally, for malfunctions in the experimental workflow). In the case of a negative control, specifically, you're looking for a "positive" where one should not exist. If an experiment is set up such that you can obtain differential contamination in the controls but not the experimental arms, as you've described, then the entire experiment is invalid.

> What I was trying to say is that if the control is either mis-measured, for example by accidentally counting stearates as microplastics, or contaminated, then the summary outcome may underestimate or understate the prevalence of microplastics in the test sample, even though the measurement over-estimated it.

The control cannot be "mis-measured", any more or less than the other arms can be "mis-measured". You treat them identically, otherwise the control is not a control. Neither example you've given is an exception: if the assay mistakes chemical B for chemical A, then it will also do so for the non-controls. If the experimental process contaminates the controls, it will also contaminate the non-controls.

What you're missing is that there's no absolute "correct" measurement -- yes, the control may itself be contaminated with something you don't even know about, thus "understating" the absolute measurement of whatever thing you're looking for, but the absolute measurement was never the goal. You're looking for between-group differences, nothing more.

Just to make it clearer, if I were going to run an extremely naïve experiment of this sort (i.e. detection of trace chemical contamination C via super-sensitive assay A) with any hope of validity, I'd want to do multiple replications of a dilution series, each with independent negative and positive controls. I'd then use something like ANOVA to look for significant deviations across the group means. This is like the "science 101" version of the experimental design. Any failure of any control means the experiment goes in the trash. Any "significant" result that doesn't follow the expected dilution series patterns, again, goes in the trash.
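
To make that concrete, here's a toy sketch in Python with made-up numbers (simulated readings and arbitrary thresholds, not any real assay or dataset):

    # Toy sketch (illustrative only): replicates of a dilution series with
    # negative and positive controls, then a one-way ANOVA across the
    # dilution-group means.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    dilutions = [1.0, 0.5, 0.25, 0.125]  # relative concentration of the spiked-in contaminant
    n_replicates = 6
    baseline = 2.0                       # assay background signal (arbitrary units)
    noise_sd = 0.5

    # Simulated assay readings: signal should scale with dilution if the assay works.
    groups = {f"dil_{d}": baseline + 10.0 * d + rng.normal(0, noise_sd, n_replicates)
              for d in dilutions}
    groups["negative_control"] = baseline + rng.normal(0, noise_sd, n_replicates)
    groups["positive_control"] = baseline + 10.0 + rng.normal(0, noise_sd, n_replicates)

    # Any control failure means the run goes in the trash, not into the analysis.
    assert groups["negative_control"].mean() < baseline + 3 * noise_sd, "negative control failed: discard run"
    assert groups["positive_control"].mean() > baseline + 3 * noise_sd, "positive control failed: discard run"

    # Look for between-group differences across the dilution series, nothing more.
    f_stat, p_value = stats.f_oneway(*(groups[f"dil_{d}"] for d in dilutions))
    print(f"ANOVA F={f_stat:.2f}, p={p_value:.3g}")

    # Sanity check: group means should follow the dilution series monotonically.
    for name, vals in groups.items():
        print(f"{name:>18s}: mean={vals.mean():.2f}")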

(This is, of course, after doing everything you can to mitigate for baseline levels of the contaminant in the lab environment, which is a process that itself probably requires multiple failed iterations of the experiment I just described.)

Most of the plastic contamination papers I have read are far, far from even that naïve baseline.
