
tomrod 01/21/2025

You reduce the uncertainty in the remaining 20 by substantially increasing the sample size of a randomly selected sample.

Unfortunately, these studies have multiple selection criteria that are nonrandom:

(1) interest in the study

(2) adherence to the study protocol

(3) reporting back in

If nutrition science wants to be serious, its N should not be in the 10s but rather the 10,000s.

That is expensive, but for important questions it is absolutely the right thing to do.
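
To make that concrete, here's a rough back-of-the-envelope in Python showing how the confidence interval around a measured mean effect shrinks with N. The standard deviation and the N values are made-up for illustration, not figures from any real nutrition study:

    # Rough sketch: how the uncertainty around a mean shrinks with sample size.
    # The standard deviation and the N values below are illustrative assumptions.
    import math

    def ci_half_width(sd: float, n: int, z: float = 1.96) -> float:
        """Half-width of an approximate 95% confidence interval for a mean."""
        return z * sd / math.sqrt(n)

    sd = 10.0  # assumed spread of the outcome being measured
    for n in (30, 300, 10_000):
        print(f"N = {n:>6}: +/- {ci_half_width(sd, n):.2f}")

    # N =     30: +/- 3.58
    # N =    300: +/- 1.13
    # N =  10000: +/- 0.20

Note that this only shrinks sampling error; a bigger N by itself does nothing about the nonrandom selection effects listed above, which is why both randomness and scale are needed.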


Replies

lm28469 01/21/2025

Until they track absolutely everything, including each trial subject's microbiome, hormone profile, and so on over time, I still feel it just won't cut it.

Plus, it doesn't even matter what is true for the statistical average: given the enormous number of variables and outcomes, one glass of wine might be statistically beneficial yet absolutely terrible for your own health because you have one specific gene combination or one specific microbiome mix. That means you'd have to go through the same regimen of analysing and tracking all the parameters for yourself for it to be applicable.
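
Here's a tiny simulation of that point. The 10% carrier share and the effect sizes are invented assumptions, not real data; the population-average effect looks beneficial even though it's harmful for the subgroup:

    # Toy simulation: an exposure that is mildly beneficial for most people
    # but harmful for a hypothetical 10% genetic/microbiome subgroup.
    # All numbers here are invented for illustration.
    import random

    random.seed(0)
    N = 100_000
    effects, carriers = [], []
    for _ in range(N):
        carrier = random.random() < 0.10  # assumed 10% carry the risk variant
        effect = random.gauss(-2.0, 1.0) if carrier else random.gauss(0.5, 1.0)
        effects.append(effect)
        carriers.append(carrier)

    avg = sum(effects) / N
    carrier_avg = sum(e for e, c in zip(effects, carriers) if c) / sum(carriers)
    print(f"population average effect: {avg:+.2f}")        # roughly +0.25 -> looks beneficial
    print(f"carrier subgroup average:  {carrier_avg:+.2f}") # roughly -2.00 -> harmful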

leoc 01/21/2025

I suspect (I'm not an expert) that for subjects like nutrition, experimental psychology, and so on, the next big step forward isn't scientific but political: figuring out how to get funders, researchers, and others lined up behind a Big Science model in which a very few organisations run experiments with those truly large participation numbers. There are obvious risks in switching to such a model, but if small or middling experiments simply can't answer the open questions then there may be no better alternative.