Hi, author of the blog post here! Thank you for writing in with your concerns. First:
> Please be very careful when someone tries to tell you that supplements are miraculous and pharmaceutical drugs don’t work at all.
I'll concede I may have unintentionally given the impression that one should replace antidepressants with supplements, even though the conclusion specifically says: "(Don't quit your existing antidepressants if they're net-positive for you!) you may also want to ask your doctor about Amitriptyline, or those other best-effect-size antidepressants."
I have now edited the intro to more explicitly say "you can take these supplements alongside traditional antidepressants! You can stack interventions!"
===
> and nobody noticed this massive discrepancy until now?
Researchers have noticed it for 13 years! From the linked Ghaemi et al 2024 meta-analysis ( https://pmc.ncbi.nlm.nih.gov/articles/PMC11650176/ ):
> Several meta-analyses of epidemiological studies have suggested a positive relationship between vitamin D deficiency and risk of developing depression (Anglin et al., 2013; Ju, Lee, & Jeong, 2013).
> Although some review studies have presented suggestions of a beneficial effect of vitamin D supplementation on depressive symptoms (Anglin et al., 2013; Cheng, Huang, & Huang, 2020; Mikola et al., 2023; Shaffer et al., 2014; Xie et al., 2022), none of these reviews have examined the potential dose-dependent effects of vitamin D supplementation on depressive symptoms to determine the optimum dose of intervention. Some of the available reviews, owing to the limited number of trials and methodological biases, were of low quality (Anglin et al., 2013; Cheng et al., 2020; Li et al., 2014; Shaffer et al., 2014). Considering these uncertainties, we aimed to fill this gap by conducting a systematic review and dose–response meta-analysis of randomized control trials (RCTs) to determine the optimum dose and shape of the effects of vitamin D supplementation on depression and anxiety symptoms in adults regardless of their health status.
===
> even common OTC pain meds can have effect sizes lower than 0.4 depending on the study. Have you ever taken Tylenol or Ibuprofen and had a headache or other pain reduced? Well, you've experienced what a drug with a small effect size on paper can do for you.
I must push back: what you felt there is the 0.4 drug effect plus the placebo effect plus time (natural recovery).
There are now RCTs of open-label placebos (where subjects are told it's a placebo), which show that even open-label placebos are still powerful for pain management. So I stand by 0.4 being a small effect; even if you took a placebo you knew to be a placebo, you'd feel a noticeable reduction in pain/headache.
EDIT: Here's a systematic review of open-label placebos, published in Scientific Reports in 2021: https://www.nature.com/articles/s41598-021-83148-6.pdf
> We found a significant overall effect (standardized mean difference = 0.72, 95% CI 0.39–1.05, p < 0.0001, I² = 76%) of OLP.
In other words, if the effect of antidepressants vs placebo is ~0.4, and the effect of a placebo vs no placebo (just time) is ~0.7, that means the majority of the effect of antidepressants & OTC pain meds is due to placebo.
(I don't mean this in an insulting way; the fact that placebo alone has a "large" effect is a big deal, still under-valued, and means something important for how mood/cognition can directly impact physical health!)
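To make that back-of-envelope arithmetic explicit, here's a rough sketch in Python. The two numbers are just the SMDs quoted above, and it leans on a simplifying assumption of mine (not from either study) that the two effects roughly add, which is only an approximation since each SMD is measured against a different comparator:

```python
# Rough decomposition of the total felt improvement.
# Assumption (mine, not from either paper): the two SMDs are roughly additive.
drug_vs_placebo = 0.4     # typical antidepressant / OTC analgesic effect vs. placebo
placebo_vs_nothing = 0.7  # open-label placebo effect from the 2021 review above

total = drug_vs_placebo + placebo_vs_nothing
print(f"placebo share of total improvement: {placebo_vs_nothing / total:.0%}")
# -> ~64%, i.e. the majority of what you feel
```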
A point I think is crucial to mention: “effect size” here is just the standardized mean difference, i.e. the difference in group means divided by the pooled standard deviation.
If a minority of patients benefit hugely and most get no benefit, then you get a modest effect size.
This is probably why this discussion always has a lot of people saying “yeah, it didn’t help me at all” and a few saying “it changed my life.”
I believe we should be focusing on more appropriate statistical methods for assessing this hypothesis formally. Basically, using mean differences is GIGO if what you're actually comparing is a bimodal or highly skewed distribution against a bell curve.
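A toy simulation makes this concrete. All the numbers here are made up purely for illustration: 20% of treated patients get a large benefit, the rest get nothing beyond what controls get, and the pooled SMD still comes out looking modest:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

control = rng.normal(0.0, 1.0, n)            # symptom change on placebo
responder = rng.random(n) < 0.20             # hypothetical 20% strong responders
treated = np.where(responder,
                   rng.normal(2.0, 1.0, n),  # large benefit for responders
                   rng.normal(0.0, 1.0, n))  # no extra benefit for everyone else

pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
smd = (treated.mean() - control.mean()) / pooled_sd
print(f"SMD ≈ {smd:.2f}")  # lands around 0.3–0.4
```

So a drug that changes the lives of one patient in five and does nothing for the rest can report the same "modest" effect size as a drug that helps everyone a little, and the mean difference alone can't tell you which world you're in.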
> Researchers have noticed it for 13 years! From the linked Ghaemi et al 2024 meta-analysis
You’re cherry-picking papers. Others have already shared studies showing no significant effect of Vitamin D interventions.
For any popular supplement, you can find someone publishing papers with miraculous results, huge effect sizes, and significant outcomes. This has been going on for decades.
With Omega-3s, the larger the trial, the smaller the effect; the largest trials have shown little to no detectable effect.
I think a lot of people are skeptical of pharmaceuticals because they see the profit motive, but they let their guard down when researchers and supplement pushers with motives of their own start pushing flawed studies and cherry-picked results.
> In other words, if the effect of antidepressants vs placebo is ~0.4, and the effect of a placebo vs no placebo (just time) is ~0.7, that means the majority of the effect of antidepressants & OTC pain meds is due to placebo.
You keep getting closer to understanding why these effect-size studies are so popular with alternative medicine and supplement sellers: they’re so easy to misinterpret or take out of context.
According to your numbers, taking Tylenol would be worse than placebo alone! 0.4 vs 0.7.
Does this make any sense to you? It should make you pause and think that maybe this is more complicated than picking singular numbers and comparing them.
In this exercise of cherry-picking studies and comparing effect sizes, you’ve reached a conclusion where Vitamin D is far and away the most effective intervention and OTC pain meds are worse than placebo alone.
It’s time for a reality check: maybe this methodology isn’t actually representative of reality. You’re writing at length as if the studies you picked are definitive and your numeric comparisons tell the whole story, but I don’t think you’ve stopped to consider whether this is even realistic.