Hacker News

D-Machine · last Tuesday at 3:23 PM

I would argue that in fact almost all fMRI research is unreliable, and demonstrably so: test-retest reliabilities are quite miserable (see my post below).

https://news.ycombinator.com/item?id=46289133

EDIT: The reason is that with reliabilities this bad, almost all fMRI studies are massively underpowered: you need hundreds, or even up to a thousand, participants to detect effects with any statistical reliability. Very few fMRI studies come anywhere close to these numbers (https://www.nature.com/articles/s42003-018-0073-z).
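
To see why low reliability forces sample sizes into the hundreds, here is a rough back-of-the-envelope sketch (mine, not from the linked paper): Spearman's attenuation formula shrinks the observable correlation by the square root of the two measures' reliabilities, and the Fisher-z approximation then gives the sample size needed to detect it. The function name and the illustrative numbers (true r = 0.3, reliability = 0.4) are assumptions for the example.

```python
from math import atanh, ceil, sqrt
from statistics import NormalDist


def required_n(true_r, rel_x, rel_y, alpha=0.05, power=0.80):
    """Sample size to detect a correlation after attenuating the true
    effect by the reliabilities of both measures (Spearman attenuation),
    using the standard Fisher-z power approximation."""
    observed_r = true_r * sqrt(rel_x * rel_y)  # attenuated effect size
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_b = NormalDist().inv_cdf(power)
    return ceil(((z_a + z_b) / atanh(observed_r)) ** 2 + 3)


# A true brain-behaviour correlation of r = 0.3, measured perfectly:
print(required_n(0.3, 1.0, 1.0))  # -> 85 participants
# The same effect when each measure has test-retest reliability ~0.4:
print(required_n(0.3, 0.4, 0.4))  # -> 543 participants
```

With reliabilities around 0.4 on both sides, a respectable r = 0.3 shrinks to an observed r of 0.12, and the required sample jumps from under a hundred to the mid-hundreds, consistent with the numbers above.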


Replies

mattkrause · last Tuesday at 10:14 PM

That depends immensely on the type of effect you're looking for.

Within-subject effects (this happens when one does A, but not when doing B) can be fine with small sample sizes, especially if you can repeat variations on A and B many times. This is pretty common in task-based fMRI. Indeed, I'm not sure why you need >2 participants except to show that the principle is relatively generalizable.
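
The repeated-trials point can be made concrete with the Spearman-Brown prophecy formula, which gives the reliability of the average of k repeated trials from the single-trial reliability (the function name and the 0.1 single-trial figure are illustrative assumptions, not from the comment):

```python
def spearman_brown(rel_single, k):
    """Reliability of the mean of k repeated trials, given the
    reliability of a single trial (Spearman-Brown prophecy formula)."""
    return k * rel_single / (1 + (k - 1) * rel_single)


# Even a poor single-trial reliability climbs quickly with repetition:
print(round(spearman_brown(0.1, 10), 2))  # -> 0.53
print(round(spearman_brown(0.1, 50), 2))  # -> 0.85
```

This is why averaging many trials within each participant can rescue a task-based design that would be hopeless as a one-shot between-subject measurement.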

Between-subject comparisons (type A people have this feature, type B people don't) are the problem because people differ in lots of ways and each contributes one measurement, so you have no real way to control for all that extra variation.

SubiculumCode · last Tuesday at 5:03 PM

Yes on many of those fronts, although not all of those papers support your conclusion. The field did, and too often still does, use tasks with too few trials and too few participants. That always frustrated me: my advisor rightly insisted we collect hundreds of participants for each study, while others would collect 20 and publish 10x faster than we did.

caycep · last Tuesday at 5:19 PM

Which is why the good labs follow up fMRI results with direct neurophysiological recording...
