Yes on many of those fronts, although not all those papers support your conclusion. The field did/does too often use tasks with too few trials and too few participants. That always frustrated me: my advisor rightly insisted we collect hundreds of participants for each study, while others would collect 20 and publish 10x faster than us.
Small sample sizes are a rational response from scientists in the face of a) funding levels and b) unreasonable expectations from hiring/promotion committees.
cog neuro labs need to start organizing their research programs more like giant physics projects. Lots of PIs pooling funding and resources together into one big experiment rather than lots of little underpowered independent labs. But it’s difficult to set up a more institutional structure like this unless there’s a big shift in how we measure career advancement/success.
Yes, well "almost all" is vague and needs to be qualified. Sample sizes have improved over the past decade for sure. I'm not sure whether the median has grown meaningfully, because there are still way too many low-N studies, but you do now see studies that are at least plausibly "large enough" more frequently. More open data has also helped here.
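To put a rough number on what "large enough" buys you: here's a back-of-the-envelope power calculation for a two-sample t-test using the normal approximation. The effect size d = 0.3 is my assumption (a hypothetical but not unusual magnitude for brain-behavior effects), not a figure from any of the papers above.

```python
from math import erf, sqrt

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

def two_sample_power(d, n_per_group):
    """Approximate power of a two-sided two-sample t-test at alpha = 0.05.

    Normal approximation to the noncentral t; fine for rough planning.
    d is Cohen's d (assumed true standardized effect size).
    """
    z_crit = 1.959963984540054  # two-sided critical z at alpha = 0.05
    ncp = d * sqrt(n_per_group / 2)  # approximate noncentrality
    return normal_cdf(ncp - z_crit) + normal_cdf(-ncp - z_crit)

print(round(two_sample_power(0.3, 20), 2))   # typical small-lab N
print(round(two_sample_power(0.3, 200), 2))  # pooled / consortium scale
```

With 20 per group you get roughly 16% power for a d = 0.3 effect, versus roughly 85% at 200 per group, which is the quantitative version of the "underpowered independent labs vs. one big experiment" point.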
EDIT: And kudos to you and your advisor here.
EDIT2: I will also say that a lot of the research on fMRI methods is very solid and often quite reproducible, i.e. papers that pioneer new analytic methods and/or investigate pipelines and such. There is definitely a lot of fMRI research telling us interesting and likely reliable things about fMRI itself, but very little fMRI research that tells us anything reliably generalizable about people or cognition.