>and it equally drives me mad in those areas where academics set "puzzles" and conclude that people's inability to "solve" them is some cognitive deficiency.
Does that actually happen in academia? It seems to mostly be a social media thing.
There is one notable example that I'm aware of, but it really is a cognitive deficiency.
The contrapositive is a rule that says "A => B" is logically equivalent to "not B => not A". This is very confusing to people, and few can follow verbally why it works.
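If the verbal argument doesn't land, brute force does. Here's a minimal Python sketch (the names are mine, for illustration) that enumerates every truth assignment and confirms the two forms always agree, encoding "A => B" as the standard material implication "(not A) or B":

    from itertools import product

    def implies(p, q):
        # Material implication: "p => q" is false only when p is true and q is false.
        return (not p) or q

    # Enumerate all four truth assignments of (A, B).
    for a, b in product([True, False], repeat=2):
        direct = implies(a, b)           # A => B
        contra = implies(not b, not a)   # not B => not A
        assert direct == contra          # identical in every case
        print(f"A={a!s:5} B={b!s:5}  A=>B={direct!s:5}  notB=>notA={contra}")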
But here is a fun experiment. People are presented with a selection of envelopes, all face down, and are asked to verify the claim "All unstamped envelopes are small." They immediately begin turning over the large envelopes, then have trouble explaining their (correct) reasoning!
Here is a chain of reasoning that justifies their actions:

"All unstamped envelopes are small."
<=> (unstamped envelope => small envelope)
<=> (not small envelope => not unstamped)   [contrapositive]
<=> (large envelope => stamped)
At which point it is easier to just check the large envelopes!
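To make the envelope version concrete, here's a small Python sketch (the Envelope class and the sample data are invented for illustration): by the contrapositive, verifying "all unstamped envelopes are small" only requires turning over the large ones.

    from dataclasses import dataclass

    @dataclass
    class Envelope:
        large: bool
        stamped: bool  # hidden until the envelope is turned over

    envelopes = [
        Envelope(large=True,  stamped=True),
        Envelope(large=True,  stamped=False),   # this one breaks the rule
        Envelope(large=False, stamped=False),   # small, so irrelevant either way
        Envelope(large=False, stamped=True),
    ]

    # "unstamped => small" is equivalent to "large => stamped",
    # so only the large envelopes need checking.
    violations = [e for e in envelopes if e.large and not e.stamped]
    print("Rule holds." if not violations else f"Rule broken by {len(violations)} envelope(s).")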
It happens so frequently I'd say it's the main case. Researchers are extremely poor at controlling for competing explanations, and in many cases strongly incentivised not to.
Suppose you're writing a paper; what do you write: option A) "Average People Cannot Understand Probability!?!?!"; option B) "Inexperienced test takers with unfamiliar notation fail to grasp the meaning of a novel question"; option C) "Survey participants on technical questions often do not adopt a literal interpretation of the question"; option D) etc. etc.
In general, researchers are extremely loath to, or poor at, recognising that there are 101 alternative explanations for any research result using human participants, and 99.99% of the time they just publish the paper that says the experiment evidences their preferred conclusion.