So frequently that I'd say it's the main case. Researchers are extremely poor at controlling for competing explanations and, in many cases, strongly incentivised not to.
Suppose you're writing a paper; which do you write: option A) Average People Cannot Understand Probability!?!?!; option B) Inexperienced test takers with unfamiliar notation fail to grasp the meaning of a novel question; option C) Survey participants on technical questions often do not adopt a literal interpretation of the question; D) etc. etc.
In general, researchers are extremely loath to recognise, or poor at recognising, that there are 101 alternative explanations for any research result using human participants, and 99.99% of the time they just publish the paper that says the experiment evidences their preferred conclusion.
"Democrats perform poorly on history tests designed by Republicans."
These sorts of puzzles aren't used by researchers, though, so I'm not sure I follow the rest. They almost always seem to be used by people trained in logical thinking to consider problems in different ways. They only seem to reach the larger public when someone shares one on social media, and people with no background in logic complain that they are poorly written or have no answer, not realising that these puzzles presuppose some background in logic and familiarity with the setup.