I've critiqued it plenty in other comments, including on that exact issue. However, that doesn't mean they "gave people surveys with a lot of questions" in order to p-hack; it looks like a study designed (albeit not well designed) to test one specific hypothesis. I see no reason to doubt that they followed the methods as described in the paper, which were built to test that very narrow thing (they didn't even test "childlike wonder" in general, just self-reported Mario-induced childlike wonder). The problem is that their conclusions aren't supported by their data. If they were p-hacking, as you accuse them of doing, why not include more questions? Why not survey non-Mario players too, so there's another variable from which to squeeze a significant result out of a null?
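
To illustrate why the number of questions matters for that accusation, here's a minimal sketch (hypothetical numbers, not the study's actual design or data) showing how testing many survey items under a true null inflates the odds of at least one spurious p < 0.05 hit, which is exactly the leverage a p-hacker would want:

```python
# Sketch only: simulate two groups with NO real difference on any question,
# and count how often at least one question comes out "significant".
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def false_positive_rate(n_questions, n_participants=50, n_sims=2000, alpha=0.05):
    """Fraction of simulated studies with at least one p < alpha
    when every question's true effect is zero."""
    hits = 0
    for _ in range(n_sims):
        group_a = rng.normal(size=(n_participants, n_questions))
        group_b = rng.normal(size=(n_participants, n_questions))
        pvals = ttest_ind(group_a, group_b, axis=0).pvalue
        hits += (pvals < alpha).any()
    return hits / n_sims

for k in (1, 5, 20):
    print(f"{k} questions -> ~{false_positive_rate(k):.0%} chance of a spurious hit")
```

With one question you stay near the nominal 5%; with twenty you're well over half. A study that only asks about the one outcome it pre-specified leaves itself very little room for that kind of fishing, which is the point above.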