Admittedly not a statistician, but I think the article is missing the point. People circle p-values because nobody actually cares about the thing a p-value measures; what they care about is whether the null hypothesis is true or some alternative hypothesis is. You can wave your hands about how, when you said a result was significant, you were really making a technical statement about a hypothetical world in which the null hypothesis is true, and so it's unfair to circle your p-value because your statement about that hypothetical world remains true. That isn't a good argument against p-value circling; it merely demonstrates that the technical definition of a p-value is not relevant to the real world.
The fact remains that for claims which later turn out not to be true, the p-values reported in the paper are very often near the significance threshold; not so for things which are obviously and strongly true. This is direct evidence of something we already know: nobody cares about p-values per se. People only use them to communicate whether something is true or false in the real world, and the technical defense of "well, maybe X or Y is true, but when I said p=0.049 I was only talking about a hypothetical world where X is true, and my statement about that world still holds" is no solace.
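To make that concrete, here's a quick simulation sketch (my own toy example, not from the article): half the studies test a true null, half a real effect, and we look at what fraction of "significant" results actually came from a true null, split by whether p was marginal (just under 0.05) or strong (under 0.01). The effect size, sample size, and z-test are arbitrary choices for illustration.

```python
import random
from math import erf, sqrt

random.seed(1)

def p_value(effect, n=50):
    """Two-sided z-test p-value for a difference in means between
    two groups of size n, each with known unit variance."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(effect, 1.0) for _ in range(n)]
    diff = sum(b) / n - sum(a) / n
    z = abs(diff) / sqrt(2.0 / n)
    # normal CDF via erf
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))

# Half the studies test a true null (effect 0), half a real effect (0.5).
studies = [(0.0, p_value(0.0)) for _ in range(4000)] + \
          [(0.5, p_value(0.5)) for _ in range(4000)]

marginal = [eff for eff, p in studies if 0.01 < p < 0.05]
strong   = [eff for eff, p in studies if p <= 0.01]

frac_null_marginal = marginal.count(0.0) / len(marginal)
frac_null_strong   = strong.count(0.0) / len(strong)

print(f"share of true nulls among 0.01 < p < 0.05: {frac_null_marginal:.2f}")
print(f"share of true nulls among p <= 0.01:       {frac_null_strong:.2f}")
```

In runs like this, the marginally significant bucket contains a noticeably larger share of true nulls than the strongly significant one, which is exactly why people eyeball the p-values: a result at p=0.049 is, in practice, much weaker evidence against the null than one at p=0.001, even though both clear the threshold.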
I understood the point of the article to be exploring the extent to which p-values can be interpreted as strength of evidence in favor of the alternative hypothesis. I don't think anyone is spending all this energy on p-values because they think people care about the p-values.