TFA compares against an estimated average rate for humans that includes low-speed contact events not reported to the police: one incident every 200,000 miles. I think that figure is high; if you're including backing into static objects in car parks and the like, you can look at workshop data and extrapolate that a lower figure might be closer to the mark.
TFA also compares against other self-driving car companies, which you acknowledge but dismiss. However, we can't harmonize crash definitions and reporting practices the way you'd like, because Tesla is obfuscating their data.
TFA's main point is that we can't really know what this data means because Tesla keep their data secret, but others like Waymo disclose everything they can, and are more transparent about what happened and why.
TFA is actually saying Tesla should open up their data to allow for better analysis and comparison, because their current reporting practices make them look crazy bad.
> TFA's main point is that we can't really know what this data means because Tesla keep their data secret
If that's so, then the article title is very poor.
> TFA compares against an estimated average rate for humans that includes low-speed contact events not reported to the police: one incident every 200,000 miles.
Where does it say that? I see "However, that figure doesn’t include non-police-reported incidents. When adding those, or rather an estimate of those, humans are closer to 200,000 miles between crashes, which is still a lot better than Tesla’s robotaxi in Austin."
All but one of the Tesla crashes obviously involved significant property damage or injuries (the remaining one is ambiguous).
So, based on the text of the article, they're assuming only 2/5ths of property-damage/injury accidents are reported to the police. That's lower than I would have guessed (don't people go through their car insurance, which requires a police report?), but it's presumably backed by data.
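As a back-of-envelope check on that 2/5ths figure, here's a minimal sketch. The 200,000-mile figure is from the article; the ~500,000 miles between police-reported crashes is my inference from the implied 2/5 ratio, not a number quoted in the article, so treat both inputs as estimates.

```python
# Rough sanity check of the implied police-reporting rate.
# Assumed inputs: the 500k baseline is inferred from the article's
# numbers, not quoted directly; both are estimates.
MILES_PER_POLICE_REPORTED_CRASH = 500_000  # assumed human baseline
MILES_PER_ANY_CRASH = 200_000              # from TFA: reported + unreported

# Crash rates are reciprocals of miles-between-crashes, so the share
# of crashes that get a police report is the ratio of the two figures.
reported_share = MILES_PER_ANY_CRASH / MILES_PER_POLICE_REPORTED_CRASH
print(f"Implied share of crashes reported to police: {reported_share:.0%}")
# -> Implied share of crashes reported to police: 40%
```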