That... is not really an extraordinary claim. That has been many people's null hypothesis since before this technology was even deployed, and the rationale for it is sufficiently borne out to play a role in vigilance systems across nearly every other industry that relies on automation.
A safety system with blurry performance boundaries is called "a massive risk." That's why responsible system designers first define their ODD, then design their system to perform to a pre-specified standard within that ODD.
Tesla's technology is "works mostly pretty well in many but not all scenarios, and we can't tell you which is which."
It is not an extraordinary claim at all that such a system could yield worse outcomes than a human driving with no assistance.