To me, this sounds like:
If AI turns out to be good at a certain task, then it was a bad task in the first place.
Which is just run-of-the-mill dogmatic thinking.