I really wish we'd stop arguing about AI with a "some automation failed, so all automation is bad" approach.
Yes, AF447 crashed due to a lack of training for a specific situation. And yet, air travel is safer than ever.
Yes, that Tesla drove into a wall, and yet robotaxis exist, work well, and are significantly safer than human drivers.
Yes, there are a lot of "witchcraft" approaches to working with AI, but there are also significant accelerations coming out of the field that have nothing to do with witchcraft.
Yes, AI occasionally makes very stupid mistakes - but ones any competent engineer would have guardrails in place against.
And so a lot of the piece spends time arguing against strawmen propped up by anecdotes. And that detracts from the deeply necessary discussion kicked off in the second part, on labor shock, capital concentration, and fever dreams of AI.
The problem with AI isn't that it's useless. It's that it's already extremely useful - and that usefulness is exactly what will disrupt the world.
I think you may have missed a subtle point: there's a particular risk from automation that almost always works correctly. The aviation industry calls the phenomenon "automation complacency". It's very difficult for humans to stay alert while monitoring such systems, and reliance on them tends to erode, over time, the very skills required to monitor them and to handle the rare (but, at least in aviation, often fatal) error cases when they occur.
I think you're maybe oversimplifying a bit. I don't think the argument here is that AI isn't 100% reliable, so we shouldn't use it; rather, there are issues we need to be aware of.
Specifically, AI companies want to inflate the utility of AI because that's how they make money. There should be guardrails where appropriate. Unfortunately, as usual, we need to make mistakes before we can learn from them.
Robotaxis do exist, but they are not all created equal. Tesla's, for instance, are 4x worse than human drivers: https://electrek.co/2026/02/17/tesla-robotaxi-adds-5-more-cr...