I generally agree with you, but am trying to see the world through the new AI lens. Having a machine make human errors isn't the end of the world; it just completely changes the class of problems the machine should be deployed to. It definitely should not be used for things that need strict, verifiable processes. But it can be used for processes where human errors are acceptable, since it will inevitably make those same classes of errors...just without needing a human to do so.
Up until modern AI, problems typically fell into two disparate classes: things a machine can do, and things only a human can do. There's now this third fuzzy/brackish class in between that we're just beginning to explore.
I can agree with you. And in a discussion among adults working together to address these issues, I will.
The issue is that we don't have solid evidence that AI is suitable for these tasks, yet the people who were doing them have already been laid off.
The economy is now propped up largely by the belief that AI will be so successful that it will eliminate most of the workforce. I just don't see how this ends well.
Remember, regulations are written in blood. And I think we're about to write many brand new regulations.