> they're doing as well as professionals do without oversight on production environments
The difference is that if a human does it, there's usually some accountability: you'll be asked how it happened and expected to learn from it. And if you do it again, your social standing goes down, nobody will trust you, and you'll be considered a liability. If a CLI tool does it, the outcome is different: you might stop using the tool, or you might blame yourself for not giving the tool enough context. And if it does it again, you might just shrug it off with "well of course, it's just a tool".
Accountability through reputation is exactly what is happening to AI providers. All these articles about Claude destroying systems make people trust Claude less, and maybe even "fire" Claude by choosing another AI provider with better safeguards or lower privileges built in.