Superficially, these look the same, but at least to me they feel fundamentally different. Maybe it’s because if I have the ability to read the script and take the time to do so, I can be sure it won’t cause a catastrophic outcome before I run it. If I choose to run an agent in YOLO mode, a catastrophe can just happen if I’m very unlucky. There’s no way to proactively protect against it other than not using AI this way.
I've seen many smart people make boneheaded mistakes. The more I work with AI, the more I think the issue is that it acts too much like a person. We're used to computers acting like computers, not like people with all their faults, heh.