Agent-style AI can run shell commands. You're supposed to approve each one, but some people live dangerously and choose Yes To All.
Yep, it's not as far-fetched as it would have been a year ago: you run an agent in 'yolo mode', it opens some poisoned readme / docs / paper, and then it executes the wrong shell command.
I've been letting Gemini run gcloud and "accept all"-ing while setting some things up for a personal project. Even with some limits in place it's nerve-wracking, but so far no issues, and it means I can go get a cup of tea rather than keep pressing OK. It's easy to see how a rogue AI could do real damage when it can already provision its own infrastructure.
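For what it's worth, one way to put "limits in place" is to run the agent under a dedicated service account with only the roles it actually needs, rather than your own credentials. A rough sketch (the `agent-sandbox` account name and the specific roles are just illustrative, pick whatever your project actually requires):

```shell
# Create a dedicated service account for the agent
gcloud iam service-accounts create agent-sandbox \
    --display-name="AI agent sandbox"

# Grant it narrow roles only, e.g. read-only compute access,
# instead of broad roles like roles/editor or roles/owner
gcloud projects add-iam-policy-binding MY_PROJECT_ID \
    --member="serviceAccount:agent-sandbox@MY_PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/compute.viewer"

# Point the agent's gcloud at that account so "accept all"
# can never exceed the roles granted above
gcloud config set account agent-sandbox@MY_PROJECT_ID.iam.gserviceaccount.com
```

That way the blast radius of a bad "Yes To All" is capped at whatever the sandbox account can do, not everything your user can do.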