This is a microcosm of a much larger problem. When AI writes code, reviews code, and now apparently manages its own git operations — who's actually in control of the codebase?
The `--dangerously-skip-permissions` flag getting blamed here is telling. We're building tools where the safe default is friction, so users disable the safety to get work done, and then the tool does something destructive. That's not user error; that's a design pattern that reliably produces failures at scale.
The broader data is concerning: AI-generated code has been reported to contain 2.74x more security vulnerabilities than human-written code, and reviewing it takes 3.6x longer. Now add autonomous git operations to that mix. The code review problem becomes a code ownership problem: if the AI is writing the code, reviewing it, and managing the repository, what exactly is the human's role? We dug into this at sloppish.com/ghost-in-the-codebase.