I don't disagree with you entirely here. I probably wasn't clear enough on what I was trying to convey.
Right now AI / agentic coding doesn't seem like a train we're going to be able to stop; and at the end of the day it's a tool like any other. Most of what seems to be happening is people letting AI fully take the wheel: not enough specs, not enough testing, not enough direction.
I keep experimenting and tweaking how much direction to give the AI in order to produce less fuckery and more productive code.
Sorry for coming off as combative - I'm mostly fatigued from the "criti-hype" pieces we've been deluged with over the last week. For what it's worth, I think you're right about the inevitability, but I also think it's worth pushing a bit against the pre-emptive shaping of the Overton window. I appreciate the comment.
I don't know how to encourage the kind of review that AI code generation seems to require. Historically we've been able to rely on the fact that (bluntly) programming is "g-loaded": smart programmers probably wrote better code, with clearer comments, better formatting, and better documentation. Now, results that look great are a prompt away in each category, which breaks some of the subconscious indicators reviewers pick up on.
I also think that there is probably a sweet spot for automation that does one or two simple things and fails noisily outside the confidence zone (aviation metaphor: an autopilot that holds heading and barometric altitude and beeps loudly and shakes the stick when it can't maintain those conditions), and a sweet spot for "perfect" automation (aviation metaphor: uh, a drone that autonomously flies from point A to point B using GPS, radar, LIDAR, etc...?). In between I'm afraid there be dragons.