I’m honestly kind of amazed that more people aren’t seeing the value, because my experience has been almost the opposite of what you’re describing.
I agree with a lot of your instincts. Shipping unreviewed code is wrong. “Validate behavior not architecture” as a blanket rule is reckless. Tests passing is not the same thing as having a system you can reason about six months later. On that we’re aligned.
Where I diverge is the conclusion that agentic coding doesn’t produce net-positive results. For me it very clearly does, though perhaps that’s highly dependent on situation and workflow.
First, I don’t treat the agent as a junior engineer I can hand work to and walk away from. I treat it more like an extremely fast, extremely literal staff member who will happily do exactly what you asked, including the wrong thing, unless you actively steer it. I sit there and watch it work (I usually have two or three agents running at once, ideally on different codebases, though sometimes they overlap). I interrupt it. I redirect it. I tell it when it is about to do something dumb. I almost never write code anymore, but I am constantly making architectural calls.
Second, tooling and context quality matter enormously. I’m using Claude Code. The MCP tools I have installed make a huge difference: laravel-boost, context7, and figma (the last of which feels borderline magical at converting designs into code!).
I often have to tell the agent to visit GitHub READMEs and official docs instead of letting it hallucinate “best practices”. Left to its own devices, the agent will often guess and get stuck, and if it’s doing that, you’ve already lost.
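For anyone curious about the setup side, wiring an MCP server into Claude Code is roughly a one-liner per tool. A sketch of what that looks like (the package name below is from memory and may well differ; treat it as a placeholder and check each project’s README for the real install command):

```shell
# Register the context7 docs server with Claude Code
# (package name assumed; verify against the context7 README)
claude mcp add context7 -- npx -y @upstash/context7-mcp

# Confirm which servers are registered and connecting
claude mcp list
```

The same pattern applies to the other servers; each project documents its own launch command.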
Third, I wonder if starting from scratch is actually harder than migrating something real. Right now I’m migrating a backend from Java to Laravel and rebuilding native apps in KMP and Compose Multiplatform. The domain and data are real, and I can validate against a previous (if buggy) implementation. In that environment, the agent is phenomenal. It understands patterns, ports logic faithfully, flags inconsistencies, and does a frankly ridiculous amount of correct work per hour.
Does it make mistakes? Of course. But they’re few and far between, and they’re usually obvious at the architectural or semantic level, not subtle landmines buried in the code. When something is wrong, it’s wrong in a way that’s easy to spot if you’re paying attention.
That’s the part I think gets missed. If you ask the agent to design, implement, review, and validate itself, then yes, you’re going to get spaghetti with a test suite that lies to you. If instead you keep architecture and taste firmly in human hands and use the agent as an execution engine, the leverage is enormous.
My strong suspicion is that a lot of the negative experiences come from a mismatch between expectations and operating model. If you expect the agent to be autonomous, it will disappoint you. If you expect it to be an amplifier for someone who already knows what “good” looks like, it’s transformative.
So while plenty of hype exists, for me at least, the hype is justified. I’m shipping way (WAY!) more, with better consistency and less cognitive exhaustion than ever before in my 20+ years of doing dev work.