You do learn how to control Claude Code and architect/orient things around getting it to deliver what you want. That's a skill that is both new and likely to be part of how we work for a long time (though it also overlaps with the work tech leads and managers already do).
My proto+sqlite+mesh project recently hit the point where it's too big for Claude to maintain a consistent "mental model" of how, e.g., search and the db schemas are supposed to be structured. It kept taking hacky workarounds like going directly to the db at the storage layer instead of through the API layer, so I hit an insane amount of churn trying to get it to implement some of the features needed to make it production-ready.
Here's the whack-a-mole/insanity documented in the git commit history: https://github.com/accretional/collector/compare/main...feat...
But now I have some new tricks and intuition for avoiding this situation going forward. Because I do understand the core mental model of what this is supposed to look like, and I need to maintain some kind of human-friendly guard rails, I'm adding integration tests in a separate repo plus a README/project "constitution" that Claude can't change but is accountable for upholding, and configuring it to keep both in context while working on my project.
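If anyone wants to copy the setup: Claude Code automatically pulls CLAUDE.md into context, so that's where the "constitution" lives, and a deny rule in .claude/settings.json should keep Claude from editing it. This is a minimal sketch (rule syntax as I understand it from the permissions docs; check it against your version):

    {
      "permissions": {
        "deny": [
          "Edit(CLAUDE.md)"
        ]
      }
    }

The integration tests live in a separate repo so nothing running in this one can touch them; CLAUDE.md just points at them and says a feature isn't done until they pass.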
Kind of a microcosm of startups' reluctance to institute employee handbooks/KPIs/PRDs, followed by the resigned realization that they might truly be useful coordination tools.
Yeah, this is close to my experience with it as well. The AI spits out some tutorial code and it works, and you think all your problems are solved. Then in working with the thing you start hitting problems you would have figured out if you had built the thing from scratch, so you have to start pulling it apart. Then you start realizing some troubling decisions the AI made and you have to patch them, but to do so you have to understand the architecture of the thing, requiring a deep dive into how it works.
At the end of the day, you've spent just as much time gaining the knowledge, but one way was inductive (building it from scratch) while the other is deductive (letting the AI build it and then tearing it apart). Is one better than the other? I don't know. But I don't think one saves more time than the other. The only way to save time is to let the thing work without ever understanding what it does.
I agree with this sentiment a lot; my experience matches it. It's not necessarily fast at first, but along the way you learn lessons that develop into a new set of techniques and ways of approaching the problem, ones that feel fundamental and important to have learnt.
My fun lesson this week was that there's not a snowball's chance in hell GitHub Copilot can correctly update a Postman collection. I only realised there was a Postman MCP server after battling through that ordeal and eventually making all the tedious edits myself.