> > On the other hand, code produced with AI and reviewed by humans can be perfectly good, maintainable, and indistinguishable from regular old code.
>
> I have yet to see a single example of this. The way you make AI-generated code good and maintainable is by rewriting it yourself.
I know it's unpopular to say (here), but I see it all the time. I myself sometimes cannot tell apart what I wrote and what the agent wrote; often the only clue is a physical memory of having typed it. (To be fair, I have also seen a lot of garbage.)
There is quite a bit of skill to it, however. You cannot take an AI from a blank slate to "good code" without doing real work: you have to write a good code style guide, a proper explanation of your architectural style(s), your preferences, your goals, plenty of examples, and so on. Proper thought has to go into this.
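For concreteness, here is a sketch of what such a guide might look like. Every file name, path, and rule below is hypothetical, purely to illustrate the level of specificity that tends to work; the point is naming concrete architectural constraints and pointing at real examples in your own codebase, not writing generic platitudes.

```markdown
# Agent style guide (hypothetical excerpt)

## Architecture
- This service uses a hexagonal architecture: domain logic lives in
  `core/`, adapters in `adapters/`. Never import an adapter from `core/`.

## Style
- Prefer small, pure functions; nothing over ~40 lines.
- Errors are returned as values; exceptions never cross module boundaries.

## Examples
- Good: `core/billing/invoice.py` shows the preferred repository pattern.
- Bad: do not replicate the god object in `legacy/order_manager.py`;
  it is scheduled for removal.
```

Guides like this earn their keep mostly through the "good/bad example" pointers: they anchor the agent to your actual code rather than to whatever style dominates its training data.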
If you come across bad code, you need to investigate, not castigate: why did this happen? How can we prevent it in the future? That sort of process needs to become second nature. It arguably should be already, because it's not that different from managing a group of humans.
Humans come with lots of implicit knowledge, and you also select them to match your company's style when hiring. By the time they sit down at their keyboards, you (and society) have already guided them toward a desirable path. (And even then they still misfire often enough.)
AI agents operate differently. Their range of expression is completely alien to us: a human cannot be a von Neumann one moment and a complete moron the next, but LLMs have no problem with that. It takes a good while to get used to it.