Here’s the thing: they say all the same things you just said in this comment. Yet the code I end up having to work in is still bad. It’s 5x longer than it needs to be, and the naming is usually bad, so it takes way longer to read than human-written code. To top it off, very often it doesn’t integrate cleanly with the other systems, and I have to rewrite a portion, which takes longer because the code was designed to solve a different problem.
If you are really, truly reviewing every single line as carefully as if you had hand-written it… just hand-write it. There’s no way you’re actually saving time if that’s the case. I don’t buy that people are reviewing it as deeply as they claim to be.
> If you are really, truly reviewing every single line as carefully as if you had hand-written it… just hand-write it.
I think this is only true for people who are already experts in the codebase. If you know it inside out, sure, you can simply hand-write it. But if not, writing the code is only a small portion of the work.
I used to describe it like this: the task will take 2 days of code archaeology but result in a +20/-20 change. Or much longer, if you’re brand new to the codebase. This is where the AI systems excel, in my experience.
If the output is +20/-20, then there's a pretty good chance it nailed the existing patterns. If it wrote a bunch more code, then it probably deserves deeper scrutiny.
In my experience, the models are getting better and better at doing the right thing. But maybe that’s also because I’m working in a codebase with many example patterns to slot into, and because the entire team is investing heavily in the agent instructions, skills, and tooling.