Yes, I too have found newer models (mostly Opus) to be much better at iterative development. That said, if I have a very strong architectural/developmental steer on what I believe the output should be [mostly for production code where I thoroughly review absolutely everything], it's better to have a documented spec with everything covered rather than trying to clean up via an agent conversation. In the team I'm in we keep all plan.mds for a feature; before AI tooling we created/revised these plans in Confluence, so to some degree reworking the plan is more an artefact of the previous process than a genuine best practice.
Understandable. Certainly my style is not applicable to everyone. I tend to "grow" my software more organically, usually because the optimal structure isn't evident until you're actually looking at how all the contracts fit together or what dependencies are needed. So adding a lot of planning/documentation up front just slows me down.
I tend to create a very high level plan, then code systems, then document the resulting structure if I need documentation.
This works well for very iterative development where I'm changing contracts as I realize the weak point of the current setup.
For example, I was using inheritance for specialized payloads in a pipeline, then realized that if I wanted to attach policies/behaviours to them as they flow through the pipeline, I was better off changing the whole thing to a single payload type with a bag of attached aspects.
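A minimal sketch of what that shift can look like (all names here are hypothetical, not from my actual codebase): instead of subclassing the payload per variant, one payload type carries a dict of aspects, and pipeline stages attach policies to it as it flows through.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# Before (inheritance): class ImagePayload(Payload), class AudioPayload(Payload), ...
# After (composition): one payload type with a bag of attached aspects.

@dataclass
class Payload:
    data: Any
    aspects: dict[str, Any] = field(default_factory=dict)

    def attach(self, name: str, aspect: Any) -> "Payload":
        # Stages attach behaviours/policies by name as the payload flows through.
        self.aspects[name] = aspect
        return self

def retry_policy(max_attempts: int) -> Callable[[Payload], Payload]:
    """A pipeline stage that attaches a retry policy aspect."""
    def stage(p: Payload) -> Payload:
        return p.attach("retry", {"max_attempts": max_attempts})
    return stage

# A pipeline is then just a sequence of stages applied in order.
p = Payload(data={"id": 42})
for stage in [retry_policy(3)]:
    p = stage(p)
```

The win is that adding a new behaviour is a new stage, not a new subclass, so the payload hierarchy never has to anticipate every combination.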
Often those designs are not obvious when making the initial architectural plan. So I approach development using AI in much the same way: Generate code, review, think, request revision, repeat.
This really only applies when establishing the architecture though, which is generally the hardest part. Once you have an example, you can mostly one-shot new instances or minor enhancements.