I attempt to solve most agent problems by treating the agent like a dumb human.
In this case I would ask for smaller changes and have it justify every change. Then have it look back over those changes and ask itself whether each one is truly justified or could be simplified.
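A minimal sketch of that loop in Python. The `complete(prompt)` helper is hypothetical, stubbed here so the sketch runs; in practice it would be a call to whatever model or agent you use:

```python
def complete(prompt: str) -> str:
    # Stand-in for a real model/agent call; replace with your own client.
    return f"[model response to: {prompt[:40]}]"

def small_change_with_review(task: str) -> str:
    # Step 1: ask for the smallest change, with a justification per change.
    draft = complete(
        f"Make the smallest possible change for: {task}\n"
        "Justify every change in one line."
    )
    # Step 2: have the agent look back over its own changes and ask itself
    # whether each one is truly justified or can be simplified.
    review = complete(
        f"Here are your proposed changes:\n{draft}\n"
        "For each change: is it truly justified, or can it be simplified? "
        "Return only the simplified set of changes."
    )
    return review
```

The second prompt is the key part: it forces a separate self-review pass instead of trusting the first draft.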