I think a sensible goal is not to mix the accidental and the essential. If we let AI handle what's accidental (i.e. not central to solving the essential problem), developers can focus on the essential alone. The current threat is that both kinds become intertwingled in a codebase, sometimes irreparably.
Fortran made that distinction clear. The compiler handled the accidental complexity of converting instructions to code, but never really obscured the boundary.
Take VB as an example from way back. For the purpose of presenting a simple data-entry dialog, it removed the accidental complexity of dealing with Windows' message loop, resource files, and so on, which was painful. The essential complexity was in what the system did with the data. I suppose the AI steering that needs to happen is to route the essential down the essential path and the accidental down the accidental path, letting a dev handle the former and the agent handle the latter (after all, it's accidental). A rough sketch of that split is below.
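To make the split concrete, here's a minimal sketch in raw Win32 C of roughly what VB hid. The window registration and message loop are the accidental part; the hypothetical save_customer() function stands in for the essential part, what the system actually does with the data. (save_customer and the window details are my own illustrative assumptions, not anything VB or Windows defines.)

    #include <windows.h>

    /* Essential complexity: what the system does with the entered data.   */
    /* save_customer() is a hypothetical domain function, not a Win32 API. */
    static void save_customer(const char *name)
    {
        (void)name;  /* validation, business rules, persistence would go here */
    }

    /* Accidental complexity: the plumbing VB hid (and an agent could own). */
    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg) {
        case WM_COMMAND:               /* e.g. the dialog's Save button        */
            save_customer("Ada");      /* hand off to the essential path       */
            return 0;
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProcA(hwnd, msg, wp, lp);
    }

    int WINAPI WinMain(HINSTANCE inst, HINSTANCE prev, LPSTR cmd, int show)
    {
        WNDCLASSA wc = {0};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = inst;
        wc.lpszClassName = "DataEntryDialog";
        RegisterClassA(&wc);
        CreateWindowA(wc.lpszClassName, "Customer", WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                      CW_USEDEFAULT, CW_USEDEFAULT, 320, 200, NULL, NULL, inst, NULL);

        /* The message loop: pure accidental complexity from the app's point of view. */
        MSG m;
        while (GetMessageA(&m, NULL, 0, 0) > 0) {
            TranslateMessage(&m);
            DispatchMessageA(&m);
        }
        return 0;
    }

VB collapsed everything outside save_customer() into the form designer; the question now is whether an agent can own that layer as cleanly, without smearing it into the essential code.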
But that'll take judgement: deciding which camp each artifact belongs in and how it's managed. It might be a whole field of study, but it won't be new.
Converting instructions to code is essential complexity.
If you give up on doing the work necessary to understand what is and is not critically important, you are no longer competent or responsible.
At that point the roles have switched, and you are the mindless drone toiling to serve AI.
https://strangestloop.io/essays/things-that-arent-doing-the-...