> We need no AI for this one: If I could only maintain code I wrote, I'd have to work alone.
I think you missed the whole point. This is not about you understanding a particular change. This is about the person behind the code change not understanding the software they are tasked with maintaining. It's akin to the discussion about the fundamental differences between script kiddies and hackers.
With LLMs and coding agents, there is a clear pressure to turn developers into prompt kiddies: someone who can deliver results when the problem is bounded, but is fundamentally unable to understand what they did or the system as a whole.
This is not about a sudden onset of incompetence. This is about a radical change in workflows that no longer favors, or even allows, the research needed to become familiar with a project. You no longer need to pick through a directory tree to know where things are, or navigate through code to check where a function is called or which components relate to which. Having to manually open a file to read or write it is a learning moment that lets you recall and understand how and why things are done. With LLMs you don't even learn what is there.
Thus developers who lean heavily on LLMs don't get to learn what's happening. Everyone can treat the project as a black box, and focus on observable changes to the project's behavior.
> Everyone can treat the project as a black box, and focus on observable changes to the project's behavior.
This is a good thing. I don’t need to focus on oil refineries when I fill my car with gas. I don’t know how to run a refinery, and don’t need to know.