> What exactly do we mean by this? Because it is obviously common for human coders to tackle learning how an unfamiliar and complex codebase works so that they can modify it (new hires do it all the time).
I agree with you, BUT: I find it much harder to get my head around a medium-sized vibe-coded project than a medium-sized bespoke-coded project. It's not even close.
I don't know what codebases will look like if/when they become "fully agentic". Right now, LLM agents get worse, not better, as a codebase grows, and as more of it is coded (or worse, architected) by LLMs.
Humans get better over time in a project and LLMs get worse, and this seems fundamental to the LLM architecture. The only real way I see for codebases to become fully agentic right now is if they're small enough, and that size ceiling grows as the context windows new models can handle grow.
If that's how this plays out - context windows get large enough that LLM agents can work indefinitely in medium or large projects - I wonder whether the resulting projects will be extremely difficult for humans to wrap their heads around. That is, if the LLM relies on looking at massive chunks of the codebase all at once, we could reach fully agentic codebases without ever having to tackle the problem of LLMs being terrible at architecture, because they wouldn't need good architecture.
And does "model collapse" become a real concern once LLMs are trained on code that is itself 100% LLM-generated? Fun times ahead.