My team has experienced this over the past 6 months for sure.
The core of the article is: "AI-assisted development potentially short-circuits this replenishment mechanism. If new engineers can generate working modifications without developing deep comprehension, they never form the tacit knowledge that would traditionally accumulate. The organization loses knowledge not just through attrition but through insufficient formation."
But is it possible this phenomenon is transient?
Isn't part of the presumed value-add of LLM coding agents in the meta-realm around coding? That is, well-structured human+LLM-generated code (greenfield in particular) will be organized in such a way that the human doesn't have to develop deep comprehension until it's needed (e.g. for a bug fix or optimization), and then only for the working set of code in question. The LLM brings the person up to speed on that working set and also provides the architectural context needed to frame it properly.
In my view, current LLMs still produce far too much bloat and too many unclean solutions when not targeted at very specific issues/features. The result is that an LLM becomes essentially a requirement for any debugging or feature work for the rest of the product/service's lifecycle.