I think we’re still missing a breakthrough in synthetic data generation for code before LLMs come into their own. Something that can ingest the documentation of all the different ecosystems and generate fine-tuning data to improve the accuracy of recall.
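To make that concrete, here's roughly the shape of what I mean, as a minimal sketch: walk a docs tree, ask a model to distill each page into question/answer pairs, and emit JSONL for supervised fine-tuning. Everything specific here (the prompt wording, the model id, the paths, the helper names) is illustrative, not a real pipeline; the openai client is just one way to do the generation step.

```python
import json
from pathlib import Path

from openai import OpenAI  # assumes the official openai Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "From the documentation below, write 3 question/answer pairs that test "
    "accurate recall of APIs, flags, and behavior. Respond as a JSON list of "
    'objects with "question" and "answer" keys.\n\n{doc}'
)

def doc_to_pairs(doc_text: str) -> list[dict]:
    """Ask the model to distill one documentation page into Q/A pairs."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model id
        messages=[{"role": "user", "content": PROMPT.format(doc=doc_text)}],
    )
    # Sketch only: a real pipeline would validate/repair the JSON here.
    return json.loads(resp.choices[0].message.content)

def build_dataset(docs_dir: str, out_path: str) -> None:
    """Write chat-format fine-tuning examples, one JSON object per line."""
    with open(out_path, "w") as out:
        for page in Path(docs_dir).glob("**/*.md"):
            for pair in doc_to_pairs(page.read_text()):
                example = {
                    "messages": [
                        {"role": "user", "content": pair["question"]},
                        {"role": "assistant", "content": pair["answer"]},
                    ]
                }
                out.write(json.dumps(example) + "\n")

build_dataset("docs/", "finetune.jsonl")
```

The hard part isn't this plumbing, of course; it's getting pairs good enough that fine-tuning actually improves recall instead of baking in hallucinations.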
LLMs may get there once real reasoning is figured out; I expect that to require a totally different approach used in combination with LLMs as the language unit.
Do we really want that though? As soon as these systems can reason through software problems and code novel solutions, there will be no need for humans to be involved.
Likely we couldn't be involved at all: those systems would come up with solutions we would have a hard time comprehending, and it would be totally reasonable for such a system to create its own programming language and coding conventions that work better for it once the constraint of human readability is removed.