None of these articles address how we'll go from novice to expert, either self-taught or through the educational system, and all the bloggers got their proverbial "10k hours" before LLMs were a thing. IMO this isn't about abstraction; the risk is the wholesale outsourcing of learning. And no, I don't accept the argument that correcting an LLM's errors is the same as correcting a junior dev's errors, because the junior dev would (presumably) learn and grow to become a senior. The technology doesn't exist for an LLM to do the same today, and there's no viable path in that direction.
Can someone tell me what the current thinking is on how we'll get over that gap?
> how we'll go from novice to expert
You spend the proverbial 10k hours like before. I don't see why AI has to lead to a lack of learning. People haven't stopped learning digital painting so far, even though digital painting, from my perspective, is even more "solved" by machines than programming is.
I heard that Pixar had a very advanced facial expression simulation system a decade ago. But I am very willing to bet that when Pixar hires animators they still prefer someone who can animate by hand (either in Maya or frame-by-frame on paper).
I can tell you the current thinking of most of the instructors I know: teach the same fundamentals as always, and carefully add a bit of LLM use.
To use LLMs effectively, you have to be an excellent solver of complex technical problems. And developing those skills has always been the goal of CS education.
Or, more bluntly, are you going to hire the junior with excellent LLM skills, or are you going to hire the junior with excellent LLM skills and excellent technical problem-solving skills?
But they do have to be able to use these tools in the modern workplace, so we do cover some of that kind of usage. Believe me, though, they are pretty damned good at it without our help. The catch is when students use it to cheat, don't develop those problem-solving skills, and then are screwed when it comes time to get hired.
So our current thinking is there's no real shortcut other than busting your ass like always. The best thing LLMs offer here is the ability to act as a tutor, which really does increase the speed of learning.
> I don't accept the argument that correcting an LLM's errors is the same as correcting a junior dev's errors, because the junior dev would (presumably) learn and grow to become a senior. The technology doesn't exist for an LLM to do the same today and there's no viable path in that direction.
But the technology does exist. The proof is in the models you can use today, along two lines:
First, what you describe is exactly what the labs are doing. Three years ago it was "oh, look, it writes poems, and if you ask for code it almost looks like Python." Since then, the models have come to handle most programming tasks, at increasing difficulty and with increasing accuracy. What seemed like science fiction three years ago is literally at your fingertips today: project scaffolding, searching through codebases, bug finding, bug fixing, refactoring, code review. All of these are possible now. And it all became possible because the labs used the "signals" from usage, plus data from subsidised models, plus RL and architecture improvements, to "teach" the models more and more. So if you zoom out, the models are "learning", even if you or I can't teach them in the sense you meant.
Secondly, when capabilities become sufficiently advanced, you can do it locally, for your own project, with your own "teachings". With things like skills, you can literally teach the models what to do on your code base. And they'll use that information in subsequent tasks. You can even use the models themselves for this! A flow that I use regularly is "session retro", where I ask the model to "condense the learnings of this session into a skill". And then those skills get invoked on the next task dealing with the same problem. So the model doesn't have to scour the entire code base to figure out where auth lives, or how we handle migrations, and so on. This is possible today!
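To make that concrete, a condensed skill usually ends up as a short note the model can pull up on the next task. Here's a rough sketch of what one might look like; the file name, paths, and commands below are made up for illustration, not the exact format of any particular tool or project:

    # skill: db-migrations (condensed from a session retro)
    # NOTE: example content only; paths and commands are illustrative.
    When a task touches the database schema:
    - Migrations live in db/migrations/, named NNN_short_description.sql.
    - Add a new file with the next number; never edit an already-applied migration.
    - Apply locally with `make migrate` before running the test suite.
    - Auth tables are owned by the auth service; don't migrate them from this repo.

The point isn't the exact format, it's that the next session starts with this context instead of rediscovering it from scratch.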