Generally I agree with your takes and find them very reasonable, but in this case I think your deep experience might be coloring your views a bit.
LLMs can hurt less experienced engineers by keeping them from building an intuition for why things work a certain way, why an alternative won't work, or, conversely, why an unconventional approach might be not only possible but very useful and valuable.
I think problem solving is optimization in the face of constraints. In my experience, the more you're able to articulate and understand your constraints, and the more prescriptively you can guide the LLM toward something it's capable of doing, the more effective it is and the more maintainable its output is for you. So it really helps to know when to break the rules, or when to create or do something unconventional.
Another way to put it: LLMs have commoditized conventional software, so learning when to break or challenge convention is going to be where most of the valuable work is going forward. And I think it's hard to actually do that unless you get into the weeds and try things because you don't yet understand why they won't work. Sometimes they do work!
What would you have us do, though?
Stifle the tools, somehow?
You’ve had nontechnical devs since npm, or before!
No: people who care to understand the whole stack, and are able to provide that value, will still exist and shine.
I think it's very easy to harm your learning by leaning into LLMs.
What I don't believe is that it HAS to be like this. Maybe it's my natural optimism showing through here, but I'm confident it's possible to accelerate rather than slow down your learning with LLMs, if you're thoughtful about how you apply them.
An open question for me is how feasible it is to teach people how to teach themselves effectively using this new technology.
I have a core belief that everything is learnable, if people are motivated to learn. I have no idea how to help instill that motivation in people who don't yet have it though!