People say this constantly, but it's a qualitatively different jump from all previous abstraction layers. Previously, the part of your brain you had to use, and the way you had to think, changed from old layer X to new layer Y, but the two were still qualitatively very similar. A person who was good at and enjoyed layer X either was naturally good at and enjoyed layer Y, or could get there after a little time. But with LLMs, the jump is much more lateral.
To do the thing I hate and use an analogy: it's not like asking a furniture maker to start using power tools; it's like asking a furniture maker to start telling a robot, in English, to make the furniture. Yes, the people who were already good at furniture-making will have an advantage in directing the robot - but the salient point is that it's a recipe for misery for many people.
You should've turned the sarcasm detector on.
Hmm. I use AI to write almost all my code, and I feel that the "part of the brain" I use is mostly the same. Pre-AI I spent a lot of time thinking about code architecture, schemas, APIs, etc. Post-AI I spend a lot of time thinking about essentially the exact same things. Yeah, there are some things that I used to think about that I don't now - the fiddly bits, like why my parentheses weren't balanced or what field I was missing that was causing a 3rd-party API call to fail. But the work feels more similar than different.