I think not using AI is a manifestation of one's inability or unwillingness to LEARN. To your point, if you can't learn, you will fall behind.
On the contrary, using AI is like outsourcing your DIY to a professional joiner.
Sure, he'll get it done twice as fast and you might notice some tricks as you look over his shoulder. But when you need a second door hung, you'll either have to start learning from scratch or call him again.
Yes, but it goes both ways. Using AI can be a great way to be productive while purposefully NOT learning how the sausage is made—say, boilerplate code in some devops system that you don't care about—allowing your attention to be focused on the part of the stack you actually care about.
Or a manifestation of risk aversion that isn't easily swayed by peer pressure...
Everyone seems to know you can't trust the AI output, and that it is on you to review it. But whenever I talk to people who claim to be getting big benefits, there is always a moment when they reveal that they are not really reviewing the output. They are just going with it.
Similarly, so many who claim to use AI as a search index eventually seem to just trust the summary instead of checking the references to figure out whether it is regurgitating fact or fiction.
I don't really know whether these users always had low quality standards or low diligence, or whether the tool usage degrades them. But I see the correlation within the friends-of-friends network I can observe.