I've never learned so much and so fast as I do with LLMs.
1. Before, most of my learning, especially anything technical, involved a lot of Google searching for the information I needed. LLMs remove much of the friction and the boring parts of that process.
2. At work I can hand LLMs some very mundane tasks, again mostly related to information gathering. There was a time when I needed days to connect the dots in some very convoluted code written by your average developer, and even longer to figure out the purpose behind certain choices and how they tied back to the business domain, often when the relevant stakeholders and the people with the know-how had already left the company. That kind of work, documented across tons and tons of paper pages of notes, would sincerely exhaust me, and it has been the bulk of my career, because coding was never the hard part. LLMs are getting better and better at it. That leaves me a lot more energy to actually investigate the overall architectural decisions and technical details of both my projects and their dependencies (which have never been this easy to traverse).
3. Since I am less mentally exhausted (the only way I get mentally exhausted with LLMs is by "half vibecoding", i.e. producing tons and tons of code that I then actually review thoroughly), I have far more room to dedicate to learning. I do it by practicing manual coding for fun, by fixing the things I don't like in the work codebases I see, and by doing more katas on Codewars and LeetCode exercises. I also end up asking more questions out of sheer curiosity, questions I wouldn't have asked before, and often learn things that suddenly "click". Another thing I do is far more spaced repetition in RemNote on topics I care about, such as the many odd things you can learn in a language like C (a couple of the oddities I mean are sketched below) or the metaprogramming tricks you run into in Ruby and similar languages.
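To give a concrete (purely illustrative) idea of the kind of C oddity that ends up on my flashcards, here is a tiny sketch of two classics; the specific examples are my own picks, not from any particular deck:

```c
#include <stdio.h>

int main(void) {
    int a[] = {10, 20, 30};

    /* a[1] is defined as *(a + 1), and pointer addition commutes,
       so 1[a] refers to the same element. */
    printf("%d %d\n", a[1], 1[a]);   /* prints: 20 20 */

    /* Usual arithmetic conversions: -1 is converted to unsigned,
       becoming UINT_MAX, so the comparison evaluates to "true". */
    printf("%d\n", -1 > 1u);         /* prints: 1 */

    return 0;
}
```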
Honestly, I don't get how you can learn less when you have a tool that removes so much friction.
But of course, if every AI naysayer conflates all LLM usage with vibecoding and with delegating your thinking and reasoning to the LLM, then sure, used like that they are a disaster. But that's on the user, not the tool.