> I and the people I work with are using agents to learn new topics so fast.
I'm a person who loves learning but I don't really understand this claim. My brain quickly reaches a saturation point when learning new topics. I need to leave and come back multiple times until I begin to understand, but this seems to me to be a normal part of the process. It's the struggle that forms the connections in my brain.
Being spoon-fed information isn't the same as learning, to me. Are you also using AI to test you on your new knowledge? Does it administer these tests periodically? Or are you just reviewing notes and saying to yourself "I know this now"?
How are you ensuring you've learned anything at all?
> Being spoon-fed information isn't the same as learning, to me
It's like it distills it for you. I feel like you're thinking of an example like trying to learn operating systems by reading Wikipedia articles (i.e. you get a high-level summary but nothing more).
The way I see it, code says a lot, but it takes time to scroll through it and cmd+click back and forth. But if you just ask the AI "where's x thing happening around this file" it will just point you right to it. So I feel like less cognitive energy is spent dealing with the syntactic quirks of code and more is spent on the essential algorithmic task.
I don't really like using it to summarize natural language written by one author or group, like a paper for example, that just feels like laziness to me.
Reminds me of the book "Make It Stick: The Science of Successful Learning" and its comparison of spaced repetition and cramming.
Cramming often feels more satisfying, more like you're learning, but actually leads to worse retention. Spaced repetition, on the other hand, which includes the struggle of recalling something just at the edge of being forgotten, feels worse but leads to much higher retention.
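The scheduling idea behind that kind of spaced repetition can be sketched as a tiny Leitner-style update rule: widen the gap after each successful recall, reset it after a failure. The doubling factor and one-day reset here are illustrative assumptions, not the algorithm from the book:

```python
def next_interval(interval_days: int, recalled: bool) -> int:
    """Leitner-style update: double the review gap after a successful
    recall, reset to one day after a failure (numbers are illustrative)."""
    return interval_days * 2 if recalled else 1

# A card that is recalled three times, forgotten once, then recalled again:
interval = 1
history = []
for recalled in [True, True, True, False, True]:
    interval = next_interval(interval, recalled)
    history.append(interval)

print(history)  # [2, 4, 8, 1, 2]
```

The point of the widening gaps is exactly the "edge of being forgotten" effect: each review is scheduled late enough that recall takes real effort, which is what makes it stick.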