But how much of that time is truly spent learning relevant knowledge, and how much of it is just (now) useless errata? Take vector search as an example. Pre-GPT, I would spend like an hour chasing down a typo, like specifying 1023 instead of 1024 or something. This sort of problem is now trivially solved in minutes by an LLM that fully understands the API surface area. So what exactly do I lose by not spending that hour chasing it down? It has nothing to do with learning vector search better, and an LLM can do it better and faster than I can.
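To make that concrete, the kind of bug I mean looks something like this (a minimal sketch; faiss and the specific numbers are just illustrative, not the actual code I was debugging), where the off-by-one dimension blows up nowhere near the line that caused it:

    # Sketch of the off-by-one dimension typo described above.
    # Assumes faiss and numpy are installed; the library choice is illustrative.
    import numpy as np
    import faiss

    DIM = 1024  # what the embedding model actually produces

    # The typo: building the index with 1023 instead of 1024.
    index = faiss.IndexFlatL2(1023)

    embeddings = np.random.rand(100, DIM).astype("float32")
    index.add(embeddings)  # fails on the dimension check with a bare AssertionError

An LLM that knows the API spots the mismatch immediately; pre-GPT, you got to stare at a stack trace until you noticed the 1023.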
I think people fool themselves with this kind of thing a lot. You debug some issue with your GH Actions YAML file for 45 minutes and think you "learned something", but when are you going to run into that specific gotcha again? In reality the only lasting lesson is "sometimes these kinds of YAML files can be finicky", which you probably already knew at the outset. There's no personal development in continually bashing your head against the lesson of "sometimes computer systems were set up in ways that are kind of tricky if you haven't seen that exact system before". Who cares. At a certain point there is nothing more to the "lesson". It's just time-consuming, trial-and-error gruntwork.