Seems like a decent balance to me. They note that there's no substitute for experiential learning. The harder you work, the more you get out of it. But there's a balance to be struck there with time spent.
What I do worry about is that all senior developers got that experiential education by working hard, and they're now applying it to their AI usage. How are juniors going to get that same education?
Recently after a month of heavily AI assisted programming, I spent a few days programming manually.
The most striking thing to me was how frustrating it was.
"Oh my god, I've melted my brain" I thought. But I persisted in my frustration -- basically continuous frustration, for seven hours straight -- and was able to complete the task.
Then I remembered, actually, it was always like that. If I had attempted this task in 2019 (or a similar task, in terms of difficulty and novelty), it would have been the same thing. In fact I have many memories of such experiences. It is only that these days I am not used to enduring the discomfort of frustration for so long, without reaching for the "magically fix everything (probably)" button.
If I had to guess, I'd say that is the main "skill" being lost. There's a quote to that effect, I think.
Genius ... means transcendent capacity of taking trouble, first of all. —Thomas Carlyle
Lately I have had the cursed vision as I'm building a new IoT product. I have to learn _so_ much, so I have stopped using claude code. I find letting it directly alter my code too hands-off.
Instead I still use claude in the browser, mainly for high level thinking/architecture > generating small chunks of code > copy-pasting it over. I always make sure I'm reading said library/code docs as well and asking claude to clarify anything I'm unsure of. This is akin to when I started development using Stack Overflow, just 10x more productive. And I still feel like I'm learning along the way.
Post is clearly very heavily glued together/formatted (and more) by an LLM, but it's sort of fascinating how bits and spurts of the author's lowercase style made it through unscathed.
I do AI trainings, and the framework I try to teach is "Using AI as a Learning Accelerator, not a Learning Replacement"
I get where the author is coming from, but (I promise from an intellectually honest place) does it really matter?
Modeling software in general greatly reduced the ability of engineers to compute 3rd, 4th and 5th order derivatives by hand when working on projects, and also broke their ability to create technical drawings by hand. Both of those were arguably proof of a master engineer in their field, yet today this would be mostly irrelevant when hiring.
Are they lesser engineers for it? Or was it never really about derivatives and drawings, and all about building bridges, engines, software that works?
How many people could, from scratch, build a ballpoint pen?
Do we have to understand the 100 years of history behind the tool, or just have the ability to use it? Some level of repair knowledge is great. Knowing whether it's the spring or the ink level that's the problem is also helpful.
I respect this choice, but I also feel one might need to accept that it may end up not being particularly "externally" valuable.
Which is to say, if it's a thing you love spending your time on and it tickles your brain in that way, go for it, whatever it is.
But (and these are still first takeaways) if the goal is "making good and useful software," today one has to be at least open to the possibility that "not using AI" will be like an accountant not using a calculator.
Has anyone measured whether doing things with AI leads to any learning? One way to do this is to measure whether subsequent related tasks show improvements in time-to-functional-result with and without AI, as a % improvement. Additionally, two more datapoints can be taken: with-ai -> without-ai, and without-ai -> with-ai.
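A minimal sketch of what that measurement might look like, assuming made-up timings and condition names (nothing here comes from an actual study):

```python
# Hypothetical sketch of the proposed measurement: compare time-to-functional-result
# on a first task vs. a related follow-up task, across the four orderings of AI use.
# All numbers and condition labels are invented for illustration.

def pct_improvement(first_minutes: float, second_minutes: float) -> float:
    """Percent reduction in time-to-functional-result on the follow-up task."""
    return 100.0 * (first_minutes - second_minutes) / first_minutes

# Each condition maps to (time on task 1, time on related task 2), in minutes.
conditions = {
    "with-ai -> with-ai":       (90, 70),
    "without-ai -> without-ai": (150, 100),
    "with-ai -> without-ai":    (90, 140),   # does AI-assisted work transfer to manual work?
    "without-ai -> with-ai":    (150, 60),
}

for name, (first, second) in conditions.items():
    print(f"{name}: {pct_improvement(first, second):+.0f}% improvement")
```

The interesting cell would be with-ai -> without-ai: if learning happened during the AI-assisted task, it should show up as improvement on the later manual one.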
> What scares me most is an existential fear that I won’t learn anything if I work in the “lazy” way.
You're basically becoming a manager. If you're wondering what AI will turn you into, just think of that manager.
The missing step seems to be identifying what is worth learning and what your goals are. Will learning X actually benefit you? We already do this with libraries: they save us a great deal of time, partly by freeing us from having to learn everything required to implement them, and we use them despite those libraries often being less than ideal for the task.
From the author:
>ai-generated code is throw-away code
Mate, most code I've ever written across my career has been throwaway code. The only exception being some embedded code that's most likely on the streets to this day. But most of my desktop and web code has been thrown away by now by my previous employers or replaced by someone else's throwaway code.
Most of us aren't building DOOM, the Voyager probe or the Golden Gate Bridge here, epic feats of art and engineering designed to last 30-100+ years. We're just plumbers hacking something together quickly to hold things until the music stops in this game of musical chairs, and I have no issue offloading that to a clanker if I can, so I can focus on the things I enjoy doing. There's no shame in that, and no pride in it either; I'm just paid to "put the fries in the bag", that's it. Do you think I grew up dreaming about writing GitHub Actions yaml files for a living?
Oh, and BTW, code being throwaway is the main reason demand and pay for web SW engineers have been so high. In industries where code is one-and-done, pay tends to scale down accordingly, since a customer is more than happy to keep using your C app on a Windows XP machine down in the warehouse instead of paying you to keep rewriting it every year in a fancier framework in the cloud.