The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.
I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading. I need to decide if the remaining work - understanding requirements, managing teams, what have you - is still enjoyable enough to continue.
To be honest, I’m looking at leaving software, because the job has turned into something quite different from what I signed up for.
So I think this article is partly right: Bob is not learning the skills we used to require. But I think the market is going to stop valuing those skills, so it’s not really a _problem_, except for Bob’s own intellectual loss.
I don’t like it, but I’m trying to face up to it.
I'm glad you've posted this comment, because I strongly feel more people need to see this sentiment and push back against what many above want to become the new norm. I see capitulation and compliance in advance, and it makes me sad. I also see two very valid, antipodal responses to this phenomenon: exit from the industry, and malicious compliance through accelerationism.
To the reader and the casual passerby, I ask: Do you have to work at this pace, in this manner? I understand completely that mandates and pressure from above may instill a primal fear to comply, but would you be willing to summon enough courage to talk to maybe one other person you think would be sympathetic to these feelings? If you have ever cared about quality outcomes, if for no other reason than the sake of personal fulfillment, would it not be worth it to firmly but politely refuse purely metrics-focused mandates?
> The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.
"Being able to deliver using AI" wasn't the point of the article. If it was the point, your comment would make sense.
The point of the program referred to in the article is not to deliver results, but to deliver an Alice. Delivering a Bob is a failure of the program.
Whether you think that a Bob+AI delivers the same results is not relevant to the point of the article, because the goal is not to deliver the results, it's to deliver an Alice.
They aren't going away, but for some they may become prohibitively expensive after all the subsidies end.
I do think coding with local agents will keep improving to a good level, but if deep-thinking cloud tokens become too expensive, you'll hit the limits of what your local, limited agent can do much sooner (i.e. be even less able to do complex work, as other replies mention).
> I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading.
I dread the flip side of this, which is dealing with obtuse bullshit like trying to understand why Oracle ADF won’t render forms properly, or how to optimize some codebase with a lot of N+1 calls (sketched below) when there are looming deadlines and the original devs never made it scalable, or needing to dig into undercommented legacy codebases, or needing to work on 3-5 projects in parallel.
Agents iterating until those start working (at least the cases that are testable) and taking away some of the misery and dread makes me want to theatrically defenestrate myself less.
Not everyone has the circumstances to enjoy pleasant and mentally stimulating work that isn’t a frustrating slog all the time - the projects that I actually like working on are the ones I pick for weekends; I can’t guarantee the same for the 9-5.
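For the casual passerby, "N+1 calls" means one query to fetch a list and then one more query per item. A minimal runnable sketch with sqlite3, with table names and data invented for illustration:

```python
# Illustration of the N+1 query pattern; tables and data are made up.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1), (11, 2), (12, 1);
""")

# N+1: one query for the orders, then one more round trip per order.
orders = con.execute("SELECT id, customer_id FROM orders").fetchall()
names = [
    con.execute("SELECT name FROM customers WHERE id = ?", (cid,)).fetchone()[0]
    for _, cid in orders
]

# The usual fix: a single join, one round trip, same information.
rows = con.execute("""
    SELECT o.id, c.name
    FROM orders o JOIN customers c ON c.id = o.customer_id
""").fetchall()
```

ORMs make the first version dangerously easy to write by accident, which is why it keeps showing up in codebases with looming deadlines.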
It's the next level of abstraction. Bob is still learning, he's just learning a different set of skills than Alice.
Also, the premise that it took each of them a year to do the project means Bob was slacking because he probably could've done it in less than a month.
> So if Bob can do things with agents, he can do things.
Yes, but how does he know if it worked? If you have instant feedback, you can use LLMs and correct when things blow up. In fact, you can often try all options and see which works, which makes it “easy” in terms of knowledge work. If you have delayed feedback, costly iterations, or multiple variables changing underneath you at all times, understanding is the only way.
That’s why building features and fixing bugs is easy, and system-level technical decision making is hard. One has instant feedback, the other can take years. You could make the “soon” argument, but even with better models, they’re still subject to training data, which is minimal for year+ delayed feedback and multivariate problems.
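To make "try all options and see which works" concrete, here is a minimal sketch, assuming a fast test suite you trust; `run_tests` is a hypothetical stand-in for applying a candidate patch and running your project's tests:

```python
# Hypothetical sketch: generate candidate fixes, keep the first one that
# passes the tests. This loop only exists when feedback is instant and cheap.
from typing import Callable

def pick_working_candidate(
    candidates: list[str],
    run_tests: Callable[[str], bool],  # stand-in: apply patch, run test suite
) -> str | None:
    for patch in candidates:
        if run_tests(patch):   # instant feedback makes brute-force viable
            return patch
    return None  # with year-long feedback you never get to iterate like this
```

When feedback takes years instead of seconds, this loop is simply unavailable, and understanding has to substitute for iteration.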
> The thing is, agents aren’t going away...
Aren't they currently propped up by investor money?
What happens when the investors realize the scam that it is and stop investing or start investing less...
> if Bob can do things with agents, he can do things.
This point is directly addressed in the paper: Bob will ultimately not be able to do the things Alice can, with or without agents, because he didn't build the necessary internal deep structure and understanding of the problem space.
And if Alice later on ends up being a better scientist (using agents!) than Bob will ever be, would you not say there was something lost to the world?
Learning needs a hill to climb, and somebody to actually climb it. Bob only learned how to press an elevator button.
There is still a lot of engineering to be done with LLMs. Maybe not exactly writing code, but I think a lot of optimization problems will remain no matter what.
Some people treat the toilet as a magic hole where they throw stuff in, flush, and think it is fine.
If you throw garbage in, you will at some point have problems.
We are at the stage where people think it is fine to drop everything into an LLM, but then they will see the usage bill and might be surprised that they burned money and the result was not exactly what they expected.
Many things have come and gone in this fashion-oriented industry. Everyone is already bored to hell by AI output.
AI in software engineering is kept afloat by the bullshitters who jump on any new bandwagon because they are incompetent and need to distract from that. Managers like bullshit, so these people thrive for a couple of years until the next wave of bullshit is fashionable.
> I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading. I need to decide if the remaining work - understanding requirements, managing teams, what have you - is still enjoyable enough to continue.
I am in the same boat, but close enough to retirement that I'm less "scared" about it. For me, it means moving up the chain; not people management, but devoting a lot more of my time to the upper end of the abstraction continuum: looking a lot more at overall designs and code quality, and managing specs, inputs, and requirements.
I wrote some design docs over the past few days for a big project the team is embarking on. We never had that before, at least not at the level of detail (per time quantum) that I was able to produce. I used 2 models from 2 companies - one to write, one to review - and bounced between them until the 3 of us agreed.
Honestly, it didn't take any less time than doing it alone would have, but the level of detail was better and covered more edge cases. Calling it a "win" for now. I still enjoy it, as most of the code we write is mostly fancy CRUD anyway and doesn't have huge scaling problems to solve (and I feel too few devs are being honest about their work here).
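A minimal sketch of that write/review bounce, where `writer` and `reviewer` are hypothetical wrappers around the two vendors' APIs (no specific SDK assumed):

```python
# Hypothetical sketch of a two-model drafting loop: one model writes,
# another reviews, iterating until the reviewer has no objections.
from typing import Callable

def draft_until_agreement(
    writer: Callable[[str], str],    # e.g. a wrapper around vendor A's API
    reviewer: Callable[[str], str],  # e.g. a wrapper around vendor B's API
    spec: str,
    max_rounds: int = 5,
) -> str:
    draft = writer(f"Write a design doc for:\n{spec}")
    for _ in range(max_rounds):
        critique = reviewer(
            "Review this design doc. Reply APPROVED if it is solid, "
            f"otherwise list concrete problems:\n{draft}"
        )
        if critique.strip().startswith("APPROVED"):
            break  # the models agree; the human still has the final say
        draft = writer(
            f"Revise the doc to address this review:\n{critique}\n\nDoc:\n{draft}"
        )
    return draft
```

The stopping condition here is crude; in practice you would also cap cost and keep a human in the loop, as the parent describes with "until the 3 of us agree".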
> if Bob can do things with agents, he can do things
I’ve been reminded lately of a conversation I had with a guy at a hacker space cafe in Berlin around ten years ago.
He had been working as a programmer for a significantly longer time than me. Long enough that for many years of his career, he had been programming in assembly.
He was lamenting that these days, software was written in higher level languages, and that more and more programmers no longer had the same level of knowledge about the lower level workings of computers. He had a valid point and I enjoyed talking to him.
I think about this now when I think about agentic coding. Perhaps over time, most software development will be done without knowledge of the higher-level programming languages we know today. There will still be people who work in those languages and are intimately familiar with them, just as there are still people who work in assembly today, even though their share has shrunk over time relative to those who don’t.
And just as there are areas where assembly is still required knowledge, I think there will be areas where knowledge of the programming languages we use today will remain necessary and vibe coding alone won’t cut it. But the percentage of people working in high-level languages will go down, relative to the number of people vibe coding and never even looking at the code that the LLM is writing.
Can you run an industry level LLM at home?
If not, you're trading learning to cook for Uber-only meals.
And since the alternative is starving, Uber will boil the pot.
Don't give up your self-sufficiency.
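For what it's worth, smaller models do run locally today; here is a minimal sketch using llama-cpp-python, with the model path as a placeholder (whether a local model counts as "industry level" is exactly the open question):

```python
# Minimal local-inference sketch with llama-cpp-python; the GGUF path is
# a placeholder, and a frontier-scale model may not fit on home hardware.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-model.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,  # context window, bounded by your RAM/VRAM
)

out = llm(
    "Write a Python function that reverses a linked list.",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```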
> The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.
Following the model of how startups have worked for the last 20 years or so, I expect agents to eventually be locked down, nerfed, or ad-infested, with the good behavior reserved for higher payments. We are enjoying the fruits of VC money at the moment, and they are getting everyone addicted to agents. Eventually they need to turn a profit.
Not sure how this plays out, but I would hang on to any competencies you have for anyone (or business) that wants to stick around in software. Use agents strategically, but don't give up your ability to code/reason/document, etc. The only way I can see this working differently is that there are huge advances in efficiency and open-source models.
I think a good analogy is people not being able to work on modern cars because they are too complex or require specialised tools. True I can still go places with my car, but when it goes wrong I'm less likely to be able to resolve the problem without (paid for) specialised help.
I understand your point, but this is a purely utilitarian view and it doesn’t account for the fact that, even if agents may do everything, it doesn’t mean they should, both in a normative and positive sense.
There is a vast range of scenarios in which being more or less independent from agents to perform cognitive tasks will be both desirable and necessary, at the individual, societal and economic level.
The question of how much territory we should give up to AI really is both philosophical and political. It isn’t going to be settled by one-sided arguments alone.
Some people probably enjoyed writing assembly (I am not one of them, especially when I had to do it on paper in university exams), and code agents can probably do it well - but for the hard tasks, the ones that are net new, code agents will produce bad results, and you still need the people who enjoy that work to show the path forward.
Code agents are great template generators and modifiers, but for net-new (innovative!) work they're often barely usable without a ton of handholding or "non-code-generation coding".
> I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading.
You're still working on intellectually stimulating programming problems. AI doesn't go all the way with any reliability, it just provides some assistance. You're still ultimately responsible for getting things right, even with key AI help.
I don't like it either. But what really guarantees other markets won't flunk similarly later on? What's to say other jobs are going to be any better? Back in college, most of my peers would say "I'm not cut out for anything else. This is it". They were, sure enough, computer and/or math people at heart from an early age.
More importantly, what's gonna be the next stable category of remote-first jobs that a person with a tech-adjacent or tech-minded skillset can latch onto? That's all I care about, to be honest.
I may hate tech with a passion at times and be overly bullish on its future, but there's no replacing my past jobs which have graced me and many others with quality time around family, friends, nature and sports while off work.
Bob can't do things, Bob's AI can do things that Bob asks it to do. And the AI can only do things that have been done before, and only up to a certain level of complexity. Once that level is reached, the AI can't do things anymore, and Bob certainly isn't going to do anything about that, because Bob doesn't know how to do anything himself. One has to question what value Bob himself even brings to the table.
But let's assume Bob continues to have an active role, because the people above him bought into the hype and are convinced that "prompt engineer" is the job of the future. When things inevitably start falling apart because the Bobs of the world hit a wall and can't solve the problems that need to be solved (spoiler: this is already happening), what do we do? We need Alices to come in and fix it, but the market actively discourages the existence of Alice, so what happens when there are no more Alices left? Do we just give up and collectively forget how to do things beyond a basic level?
I have a feeling that, yes, we as a species are just going to forget how to do things beyond a certain level. We are going to forget how to write an innovative science paper. We are going to forget how to create websites that aren't giant, buggy piles of React spaghetti that make your browser tab eat 2GB of RAM. We've always been forgetting, really - there are many things that humans in the past knew how to do, but nobody knows how to do today, because that's what happens when the incentive goes missing for too long. Price and convenience often win over quality, to the point that quality stops being an option. This is a form of evolutionary regression, though, and negatively affects our quality of life in many ways. AI is massively accelerating this regression, and if we don't find some way to stop it, I believe our current way of life will be entirely unrecognizable in a few decades.
Being able to deliver junior-level work isn't the goal of training juniors.
Programmers have (or had) the ultimate skill: the ability to solve anything, given enough resources.
Now you don't do the thing; you do other things while the LLMs get stuck. There is no more "given enough time, I can do it".
I can't see how somebody would go about solving slop bugs (slugs :)) in a heavily AI-generated codebase.
I hope I'm wrong, but that's something I have personally encountered. Stay sharp.
> So if Bob can do things with agents, he can do things.
But he does things wrong.
Agents may not go away, but they are going to fall off significantly once people wake up to how bad they are at making software. It's like in the early 00s, when business execs were stoked about the idea that they could cut costs by hiring bottom-rate Indian contractors: it turned out to be a disaster for quality, and eventually there was a shift back towards having staff in the US. The same thing is going to happen with LLMs.
> The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.
He'll get things (papers, code, etc.) which he can't evaluate. And the next round of agents will be trained on the slop produced by the previous ones. Both successive Bobs and successive agents will have less understanding.
No - you need to understand the details in order to do the “high level” work.
The thing is Bob can use HammerAsAService™ to put in a nail. It is so cheap! Way cheaper than buying an actual hammer.
The problem with unlearning generic tools and relying on ones you rent from big corporations is that it is unreliable in the long term. The prices will rise. The conditions will worsen. Oh nice, Bob made a thing using HammerAsAService™, but the terms and conditions (which change once a week) he accepted last week clearly say it belongs to the company now. Bob should be happy they are not suing him yet, but Bob isn't sure whether the thing that came out a month later was independently developed by that company or just a clone of his work. Bob wishes he knew how to use a hammer.
> So if Bob can do things with agents, he can do things.
I think the key issue is whether Bob develops the ability to choose valuable things to do with agents and to judge whether the output is actually right.
That’s the open question to me: how people develop the judgment needed to direct and evaluate that output.
> The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.
Can he? If he outsources all his thinking and understanding to agents, can he then fix things he doesn't know how to fix without agents?
Any skill is practice first and foremost. If Bob has had no practice, what then?
> I need to decide if the remaining work - understanding requirements, managing teams, what have you - is still enjoyable enough to continue.
It’s not for me. Being a middle manager, with all of the liability and none of the agency, is not what I want to do for a living. Telling a robot to generate mediocre web apps and SVGs of penguins on bicycles is a lousy job.
> agents aren’t going away
Why not? Once the true cost of token generation is passed on to the end user and costs go up by 10 or 100 times, and once the honeymoon delusion of "oh wow I can just prompt the AI to write code" fades, there's a big question as to whether what's left is worth it. If it isn't, agents will most certainly go away, and all of this will be consigned to the "failed hype" bin along with cryptocurrency and the "metaverse".
> The thing is, agents aren’t going away.
Let’s wait until they have a business model that creates profit.
Most of them won’t go away, but many will become outdated, slow, or enshittified.
Imagine building your career on the quality of Google’s search.
The whole premise is bad. If the supervisor can do it in 2 months, then they can do it in 2 weeks with AI.
Didn't PhD projects use to be about advancing the state of the art?
Maybe we'll get back to that.
> So if Bob can do things with agents, he can do things.
The problem arises when Bob encounters a problem too complex or unique for agents to solve.
To me, it seems a bit like the difference between learning how to cook versus buying microwave dinners. Sure, a good microwave dinner can taste really good, and it will be a lot better than what a beginning cook will make. But imagine aspiring cooks just buying premade meals because "those aren't going anywhere". Over the span of years, eventually a real cook will be able to make way better meals than anything you can buy at a grocery store.
The market will always value the exact things LLMs can not do, because if an LLM can do something, there is no reason to hire a person for that.