They will never admit it, but many are scared of losing their jobs.
This threat, while not yet realized, is very real from a strictly economic perspective.
AI or not, any tool that improves productivity can lead to workforce reduction.
Consider this oversimplified example: You own a bakery. You have 10 people making 1,000 loaves of bread per month. Now, you have new semi-automatic ovens that allow you to make the same amount of bread with only 5 people.
You have a choice: fire 5 people, or produce 2,000 loaves per month. But does the city really need that many loaves?
To make matters worse, all your competitors also have the same semi-automatic ovens...
> I’m shipping in hours what used to take days. Not prototypes. Real, structured, well-architected software.
> If I don’t understand what it’s doing, it doesn’t ship. That’s non-negotiable.
Holy LinkedIn
"You can learn anything now. I mean anything." This was true before before LLMs. What's changed is how much work it is to get an "answer". If the LLM hands you that answer, you've foregone learning that you might otherwise have gotten by (painfully) working out the answer yourself. There is a trade-off: getting an answer now versus learning for the future. I recently used an LLM to translate a Linux program to Windows because I wanted the program Right Now and decided that was more important than learning those Windows APIs. But I did give up a learning opportunity.
I am running local offline small models in the old fashioned REPL style, without any agentic features. One prompt at a time.
Instead of asking for answers, I ask for specific files to read or specific command line tools with specific options. I pipe the results to a file and then load it into the CLI session. Then I turn these commands into my own scripts and documentation (in Makefile).
I forbid the model from wandering off into tons of irrelevant markdown text or generated scripts.
I ask straight questions and look for straight answers. One line at a time, one file at a time.
This gives me plenty of room to think about what I want and how to get it.
Learning what we want and what we need to do to achieve it is the precious learning experience that we don’t want to offload to the machine.
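A minimal Python sketch of the workflow described above, using only the standard library. The actual model call is deliberately left out (that part is whatever local CLI you run); the point is preparing one focused prompt at a time from one command's captured output:

```python
# Sketch of the "one prompt, one file at a time" loop: run a specific
# command, save its stdout for the session, then build a single pointed
# prompt from that context. Filenames and wording are illustrative.
import subprocess
from pathlib import Path

def capture(cmd: list[str], out_path: str) -> str:
    """Run one specific command and save its stdout to a file."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    Path(out_path).write_text(result.stdout)
    return result.stdout

def build_prompt(question: str, context_file: str) -> str:
    """One straight question, one file of context -- no wandering."""
    context = Path(context_file).read_text()
    return f"Context:\n{context}\nQuestion: {question}\nAnswer in one line."

# Example: capture some command output, then ask a pointed question.
out = capture(["python3", "-c", "print('total 3 files')"], "ctx.txt")
prompt = build_prompt("How many files are listed?", "ctx.txt")
```

Turning `capture` invocations into Makefile targets, as the comment suggests, makes each context-gathering step repeatable and documented.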
Lost me at "I’m building something right now. I won’t get into the details. You don’t give away the idea."
"But guided? The models can write better code than most developers. " <- THIS PART! I get that "senior" developers feel a certain kind of way about it but the truth is that AI really DOES write better code than most developers. I'm not saying ALL developers but AI (at least in my experience with Claude) does the "coding" part much better. They might not be ready to get it perfect yet but they're getting closer every couple of months. It won't be long now. This scares people. I prefer to embrace this AI movement. There is no stopping it no matter how much people complain about it. We all know that. What I'm realizing is that instead of spending all that time actually WRITING the code I have more time to THINK about what I want to do. It reduces the cognitive load :)
I'm glad I am no longer in tech because I just don't want to do this.
This is not a dig at AI. If I take this article at face value, AI makes people more productive, assuming they have the taste and knowledge to steer their agents properly. And that's possibly a good thing even though it might have temporary negative side effects for the economy.
>But the AI is writing the traversal logic, the hashing layers, the watcher loops,
But unfortunately that's the stuff I like doing. And also I like communing with the computer: I don't want to delegate that to an agent (of course, like many engineers I put more and more layers between me and the computer, going from assembly to C to Java to Scala, but this seems like a bigger leap).
> I enjoy writing code. Let me get that out of the way first.
> I haven’t written a boilerplate handler by hand in months. I haven’t manually scaffolded a CLI in I don’t know how long. I don’t miss any of it.
Sounds like the author is confused or trying too hard to please the audience. I feel software engineering now carries higher expectations to move faster, which makes it more difficult as a discipline.
I personally code data structures and algorithms for 1-2 hrs a day, because I enjoy it. I find it also helps keep me sharp and prevents me from building up too much cognitive debt with AI-generated code.
I find most AI generated code is over engineered and needs a thorough review before being deployed into production. I feel you still have to do some of it yourself to maintain an edge. Or at least I do at my skill level.
Right now I'm working two AI-jobs. I build agents for enterprises and I teach agent development at a university. So I'm probably too deep to see straight.
But I think the future of programming is English.
Agent frameworks are converging on a small set of core concepts: prompts, tools, RAG, agent-as-tool, agent handoff, and state/runcontext (an LLM-invisible KV store for sharing state across tools, sub-agents, and prompt templates).
These primitives, by themselves, can cover most low-UX application business use cases. And once your tooling can be one-shotted by a coding agent, you stop writing code entirely. The job becomes naming, describing, and instructing and then wiring those pieces together with something more akin to flow-chart programming.
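As a rough illustration of how small that core set is, the primitives the comment lists can fit in a few dozen lines. The names here (`Agent`, `RunContext`, `as_tool`) are made up for this sketch, not any particular framework's API, and the tool dispatch is a stub where a real framework would call an LLM:

```python
# Sketch of converging agent primitives: prompts, tools, agent-as-tool,
# agent handoff, and an LLM-invisible run-context KV store.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class RunContext:
    """Shared KV state, visible to tools but never sent to the model."""
    state: dict = field(default_factory=dict)

@dataclass
class Agent:
    name: str
    instructions: str                      # the prompt
    tools: dict[str, Callable] = field(default_factory=dict)
    handoff_to: "Agent | None" = None      # optional agent handoff

    def run(self, task: str, ctx: RunContext) -> str:
        # A real implementation would have an LLM pick the tool; this
        # stub just dispatches to a tool whose name appears in the task.
        for name, tool in self.tools.items():
            if name in task:
                return tool(task, ctx)
        if self.handoff_to is not None:
            return self.handoff_to.run(task, ctx)   # handoff
        return f"{self.name}: no tool matched"

def as_tool(agent: Agent) -> Callable:
    """Agent-as-tool: wrap a whole agent so a parent agent can call it."""
    return lambda task, ctx: agent.run(task, ctx)

# Wiring: a billing sub-agent exposed as a tool on a triage agent.
billing = Agent("billing", "Handle invoices.",
                tools={"invoice": lambda t, c: c.state.get("invoice_id", "n/a")})
triage = Agent("triage", "Route requests.", tools={"billing": as_tool(billing)})

ctx = RunContext(state={"invoice_id": "INV-42"})
result = triage.run("billing invoice lookup", ctx)  # routes via agent-as-tool
```

The wiring at the bottom is the "flow-chart programming" part: naming, describing, and connecting pieces rather than implementing their internals.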
So I think for most application development, the kind where you're solving a specific business problem, code stops being the relevant abstraction. Even Claude Code will feel too low-level for the median developer.
The next IDE looks like Google Docs.
Strangely we never hear gushing pieces on how great gcc is. If you have to advertise that much or recruit people with AI mania, perhaps your product isn't that great.
> But guided? The models can write better code than most developers. That’s the part people don’t want to sit with. When guided.
Where do you draw the line between just enough guidance and too much hand-holding of an agent? At some point, wouldn't it be better to just do it yourself and be done with the project (while also building your muscle memory, experience, and the mental model for future projects, just like tons of regular devs have done in the past)?
The issue is that you become lazy after a while and stop “leading the design”. And I think that’s OK, because most of the code is just throwaway code. You would rewrite your project/app several times before it’s worth paying attention to “proper” architecture. I wish I had these AIs 10 years ago so that I could have focused on everything I wanted to build instead of becoming a framework developer/engineer.
I've been programming for literally my entire life. I love it, it's part of me, and there hasn't been more than a week in 30 years that I haven't written some code.
This is the first time that I feel a level of anxiety when I am not actively doing it. What a crazy shift that I am still so excited and enamored by the process after all of this time.
But there's also the double edged sword. I am also having a really hard time moderating my working hours, which I naturally struggle with anyway, even more. Partly because I am having so much fun and being so productive. But also because it's just so tempting to add 1 more feature, fix one more bug.
I don't agree with the headline of "we're all AI engineers now", but I do agree that AI is more of a multiplier than anything. If you know what you're doing, you go faster, if you don't, you're just making a mess at a record pace.
I'm not sure how this sustains, though; I can't help but think this technology is going to dull a lot of people's skills, and other people just aren't going to develop skills in the first place. I have a feeling a couple of years from now this is going to be a disaster (I don't think AGI is going to happen, and I think the tools are going to get a lot more expensive once they start charging their true costs).
The only way I see out of this crisis (yes I'm not on the token-using side of this) is strict liability for companies making software products (just like in the physical world). Then it doesn't matter if the token-generator spits out code or a software engineer spits out code - the company's incentives are aligned such that if something breaks it's on them to fix it and sort out any externalities caused. This will probably mean no vibe-coded side hustles but I personally am OK with that.
>I think we all might be AI Engineers now, and I’m not sure how I feel about that.
Except the rest of the article strongly implies he feels pretty good about it, assuming you can properly supervise your agents.
So far the issue for me is that you can generate more crap by far than you can keep an eye on.
Once you have your 50k line program that does X are you really going to go in there and deeply review everything? I think you're going to end up taking more and more on trust until the point where you're hostage to the AI.
I think this is what happens to managers of course - becoming hostage to developers - but which is worse? I'm not sure.
A 2026 AI Engineer is a 1996 Software Architect. I don't need to be the one manually implementing the individual widgets of a system, I can delegate their implementation to developers (agents).
I'm being a little facetious, but I don't think it's far off the mark from what TFA is saying, and it matches my experience over the past few months. The worst architects we ever worked with were the ones who couldn't actually implement anything from scratch. Like TFA says, if you've got the fundamentals down and you want to see how far you can go with these new tools, play the role of architect for a change and let the agents fly.
I’ve always designed systems along the classic path: requirements → use cases → schematization. With AI, I continue in the same spirit (structure precedes prompting), but now the foundational layer of my systems is axioms and constraints, and the architecture emerges through structured prompts. In this shift, AI is an aide in building systems that are logically grounded. This is where the “all of us as AI engineers” claim becomes subtle. Yes, anyone can generate code, but real engineering remains about judgment and structure. AI amplifies throughput, but the bottleneck is still problem framing, abstraction choice, and trade-off reasoning.
Saw the edit: I think that clarification was important. The core point resonates with me personally. The shift isn't about writing less code, it's about where the real judgment lives. Knowing what to build, how to decompose a problem, which patterns to reach for - and critically, when the model is confidently wrong. Without that foundation you're not moving faster, you're just making bad decisions faster. The scope point resonates too. Small, well-defined tasks with verifiable output is where agents actually shine.
> I can still reverse a binary tree without an LLM. I can still reason about time complexity, debug a race condition by reading the code, trace a memory leak by thinking.
All your incantations can't protect you
> Honestly?
oh no... this is one of my "uncanny valley" AI tropes
> Building systems that supervise AI agents, training models, wiring up pipelines where the AI does the heavy lifting and I do the thinking. Honestly? I’m having more fun than ever.
I'm sure some people are having fun that way.
But I'm also sure some people don't like to play with systems that produce fuzzy outputs and break in unexpected moments, even though overall they are a net win. It's almost as if you're dealing with humans. Some people just prefer to sit in a room and think, and they now feel this is taken away from them.
The perception seems to be that AI is only causing security vulnerabilities (see: openclaw injection in npm (Clinejection)). But the article's optimistic tone largely reflects my own, and if it were all bad, then nobody would be using AI. But it's mostly good, and per the benchmarks, it's a statistical fact that it helps more than it hurts. It's just math at a certain point.
Very much on the same page as the author, I think AI is a phenomenal accelerant.
If you're going in the right direction, acceleration is very useful. It rewards those who know what they're doing, certainly. What's maybe being left out is that, over a large enough distribution, it's going to accelerate people who are accidentally going in the right direction, too.
There's a baseline value in going fast.
Maybe I'm entirely out of the loop and a complete idiot, but I am really not sure at all what people mean when they talk about this stuff. I use AI agents every day, but people who say they spend 'most of my time writing agents and tools' must be living in an absolutely different world.
I don't understand how people are making anything that has any level of usefulness without a feedback loop with them at the center. My agents often can go off for a few minutes, maybe 10, and write some feature. Half of the time they will get it wrong, I realize I prompted wrong, and I will have to re-do it myself or re-do the prompt. A quarter of the time, they have no idea what they're doing, and I realize I can fix the issue that they're writing a thousand lines for with a single line change. The final quarter of the time I need to follow up and refine their solution either manually or through additional prompting.
That's also only a small portion of my time... The rest is curating data (which you've pretty much got to do manually), writing code by hand (gasp!), working on deployments, and discussing with actual people.
Maybe this is a limitation of the models, but I don't think so. To get to the vision in my head, there needs to be a feedback loop... Or are people just willing to abdicate that vision-making to the model? If you do that, how do you know you're solving the problem you actually want to?
No we can't, because the teams are being reduced in headcount to the few lucky ones allowed to wear the AI hat.
This essay somehow sounds worse than AI slop, like ChatGPT did a line of coke before writing this out.
I use AI everyday for coding. But if someone so obviously puts this little effort into their work that they put out into the world, I don’t think I trust them to do it properly when they’re writing code.
> The problem is: you can’t justify this throughput to someone who doesn’t understand real software engineering. They see the output and think “well the AI did it.” No. The AI executed it. I designed it. I knew what to ask for, how to decompose the problem, what patterns to use, when the model was going off track, and how to correct it. That’s not prompting. That’s engineering.
That’s the “money quote,” for me. Often, I’m the one that causes the problem, because of errors in prompting. Sometimes, the AI catches it, sometimes, it goes into the ditch, and I need to call for a tow.
The big deal, is that I can considerably “up my game,” and get a lot done, alone. The velocity is kind of jaw-dropping.
I’m not [yet] at the level of the author, and tend to follow a more “synchronous” path, but I’m seeing similar results (and enjoying myself).
I vibe coded a Kubernetes cluster in 2 days for a distributed compilation setup. I've never touched half this stuff before. Now I have a proof of concept that'll change my whole organization.
That would've taken me 3 months a year ago, just to learn the syntax and evaluate competing options. Now I can get sccache working in a day, find it doesn't scale well, and replace it with recc + buildbarn. And ask the AI questions like whether we should be sharding the CAS storage.
The downside is the AI is always pushing me towards half-assed solutions that don't solve the problem, like just setting up distributed caching instead of distributed compilation. It also keeps lying, which requires me to redirect and audit its work. But I'm also learning much more than I ever could without AI.
It sounds a bit no-true-scotsman to me.
I think he is absolutely right. But what if he is not right? Then he is also absolutely right. He is just always absolutely right, right? Even when he is not right? Yes, he is always absolutely right.
I agree wholeheartedly with all that is said in this article. When guided, AI amplifies the productivity of experts immensely.
There are two problems left, though.
One is, laypersons don't understand the difference between "guided" and "vibe coded". This shouldn't matter, but it does, because in most organizations managers are laypersons who don't know anything about coding whatsoever, aren't interested by the topic at all, and think developers are interchangeable.
The other problem is, how do you develop those instincts when you're starting up, now that AI is a better junior coder than most junior coders? This is something one needs to think about hard as a society. We old farts are going to be fine, but we're eventually going to die (retire first, if we're lucky; then die).
What comes after? How do we produce experts in the age of AI?
Finally a take that I can agree with.
I would think an AI engineer is one who is, you know, engineering AI.
We might all be AI users now, though.
I find it really sad how stubbornly people dismiss AI as a slop generator. I completely agree with the author: once you spend the time building a good enough harness, oh boy, you start getting those sweet gains. It takes a lot of time and effort, but it's absolutely worth it.
what about the environmental impact of AI, especially agentic AI? I keep reading praise for AI on the orange site, but its environmental impact is rarely discussed. It seems that everyone has already adopted this technology, which is destroying our world a little more.
The phrase "shape up or ship out" is an apt one I've heard. Agentic AI is a core part of software engineering. Either you are learning and using these tools, or you're not a professional and don't belong in the field.
Not a day goes by that a fellow engineer doesn't text me a screenshot of something stupid an AI did in their codebase. But no one ever mentions the hundreds of times it quietly wrote code that is better than most engineers can write.
The catch about the "guided" piece is that it requires an already-good engineer. I work with engineers around the world and the skill level varies a lot - AI has not been able to bridge the gap. I am generalizing, but I can see how AI can 10x the work of the typical engineer working in Startups in California. Even your comment about curiosity highlights this. It's the beginning of an even more K-shaped engineering workforce.
Even people who were previously not great engineers, if they are curious and always enjoyed the learning part - they are now supercharged to learn new ways of building, and they are able to try it out, learn from their mistakes at an accelerated pace.
Unfortunately, this group, the curious ones, IMHO is a minority.