> And there was something else: most early startups need to pivot, changing direction as they learn more about what the market wants and what is technically possible. By lowering the costs of pivoting, it was much easier to explore the possibilities without being locked in or even explore multiple startups at once: you just tell the AI what you want.
In my experience so far, AI prototyping has been a powerful force for breaking analysis paralysis.
In the last 10 years of my career, slow execution at different companies wasn't due to slow code writing. It was due to management excess: trying to drive consensus and de-risk ideas before developers were even allowed to write the code. Let's circle back and drive consensus in a weekly meeting with the stakeholders to get alignment on the KPIs for the design doc that goes through the approval and sign-off process first.
Developers would then read the ream and realize that perfection was expected of their output, too, so development processes grew long and careful to avoid accidents. I landed on a couple of teams where even small changes required meetings to discuss them, multiple rounds of review, and a lot of grandstanding before we were allowed to proceed.
Then AI comes along and makes it cheap to prototype something. If it breaks or it's the wrong thing, nobody feels like they're in trouble because we all agree it was a prototype and the AI wrote it. We can cycle through prototypes faster because it's happening outside of this messy human reputation-review-grandstanding loop that has become the norm.
Instead of months of meetings, we can have an LLM generate a UI and a backend with fake data and say, "This is what I want to build, and this is what it will do." It's a hundred times more efficient than trying to describe it to a dozen people in 1-hour timeslots in between all of their other meetings for 12 weeks in a row.
The dark side of this same coin is when teams try to rely on the AI to write the real code, too, and then blame the AI when something goes wrong. You have to draw a very clear line between AI-driven prototypes and production code that developers own. I think this article misses the mark on that by framing everything as a decision to DIY or delegate to AI. The real AI-assisted successes I see have developers driving with AI as an assistant on the side, not the other way around. I could see how an MBA class could come to believe that AI is going to do developers' jobs for them, though, as it's easy to look at these rapid LLM prototypes and think that production-ready code is just a few prompts away.
When thinking about automation, people overindex on their current class biases. For 20 years we heard that robots were going to take over the “burger flipper” jobs. Why was it so easy to think that robots could replace fast-food workers? Because they were the lowest rung on the career ladder, so it felt natural that they would be the first ones to get replaced.
Similarly, it’s easy to think that the lowly peons in the engineering world are going to get replaced and we’ll all be doing the job of directors and CEOs in the future, but that doesn’t really make sense to me.
Being able to whip your army of AI employees 3% better than your competitor doesn’t (usually) give any lasting advantage.
What does give an advantage is deep, specialized knowledge, relationships and trust with users and customers, and a good sense of design/UX.
Like maybe that’s some of the job of a manager/director/CEO, but not anyone that I’ve worked with.
I thought I had a great startup idea. It was niche, but with a solid global market. It was unique. There was a genuine pain point being solved. My MVP solved it. The pricing worked, the tiers were sound.
At least, ChatGPT, Gemini, and Claude told me it was. I did so many rounds of each one evaluating the others, trying to poke holes, etc. Reviewing the idea and the "research", the reasoning. Plugging the gaps.
Then I started talking to real people about their problems in this space to see if this was one of them. Nope, not really. It kinda was, but not often enough to pay for a dedicated service, and not enough of a pain to move on from free workarounds.
Beware of AI reviewing AI. Always talk to real people to validate.
The "management as superpower" framing assumes people thoughtfully evaluate AI output. In practice, most users either review everything (slow, defeats the speed benefit) or review almost nothing (fast, but you're trusting the AI entirely). The MBAs who did well probably had the domain expertise to spot wrong answers quickly; that's the actual superpower, not generic "management skill."
Is there any hope of turning this around so I can still do fun work and AI can take over the management roles instead?
> I think many people have the skills they need, or can learn them, in order to work with AI agents - they are management 101 skills.
I like his thinking, but many professional managers are not good at management. So I'm not sure about the assumption that "many people" can easily pick this up.
5 years ago: ML-auto-complete → You had to learn coding in depth
Last year: AI-generated suggestions → You had to be an expert to ask the right questions
Now: AI-generated code → You should learn how to be a PM
Future: AI-generated companies → You must learn how to be a CEO
Meta-future: AI-generated conglomerates → ?
Recently I realized that instead of just learning technical skills, I need to learn management skills. Specifically: project management, time management, writing specifications, setting expectations, writing tests, and, in general, handling and orchestrating an entire workflow. And I think this will only shift to higher levels of the management hierarchy in the future. For example, eventually we will have AI models that can one-shot an entire platform like Twitter. Then the question is less about how to handle a database and more about how to handle several AI-generated companies!
While we're at the project manager level now, in the future we'll be at the CEO level. It's an interesting thing to think about.
It’s hard to take this author seriously given there’s no way they reviewed the work their students did.
> I find it interesting to watch as some of the most well-known software developers at the major AI labs note how their jobs are changing from mostly programming to mostly management of AI agents.
"AI labs"
Can we stop with this misleading language? They're doing product development. It's not a "laboratory" doing scientific research; there's no attempt at the scientific method. It's a software firm, and these are software developers/project managers.
Which brings me to my second point: these guys are selling AI tooling. Obviously there's a huge desire to dogfood the tooling. Plus, by joining the company, you are buying into the hype and the vision. It would be more surprising if they weren't using their own tools the whole time. If you can't even sell to yourself...
The limiting factor at work isn't writing code anymore. It's deciding what to build and catching when things go sideways.
We've been running agent workflows for a while now. The pattern that works: treat agents like junior team members. Clear scope, explicit success criteria, checkpoints to review output. The skills that matter are the same ones that make someone a good manager of people.
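Concretely, the brief we hand an agent looks something like the sketch below. This is a minimal illustration in Python with made-up names, not any particular framework's API: an explicit scope, a list of success criteria, and a human checkpoint before anything is accepted.

```python
# Rough sketch of the delegation pattern: explicit scope, explicit success
# criteria, and a human checkpoint before anything is accepted.
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass, field


@dataclass
class AgentTask:
    scope: str                   # what the agent may touch, and nothing else
    success_criteria: list[str]  # how the output will be judged
    checkpoints: list[str] = field(default_factory=list)  # where a human reviews


def checkpoint_review(task: AgentTask, output: str) -> bool:
    """The human checkpoint: a person (not another model) walks the output
    against each stated criterion and explicitly accepts or rejects it."""
    print(f"Scope: {task.scope}")
    print(f"Output under review:\n{output}\n")
    return all(
        input(f"Does it satisfy '{criterion}'? [y/N] ").strip().lower() == "y"
        for criterion in task.success_criteria
    )


if __name__ == "__main__":
    task = AgentTask(
        scope="Add pagination to the /orders endpoint only",
        success_criteria=["existing tests still pass", "page size is capped at 100"],
        checkpoints=["after the schema change", "before merge"],
    )
    # The agent's output would come from whatever runner you use;
    # a placeholder string stands in for it here.
    accepted = checkpoint_review(task, "diff --git a/orders.py b/orders.py ...")
    print("accepted" if accepted else "sent back with feedback")
```

The specifics don't matter much; the point is that scope and acceptance criteria are written down before the agent runs, and a person signs off at the checkpoints.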
pglevy is right that many managers aren't good at this. But that's always been true. The difference now is that the feedback loop is faster. Bad delegation to an agent fails in minutes, not weeks. You learn quickly whether your instructions were clear.
The uncomfortable part: if your value was being the person who could grind through tedious work, that's no longer a moat. Orchestration and judgment are what's left.