As a /senior/ developer I really dislike blanket statements. I've seen the same number of failures caused by
> “Do we really need that?”
> “What happens if we don’t do this?”
> “Can we make do for now? Maybe come back to this later when it becomes more important?”
as with experimenters. Every system is different, every product is different. If I were building firmware for a CT scanner, my approach to trying out new things would be different from the one I'd take for a CRUD SaaS with 100 clients in a field that could benefit from a fresh perspective.
There are definitely ways for eager/very open seniors to drive systems into corners that are hard to get out of. But then there are people who claim PHP5 is all you need.
Most proofs of concept I've seen gain traction have turned into production.
A rewrite?
I recall a few times when everyone promised that if this gets promoted, we'll rewrite it from zero. It never happened.
The article touches on responsibility and accountability. There is none for the risk taker. By definition. You have a crazy idea, you rush it out, you hope clients bite. You profit. It's not even your problem how to make it work, scale, or not cost more to run than we sell it for.
The loop on the right. There are companies, two of them are very popular these days, they took it to an extreme. You ship something fast, and since it only scales linearly you go raise money. Successful companies, countless users, some of them even pay. Who's to blame? The senior developer, or simply someone reasonable who asks, how's that sustainable, what's the way out of this? Those are fired, so whoever's left is a believer.
What I've found is that my willingness to communicate and share my expertise is usually not in demand with more junior developers. In general, I find developers uninterested in finding a mentor. They don't look at your LinkedIn profile, they don't look at you as a possible source of knowledge and expertise.
So it's not like I have nothing to share after 30 years of experience in the industry, I just have nobody to share it with.
A really competent senior figures out what the prevailing culture of the company is now, and what it will need to be in 5 years, and adapts as they go. Startups with 5 people maybe don't need extra complexity costing runway. A 500-person business may need that complexity because now there are second-order effects that need to be mitigated for every business decision. It's not a black-and-white "always avoid complexity"; it's "add complexity when it makes sense", and even that question has a lot of nuance, because sometimes the business just needs to survive for another couple of months.
Complexity, if it can be reduced to a single measurable dimension, is only one of several factors in a solution space.
There are other properties such as maintainability, scalability, reliability, resilience, anti-fragility, extensibility, versatility, durability, and composability. Not all apply.
Being able to talk about tradeoffs in terms of solution spaces, not just along a single dimension, is what I consider one of the differentiators between a senior and a staff+ developer.
I tripped over the double entendre of the teaser quote and then found it ironic that the author is a copywriter.
>> “AI agents are the future of software development. We won’t need developers anymore to slow down the progress of a business.”
> And so, to me, a copywriter, what’s happening here is that the same message is meaning two different things to two different audiences.
I couldn't tell whether to parse this as "We will be faster without those slow developers", or more cynically as "We don't need developers to slow us down; we can now be slow with AI agents". I suspect that with creeping complexity the latter reading will hold up better for large projects.
I do a bit of both... I pay attention to new tools, libraries and languages, but will rarely recommend them initially. That said, I also tend to fight complexity to an extreme degree; KISS/YAGNI are my top enterprise development keystones.
> The avoider, the reducer, the recycler.
As this kind of person, it can be alienating in some teams / companies.
What I've found works best is to convey how the added complexity will affect non-engineers. You have to understand the incentives and trade-offs, though, and sometimes it's better to take the loss.
If you have the fortune of sticking around with the same leaders for a while, a few rounds of being vocal, but compromising, will work in your favor. When that complexity comes back around to bite them in the way you described, you will earn some trust.
In my experience the solution proposed will rarely result in a less complex solution. Quick MVPs have the tendency to stick around. As soon as a customer starts using some product or feature, the cost of pivoting goes up. If you wish to experiment, do it on a segment.
I may be missing something, but the "left" and "right" loops strike me as slightly different words for the same exact thing.
The company provides (offer | service) to the (market | user) and receives (feedback | payment).
The service IS the offer, the userbase IS the market, and payment IS the feedback signal.
Right?
EDIT - expanded on original comment to add:
The author's point might be lost on me but seems to be that framing things with one of those sets of labels vs the other may correspond to use of "complexity" vs "uncertainty" as the element targeted for reduction, and choosing those labels carefully in turn correlates to "senior" devs' persuasiveness in prioritization battles with product owners. To which my response would be, "maybe?". (shrug)
I'm not a copywriter by trade but I care about words and may have just been nerd-sniped.
Even with AI, there is a clear difference between juniors and seniors.
None of the things I can think of have anything to do with avoiding problems.
To some degree, having 5+ agents working on different projects is similar to leading a team of 5+ people. The skills translate well.
The senior is also able to understand what the agents do, review and challenge it. Juniors often can't.
And finally, the senior has a deeper understanding of what the business and problem domain are, and can therefore guide the AI more effectively towards building the right thing.
Interesting article. I appreciate the range of perspectives here, and the overall pitch to keep the most experienced in frame alongside new-fangled advancements (AI).
The "speed" loop reminds me a lot of RAD. In fact, AI might be _the_ thing that helps us deliver on RAD's promises from decades ago.
https://www.geeksforgeeks.org/software-engineering/software-...
I found that the proposers of features "want everything" because they don't know what is critical - they're therefore totally unwilling to accept anything other than "the full monty". So as a senior developer you cannot propose any faster route.
As you might imagine, a lot of these ideas fell by the wayside but we had to develop them in full.
One could say that in order to be a senior developer in any area, more-than-good communication skills are required.
This is engagement bait. I almost fell for it.
The polarization of speed vs. scale concerns on a team is interesting.
Maps to what we believe on our team: functional vs. non-functional. AI ships functional features fast, but developers are more important than ever in making sure the non-functional aspects are taken care of.
Speed… speed… velocity… speed. All I hear about these days. Every meeting.
Honest question: does high velocity / first-mover advantage ever really pay off these days?
I don't feel like shipping the first AI slop to market has actually paid off for anyone. Am I wrong? Am I missing something? Am I out of touch?
The way I see it, first movers do a lot of the work of proving the idea works, and everyone else swoops in with a better product, or at least a cheaper one.
Beyond that, let's take the company I work for, for example. We have an ingrained and actually relatively happy customer base on a subscription model. I feel like the only thing increased velocity can do is rapidly ruin their experience.
> I don't like the kind of senior developer that says "I found this new tool and it’s pretty cool ..."
Remember that the first half of this statement, the part listed here, is great. I love playing with new tools.
The only bad part is the implicit bit after the dots: "we should use this in our product." You don't want cool things anywhere near your product, unless the cool thing is that they remove complexity.
I think that if this becomes an actual problem, there will be such a massive incentive to add AI to the scale/compression/risk avoidance side that there will be automated tools specialized in that kind of work.
I feel like this is shooting from the hip, from a single point of view, at some semi-large corpo.
I agree with the author's premise - that one feedback loop optimizes for speed, and the other for scale - but I don't think the market is bearing out the conclusion - that AI should be utilized to enable more rapid experimentation, where we scale up what works.
Many vendors seem to be learning (or not learning, but just throwing their weight against it anyway) that adding hastily generated AI features is causing customer dissatisfaction, as more people brand the features "slop".
In the best case, the users give the company more chances. Infinitely more chances.
In a worse case, the users assume the new feature will always be bad, given their first impression. It's hard for a vendor to make people reconsider a first impression.
The absolute worst case is that AI enables a new market, but the first attempts are so poor that the first movers make people write that market off as a dead end, leading to a lost opportunity.
I feel this is about as accurate and relevant as if I were to write an article on senior copywriters.
> Your thoughts, senior software developer?
The senior should also start using AI to increase the amount of work done to stabilise the system, in a careful manner. More benchmarks, better testing, better safety net when delivering software, automated security reviews, better instrumentation, and so on.
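A concrete sketch of one such safety net: a tiny latency-budget regression test that an agent could draft and CI could enforce. This is purely illustrative; the function names and the 50ms budget are invented, not taken from the article.

```python
import time


def percentile(values, pct):
    """Return the pct-th percentile of a list of numbers (nearest-rank method)."""
    ordered = sorted(values)
    index = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[index]


def benchmark(fn, runs=50):
    """Time fn over several runs and return the p95 latency in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return percentile(samples, 95)


def test_hot_path_latency_budget():
    # Hypothetical guardrail: fail CI if the hot path regresses past a budget.
    # The workload below stands in for whatever the real hot path is.
    p95 = benchmark(lambda: sum(i * i for i in range(10_000)))
    assert p95 < 0.05, f"p95 latency {p95:.4f}s exceeds the 50ms budget"
```

The point isn't the specific numbers; it's that these checks are cheap to generate, cheap to run, and turn "the senior has a bad feeling about this" into an automated gate.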
> And this is how AI affects the two loops
There should be another image illustrating that the amount of mitigations done from senior side, red-/blue-team style.
This is well-put, but the problem comes when you’ve got leadership looking at what appears to be a fully-functioning version of the product that the market is clearly indicating to them is sufficient to drive revenue. Budgeting the 6 weeks or whatever to translate from “the working version” to “the trustworthy version” is a hard pitch.
This is why part of a senior developer’s job is designing and developing the fast version in a way that, if it goes into production, won’t burn the building down. This is the subtle art of development: recognizing where the line is for “good enough” to ship fast without jeopardizing the long-term health of the company. This is also the part that AI is absolutely atrocious at - vibe code is fast, that’s the pitch, but it’s also basically disposable (or it’s not fast - I see all you “exhaustive spec/comprehensive tests/continuous iteration” types, and I see your timelines, too). If you can convince the org that’s the tradeoff, great, but I had a hell of a time doing it back when code was moving at human speed, and now you just strapped rockets onto the shitty part of the system and are trying to convince leadership that rocket-speed is too fast.
Wow I'm only done with part one and the author pegged me to a t.
I can/have done this without AI and it tends to be disastrous. Management declares we need X fast. Okay, we can build that really fast, but it won't scale. Management says fine, just build it. We do. Management now wants to build Y fast. But wait, what about X? Never mind, just build Y now. Okay, we're building Y, and X collapses... because it wasn't built to scale. Now we're being called in at 2 am to fix X while also expected to ship Y tomorrow. Sure, they'll glow you up and tell everyone what a hero you were for coming to the rescue at 2 am, but on that six-month performance review, the blowup is used as a reason to withhold raises and promotions. They don't lose any sleep, of course; just you, the developer.
Probably because, unlike in apprenticeships, a senior developer isn't an owner. This creates a situation where imparting knowledge means you have less time for your own packed work stack.
You can't force people to feel what you feel. One can (pr|t)each without experiencing; the other can only mimic or rebel. That's how a cult is formed.
Irrespective of the linked post, let me say why I (being sort-of-a senior developer) fail to communicate my expertise. In no particular order:
1. I am discouraged or forbidden from devoting time to communicating my expertise; they would rather use it. Well, often, they'd rather I did the grunt work to facilitate the use of my expertise.
2. Same, but devoting time to preparing materials which communicate my expertise.
3. A lot of my expertise is a bunch of hunches and intuitions, a "sense of smell" for things. And that's difficult to communicate.
4. My junior colleagues don't get time off their other duties to listen to "expertise sharing", when it does not immediately promote the project they're working on.
5. Many of my junior colleagues lack enough fundamentals (IMNSHO) for me to share all sorts of expertise with them. That is, to share B with them I would need to first teach them A, and knowing A is not much of an expertise; but they're inexperienced, maybe fresh out of university.
6. My expertise may only be partially or very-partially relevant to many of my colleagues; but I can't just divide the expertise up.
7. For good reasons or bad, I have trouble separating my expertise from various ethical/world-view principles, which fundamentally disagree with the way things are done where I'm at. So, such sharing is to some extent a subversive diatribe against the status quo.
8. My expertise on some matters is very partial - and what I know just underlines for me how much I _don't_ know. So, I am apprehensive to talk about what I feel I actually don't know enough about - which may just result in my appearing presumptuous and not knowledgeable enough.
9. My expertise on some matters is very partial - and what I know just underlines for me how much I _don't_ know. So, I try to polish and complete my expertise before sharing it - and that's a path you can walk endlessly, never reaching a point where you feel ready to share.
10. Tried sharing some expertise in the past, few people attended the session, I got demotivated.
11. Tried sharing some expertise in the past, few people were engaged enough to follow what I was saying, I got demotivated.
12. Shared some expertise in the past, got positive feedback, but then those people who seemed to appreciate what I said did not implement/apply any of it, even though they could have and really should have.
This is copy. I'm only interested in content.
In 2026 the answer is "job security"
The unspoken observation on the reason why this happens: it is almost always political, people in the organization making themselves more valuable and harder to fire / lay off.
That includes gate-keeping behaviour such as not handing off knowledge, sham performance reviews to prevent ambitious juniors from overtaking them (even with AI), and being over-critical of others but absent and contrarian when the same is done to them.
That leverage does not work anymore in the age of AI, as keeping "expensive" seniors begging for a pay rise can cost the company an extra amount of $$$. So it is tempting to lay them off for a yes-person who will accept less.
In the age of AI, I would now expect such experience to include both building and working at a startup instead of being difficult to work with for the sake of a performance review.
FTA: “AI agents are the future of software development. We won’t need developers anymore to slow down the progress of a business.”
Almost all business presidents, CEOs, and owners are thinking this. I guarantee you they are sick and tired of developers taking forever on every project. Now they can create the apps themselves.
My comment isn't meant to debate every nitty-gritty detail about code quality, security, stability, thinking of every aspect of how the code works, does it scale, etc. All of those things are extremely important. However, most leadership never cared about any of that anyways. They only heard those as excuses why developers took so long. Over the last decade they put up with it begrudgingly.
You know all the developers who wanted to complain about IT, cybersecurity, DevOps, and cloud architects for getting in their way, and who claimed that if they only had administrator access they could get everything done themselves because they are experts in networking and everything else? Well, those developers are about to have the worst day ever when every single person on the planet can generate code and will be "experts" in everything as well.
> Ah, well, it can’t yet do the one thing senior developers still do. Take responsibility.
If only higher-ups would recognize that. Instead we see mass layoffs left and right, restructurings, and clueless higher-ups who clearly drank not just a bottle of Kool-Aid but a barrel.
> The ‘Speed’ version allows the rest of the business to continue learning from the market, as the senior developers build a trailing version of the system that’s well-reviewed and understandable.
Yeah... that doesn't fly. The beancounters don't care. The "speed" version works, so why even invest a single cent into the "scale" version? That's all potential profit that can be distributed to shareholders. And when it (inevitably) all crashes down, the higher ups all have long since cashed out, leaving the remaining shareholders as bagholders, the employees without employment and society to pick up the tab. Yet again.
I don't necessarily disagree with this conclusion, but the way it is written has a lot of AI prose smell that was extremely distracting for me.
Because the most important parts of the expertise are coming from their internal "world model" and are inseparable from it.
An average unaware person believes that anything can be put in words and once the words are said, they mean to reader what the sayer meant, and the only difficulty could come from not knowing the words or mistaking ambiguities. The request to take a dev and "communicate" their expertise to another is based on this belief. And because this belief is wrong, the attempt to communicate expertise never fully succeeds.
Factual knowledge can be transferred via words well; that's why there is always at least partial success at communicating expertise. But the solidified, interconnected world model of what all your knowledge adds up to cannot. AI can blow you out of the water at knowing more facts, but it doesn't yet use them in a way that yields, surprisingly often, correct insights into what the missing knowledge probably is. That mysterious ability to be right more often comes out of the "world model"; that is what "expertise" is. That part cannot be communicated; one can only help others acquire the same expertise.
Communicating expertise is a hint about where to go and what to learn; the reader still needs to put in the effort to internalize it, and they need the right project that provides the opportunity to learn what needs to be learnt. It is not an act of transfer.