Had the cost of building custom software dropped 90%, we would be seeing a flurry of low-cost, decent-quality SaaS offerings all over the marketplace, possibly undercutting some established players.
From where I sit, right now, this does not seem to be the case.
It's as if writing down the code were never the biggest problem, or the biggest time sink, in building software.
As I said in a previous post:
I think the 90/90 rule comes into play. We all know Tom Cargill’s quote (even if we’ve never seen it attributed):
The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.
It feels like a gigantic win when it carves through that first 90%… like, “wow, I’m almost done and I just started!” And it *is* a genuine win! But for me it’s dramatically less useful after that. The things that trip up experienced developers really trip up LLMs, and sometimes trying to break the task down into teeny weeny pieces and cajole it into doing the thing is worse than not having it.
So great with the backhoe tasks but mediocre-to-counterproductive with the shovel tasks. I have a feeling a lot of the impressiveness depends on which kind of tasks take up most of your dev time.
If your job is pumping out low-effort websites that are essentially marketing tools for small businesses, it must feel like magic. I think the more magical it feels for your use case, the less likely your use case will be earning you a living 2 years from now.
> I'm sure every organisation has hundreds if not thousands of Excel sheets tracking important business processes that would be far better off as a SaaS app.
Far better off for who? People constantly dismiss spreadsheets, but in many cases, they are more powerful, more easily used by the people who have the domain knowledge required to properly implement calculations or workflow, and are more or less universally accessible.
These kinds of future-prediction posts keep coming, and I'm tired of them. Reality is always more boring, less extreme, and slower to change, because there are too many factors involved, and the authors never account for everything.
Maybe we should collect all of these predictions, then go back in 5-10 years and see if anyone was actually right.
I love the hand drawn chart. Apparently "Open Source" was invented around 2005, which significantly reduced development cost, then AWS was invented in 2011 or so and made development even cheaper, but then, oh no, in 2018 "complexity" happened and development became harder!
It's not just about "building" ... who is going to maintain all this new sub-par code pushed to production every day?
Who is going to patch all the bugs, edge cases, and security vulnerabilities?
Good write-up. I don't disagree with any of his points, but does anybody here have practical suggestions on how to move forward and think about one's career? I've been a frontend developer (with a little full stack) for a few years now, and much of the modern landscape concerns me, specifically with how I should be positioning myself.
I hear vague suggestions like "get better at the business domain" and other things like that. I'm not discounting any of that, but what does this actually mean or look like in your day-to-day life? I'm working at a mid-sized company right now. I use Cursor and some other tools, but I can't help but wonder if I'm still falling behind or doing something wrong.
Does anybody have any thoughts or suggestions on this? The landscape and horizon just seem so foggy to me right now.
I contracted briefly on a post-LLM-boom Excel modernization project (which ended up being mainly consulting, because I had to spend all my time explaining key considerations for a long-running software project that would fit their domain).
The company had already tried to push 2 poor data analysts who kind of knew Python into the role of vibe coding a Python desktop application that they would then distribute to users. In the best-case scenario, these people would have vibe coded an application where the state was held in the UI, with no concept of architectural separation and no prospect of understanding what the code was doing a couple of months from inception (except through the lens of AI sycophancy), all packaged as a desktop application which would generate Excel spreadsheets that they would then send to each other via email (for some reason, this is what they wanted - probably because it is what they know).
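To make "state held in the UI" concrete, here's a minimal hypothetical sketch (all names invented) of the separation that was missing:

```python
# Hypothetical sketch. The vibe-coded version did roughly this,
# with business state living only inside widget callbacks:
#
#   def on_calculate_clicked():
#       rate = float(rate_entry.get())  # state lives in the widget
#       result_label.config(text=str(rate * float(amount_entry.get())))
#
# A minimally maintainable version keeps domain state in plain objects
# that can be tested without any UI at all:

from dataclasses import dataclass

@dataclass
class Quote:
    amount: float
    rate: float

    def total(self) -> float:
        return self.amount * self.rate

def test_quote_total():
    assert Quote(amount=100.0, rate=1.5).total() == 150.0
```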
You can't blame the business for this, because there are no technical people in these orgs. They were very smart people in this case, doing high-end consultancy work themselves, but they are not technical. If I tried to do vibe chemistry, I'm sure it would be equally disastrous.
The only thing vibe coding unlocks for these orgs on their own is running headfirst into an application which does horrendous things with customer data. It doesn't free up time for me as the experienced dev to bring the cost down, because again, there is so much work needed to bring these orgs to the point where they can actually run and own an internal piece of software that I'm not doing much coding anyway.
> I've had Claude Code write an entire unit/integration test suite in a few hours (300+ tests) for a fairly complex internal tool. This would take me, or many developers I know and respect, days to write by hand.
I'm not sure about this. The tests I've gotten out in a few hours are the kind I'd approve if another dev sent them, but they haven't really ended up finding meaningful issues.
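For what it's worth, the failure mode I keep seeing looks something like this (hypothetical names, pytest style): a test that reads fine in review but can only fail if the mock itself is broken:

```python
from unittest.mock import Mock

def test_sends_invoice():
    # Passes review at a glance, but only asserts that the mock
    # returns what the mock was configured to return; none of the
    # actual invoicing logic is exercised.
    mailer = Mock()
    mailer.send.return_value = True
    assert mailer.send("invoice-42") is True
```

Hundreds of tests like this inflate the count without pinning down any real behavior.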
This article mentions the cost to ship, but ignores that the largest cost of any software project isn't getting to market - it's maintenance and the addition of new features. How is agentic coding doing there? I've only seen huge, unmaintainable messes so far.
I strongly agree. It may be even more than 90%. For example, yesterday I was able to use Lovable (and Claude Code web) on my phone to build out an almost 1:1 replacement (for my purposes) for an expensive subscription-based workout app: https://strengthquest.lovable.app/
This is simply an unimaginable level of productivity: in one day, on my phone, I can essentially build and replace expensive software. Unreal days we are living in.
Man, that's a big title. I can't wait to see the data on how the cost has dropped so far.
> AI Agents however in my mind massively reduce...
Nevermind. It's a vibe 90%.
I think the cost of prototyping has definitely gone down.
Developing production-grade software that you want people to rely on and pay for has not gotten that much cheaper. The "weak" link is still the human.
Debugging complex production issues needs intimate knowledge of the code. Not gonna happen in the next 3-4 years at least.
> written an entire unit/integration test suite in a few hours
It’s often hard to gauge how “good” blog writers are, but tidbits like this make it easy to disregard the author’s opinions. I’ve worked in many codebases where the test writers share the author’s sentiment. They are awful, and the tests are at best useless and often harmful.
Getting to this point in your career without understanding how to write effective tests is a major red flag.
Did I miss something or is there actually no evidence provided that costs have dropped?
I am a believer in the new agentic coding tools (I wasn't 6 months ago), but the delays and the time it takes to build something haven't really changed, even though everyone I know is using them. What I see is what has always been there:
Product doesn't understand the product, because if it were easy to understand then someone else would have solved the problem already and we wouldn't have jobs. This means you need to iterate and discuss and figure it out, just like always. The iterations can be bolder, bigger, and maybe a bit faster, but ultimately software development scales non-linearly, so a 10x improvement in -individual- capability doesn't translate to a 10x improvement in -organizational- capability.
Let me put it another way. If your problem was so simple you could write a 200 word prompt to fully articulate it then you probably don't have much of a moat and aren't providing enough value to be competitive.
The author teaches AI workshops. Nothing wrong with that, but I think it should be disclosed here. A lot of money is riding on LLMs being financially successful which explains a lot of the hype.
Where are the billions of dollars spent on GPUs and new data centers accounted for in this estimation?
I think the whole software industry has tried to obscure the fact that most companies who hire software engineers are writing exactly the same code as every other company. How many people here have written the same goddamn webapp at the last 3 companies they've been to? Anyone ever wonder why nobody just publishes blueprints to software and licenses that blueprint to a single engineer to customize? Because there's a lot less money in doing that, versus selling a lot more software add-ons/SaaS/etc.
There is no value-add to hiring software engineers to build basic apps. That's what AI will be good for: repeating what has already been written and published to the web somewhere. The commoditized software that we shouldn't have been paying to write to begin with.
But AI won't help you with all the rest of the cost. The maintenance, which is 80% of your cost anyway. The infrastructure, monitoring, logging, metrics, VCS hosting, security scanning, QA, product manager, designer, data scientist, sales/marketing, customer support. All of that is part of the cost of building and running the software. The software engineers who churn out the initial app are a smaller cost than it seems. And we're still gonna need skilled engineers to use the AI, because AI is an idiot savant.
Personally I think 50% cost reduction in human engineers is the best you can expect. That's not nothing, but that's probably like a 10% savings on total revenue expenditure.
I must be holding it wrong then, because I do use Claude Code all the time and I do think it's quite impressive… still, I can't see where the productivity gains go, nor am I even sure they exist (they might, I just can't tell for sure!)
I keep seeing articles like these pop up. I am in the industry, but not in the “AI” industry. What I have no concept of is this: is the current subsidized, VC-funded pricing anywhere close to what the final product will cost? I always fall back to the Uber paradox. Yes, it was great at first; now it's 3x what it cost then and has only given cabs pricing power. This was good for consumers to start, but now it's just another part of the K-shaped economy. So is that ultimately where AI goes? The top percent can afford a high monthly subscription, and the not-so-fortunate get their free 5 minutes per month.
This wouldn't be the first time that the cost of software radically dropped. It happened for the first time in the mid-1960s, when IBM introduced the System/360, which included backward compatibility for the 1401. Prior to that point, the maximum lifespan of software was tied to that of the computer in question. The software would be rewritten for the next architecture every time a new computer was purchased.
The advent of the PC, and the appearance of Turbo Pascal, Visual Basic, and spreadsheets that could be automated made it possible for almost anyone to write useful applications.
If it gets cheaper to write code, we'll just find more uses for it.
If the cost of building software dropped so much - where is that software?..
Was there an explosion of useful features in any software product you use? A jump in quality? Anything tangible an end user can see?..
Copying GPL code, with a global search-and-replace of the brand names, has always lowered the cost of software 'development' dramatically.
I'm really liking it for writing boring code.
As an example, I wanted a plugin for Visual Studio. In the past I would have spent hours on it or just not bothered, but I used Claude Code to write it. It isn't beautiful or interesting code, and it lacks tests, but it works and saves me time. It isn't worth anything, won't ever be deployed into production, and I'll likely share it but won't try to monetise it. It is boring, ugly code, but more than good enough for its purpose.
Writing little utility apps has never been simpler, and these are probably 90% cheaper.
Can someone help me out with how to get started in this kind of coding setup?
I haven't written production code for the last 8 years, but I have about 17 years of prior development experience (ranging from C++, full stack, .NET, PHP, and a bunch of other stuff).
I've used AI at a personal level and know the basics. I've used Claude/GitHub to help me fix and write some pieces of code in languages I wasn't familiar with. But it seems like people are talking about and deploying large real-world projects in short-"er" amounts of time. An old colleague of mine whom I trust mentioned his startup is developing code 3x faster than we used to develop software.
Is there a resource that explains the current best practices (presumably it's all new)? Where do I even start?
It's fascinating to read these comments - I believe everyone. Some are getting huge productivity gains and others very little - so perhaps we are not in the same business. I've ranged over various kinds of work - all called software development - and the variety was considerable: some of it I wouldn't call challenging, but it still needed a lot of manual labor - perhaps this is the type of work that finds easy wins from AI automation. Other work was much more challenging, but I never really attempted to use AI there because it was forbidden by policy. I've used AI at home for fun projects, and it has helped me with languages I've never used before, but I've never come close to a 90% productivity boost. Anyway, fascinating!
I think AI can be a really powerful tool. I am more productive with it than without it, but a lot of my time interacting with AI is spent reviewing its code, finding problems with it (I always find some), telling it what to do differently multiple times, and eventually giving up and fixing the code by hand. But it has definitely reduced the average time it takes me to implement features. I also worry that not everyone will be responsible and check/fix AI-generated code.
pretty decent article - but what it misses is that most of these agents are trained on bad code - which is open source.
so what does this mean in practice? for people working on proprietary systems (where cost will never go down), the code is not on github - maybe hosted on an internal VCS, bitbucket etc. the agents were never trained on that code. yeah, they might help with docs (but are they using the latest docs?)
for others, the agents spit out bad code, make assumptions that don't hold, and call APIs that don't exist or have been deprecated.
for each of those you need an experienced builder who has 1. technical know-how and 2. domain expertise. so has the cost of experienced builder(s) gone down? I don't think so - I think it has gone up.
what people are vibecoding out there is mostly tools/apps that deal in closed systems (never really interacting with the outside world), scripts where ai can infer from what was done before, etc. but are these people building anything new?
I have also noticed there's a huge conflation of cost and complexity. zirp drove people to build software on very complex abstractions, e.g. kubernetes, nextjs, microservices - hence people thought they needed huge armies of people. however, we also know the inverse is true: most software can be built by teams of 1-3 people. we have countless proof of this.
so people think the way to reduce cost is to use ai agents, instead of addressing the problem head-on by building software in a simpler manner. will ai help? yeah, but not to the extent of what is being sold or written daily.
The only cost that's dropped by 90% is writing unoriginal blog posts
Then why is all my software slower, buggier, and with a worse UX?
> Software engineering has got - in my opinion, often needlessly - complicated, with people rushing to very labour intensive patterns such as TDD, microservices, super complex React frontends and Kubernetes.
TDD as defined by Kent Beck (https://tidyfirst.substack.com/p/canon-tdd) doesn't belong in that list. Beck's TDD is a way to order work you'd do anyway: slice the requirement, automate checks to confirm behavior and catch regressions, and refactor to keep the code healthy. It's not a bloated workflow, and it generalizes well to practices like property-based testing and design-by-contract.
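As a concrete illustration (my toy example, not Beck's), one iteration of that loop looks like this: pick the next sliced behavior, write a failing test for it, write just enough code to pass, then refactor:

```python
# 1. Pick one behavior off the sliced list and write a test for it.
#    Run it and watch it fail before any implementation exists.
def test_slug_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

# 2. Write just enough code to make the test pass.
def slugify(text: str) -> str:
    return text.lower().strip().replace(" ", "-")

# 3. Refactor while the test stays green, then slice off the next
#    behavior (punctuation, repeated spaces, ...) and repeat.
```

The point is the ordering, not the ceremony: the test list is the sliced requirement, and the refactor step is work you'd owe the codebase anyway.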
> I've had Claude Code write an entire unit/integration test suite in a few hours (300+ tests) for a fairly complex internal tool. This would take me, or many developers I know and respect, days to write by hand.
I should have stopped reading here. People who think that the time it takes to write some code is the only metric that matters are only marginally better than people who rank employees by lines of code.
Betteridge's law proven correct once again. The answer to the headline is: no. Perhaps it will be true in the future, nobody knows.
I'm skeptical of the extent to which people publishing articles like this use AI to build non-trivial software, and by non-trivial I mean _imperfect_ codebases that have existed for a few years: battle tested, with scars from hotfixes to deal with fires, compromises to handle weird edge cases/workarounds, and especially codebases that many developers have contributed to over time.
Just this morning I was using Gemini 3 Pro to work on some trivial feature. I asked it how to go about solving an issue and it completely hallucinated a solution, suggesting a non-existent function that was supposedly exposed by a library. This has been the norm in my experience for years now and, while it has improved over time, it's still a very, very common occurrence. If it can't handle these use cases to an acceptable degree of success, I just don't see how I can trust it to take the reins and do it all with an agentic approach.
And this is just the pure usability perspective. If we consider the economic aspect, none of the AI services are profitable; they are all heavily subsidized by investor cash. Is that sustainable long term? Today it seems as if there is an infinite amount of cash, but my bet is that this will give out before the cost of building software drops by 90%.
> This takes a fairly large mindset shift, but the hard work is the conceptual thinking, not the typing.
But the hard work always was the conceptual thinking? At least at and beyond the Senior level, for me it was always the thinking that's the hard work, not converting the thoughts into code.
> Jevons Paradox says that when something becomes cheaper to produce, we don't just do the same amount for less money. Take electric lighting for example; while sales of candles and gas lamps fell, overall far more artificial light was generated.
Can’t wait to debug all that stuff.
Software development is much more than writing code. Writing code may have become 90% easier, but a lot of the other development tasks haven't appreciably changed due to AI, although that might come. So, for now at least, the answer to the question posed in the headline is no.
An exception might be building something that is well specified in advance, maybe because it's a direct copy of existing software.
The cost of writing software dropped, but the complexity ballooned, so we're at the point of needing AI assistants to write it all for us.
It has for me. I'm probably paying less than 10%, saving on SaaS fees, occasional contract fees for custom integrations, and the Zapier fees linking them together.
I've no idea what's going on in the enterprise space, but in the small 1-10 employee space, absolutely
It depends. For AI to work on large projects (I did a post on this forever ago, in AI terms: https://getstream.io/blog/cursor-ai-large-projects/), you need a staff-level engineer to guide it, plus great standardization and testing best practices. And yes, in that situation you can go 10-50x faster. Many teams/products are not in that environment, though.
Alternate take: The cost of building software will remain the same but software will need to be 10x as feature-rich to remain competitive.
If you can build it in a weekend so can I. So you're going to have to figure out bigger things to build.
How would we design a rigorous study that measures total cost of ownership when teams integrate AI assistance, including later maintenance and defect rates, rather than just initial output speed?
No, it did not. Thanks for asking.
Let’s say you’re right. Do we still want to, though? I mean. At some point we will no longer have the skill to babysit the AI agent.
> One objection I hear a lot is that LLMs are only good at greenfield projects. I'd push back hard on this. I've spent plenty of time trying to understand 3-year-old+ codebases where everyone who wrote it has left.
Where I am, 3 years old is greenfield, and old and large is 20 years old with 8 million lines of nasty C++. I’ll have to wait a bit more, I think…
The cost of building the first version of FB has dropped 90%. The cost of building the next FB stays the same.
More sophisticated tools mean more refined products.
If an easier and cheaper method for working carbon fiber becomes broadly available, it won't mean you get less money; it means you'll now be cramming carbon fiber in the silverware, in the shoes, in baby strollers, EVERYWHERE. The cost of a carbon fiber bike will drop 90%, but teams will be doing a LOT more.
You could say the cost per line of code has dropped 90%, but the number of lines of code written will 100x.
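Worth spelling out the arithmetic in that claim (my toy numbers, following the comment above):

```python
# If cost per line drops 90% but line volume grows 100x,
# total spend still grows 10x - the Jevons point.
relative_cost_per_line = 1 / 10   # 90% cheaper per line
relative_lines_written = 100      # 100x the lines
print(relative_cost_per_line * relative_lines_written)  # 10.0
```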
The cost of writing simple code has dropped 90%.
If you can reduce a problem to the point where it can be solved by simple code, you can get the rest of the solution very quickly.
Reducing a problem to a point where it can be solved with simple code takes a lot of skill and experience and is generally still quite a time-consuming process.
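A hypothetical illustration of that reduction (names invented): the skill is in isolating the messy part so that the remaining rule is trivial:

```python
# The hard, human part is deciding that every messy upstream format
# (CSV exports, pasted emails, API payloads) gets normalized into one
# plain record first...
def normalize(raw: dict) -> dict:
    return {
        "sku": str(raw.get("sku") or raw.get("SKU", "")).strip(),
        "qty": int(raw.get("qty") or raw.get("quantity") or 0),
    }

# ...so that the remaining business rule is simple code that an LLM
# (or anyone) can now produce cheaply.
def needs_reorder(record: dict, threshold: int = 10) -> bool:
    return record["qty"] < threshold

assert needs_reorder(normalize({"SKU": " A1 ", "quantity": "3"}))
```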