People in the future are going to wonder what the hell we were thinking, when 30 years down the line everything is a hot mess of billions of lines of LLM-generated code that almost no human has read and that no one can maintain anymore, with or without LLMs. And the LLM-generated garbage will have drowned out all the good-quality code that ever existed, so no one will even be able to find human-written code on the internet anymore.
Makes me want to just give up programming forever and never use a computer again.
30 years down the line a human will wake up in his climate controlled bed in an idyllic large scale people-zoo, think about what information he wants, and immediately his 900TB ferroelectric compute-in-memory exobrain will read his thoughts via his brain-computer-interface, and render a custom 3d visualization of that information floating in front of him. There will be no separate code stage, just neural rendering of data to pixels.
First, most software is already a hot mess.
Second, LLM code can be less of a hot mess than human written code if you put in the time to train/prompt/verify/review.
Generating perfect well patterned SOLID and unit tested code with no warnings or anti-patterns has never been easier.
By then, the fix will be easy. Fire up the latest LLM, point it at your codebase and tell it "rewrite this from scratch. do it well. fix the architecture mistakes"
I can't get used to vibe-coded projects on Github. One that I was using for a little while is about a year old, with 40,000 commits and 15,000 PRs. And it has "lite" in its name; it's supposed to be the simple alternative. There were so many bugs. I fixed one, submitted a PR, but it was off the first page in hours. It will never be merged. I moved to a different project with a bit less... velocity, and it has been way smoother.
I'm generally pro "llm assisted coding" or whatever you want to call it. But I do sometimes think about the Butlerian Jihad from Dune.
If 30 years down the line I still have to look at code, maintain code, or even worry in the slightest about code, something went deeply wrong.
Why are we pretending everyone's code is an etalon of quality? Most software out there is probably a hot mess already. No think behind it, let alone ultrathink.
> is no longer possible for anyone to maintain neither with nor without LLMs.
That's what the Tech-Priests are for.
Hello from assembly programmers to present-day JavaScript folks. Joke aside, I sometimes think about how VS Code is written in layers and layers of code (~200MB of minified code); Java-based IDEs were even worse, at almost 1GB of code (libs/dependencies). And VS Code beat the native editors of its time (Sublime) to dominate now, maybe because of the business model (open and free vs. freemium). But it does the job quite well IMO. And it enabled swarms of startups to go to market, including billion-dollar wrappers: Cursor, Antigravity, and almost all UI coding agents. I remember backend developers (the Java/C++ type) looking down on JavaScript developers as if we were from an inferior planet or something.
How many of us remember that VSCode is actually a browser wrapped inside a native frame?
People, as a rule, don't really "go backwards." We didn't walk back the industrial revolution, and we're probably not going to walk back this day-and-age's activities either. It's only unsettling until the changes are accepted. The old timers can pine for a time before "all this," when they were children and all their needs were met by their now-deceased parents, and the cycle can continue on, yet again.
Have you ever encountered the very common real life situation where there's some software that works, and you have a binary for it but you either don't have the source code or it doesn't compile for whatever reason? This is the pre-LLM world. Now, do you think LLMs make this situation better or worse? You may not know what's wrong with your software or how to fix it, but unlike in the past you can throw compute at trying to figure it out, or replicating a subset of it, or even replicating all of it depending on what it is. I think LLMs are making this situation better not worse.
Why does it matter, as long as it accomplishes the task?
There is nothing in the post to support the statement. An interesting personal confession, but it does not establish that vibe coding and agentic engineering are converging as a general phenomenon.
As a piece of meat, I look forward to charging $10,000 an hour to fix the code that came out of vibe coding.
If that is the case, market forces will likely favor hand-written code, and all the slop will be forgotten (unless the slop works fine and is stable).
Have you seen Windows? We already have thirty years of slop.
> People in the future are going to wonder what the hell we were thinking, when 30 years down the line everything is a hot mess of billions of lines of code generated by LLMs that no human has read
--
It's just as likely that people will be surprised that we used to have billions of lines of human generated code, that no LLM ever approved.
By then AI will be good enough to clean it all up... I don't get these doom scenarios. They always assume we're going to be stuck with today's LLMs and nothing new is coming.
> Makes me want to just give up programming forever and never use a computer again.
LLMs aren’t the first thing to come along and change how people develop applications.
You had the rise of frameworks like Django, Rails, etc. Also the rise of SPAs. And also the rise of JS as a frontend+backend language.
In 3-5 years we'll have adapted to the new norm, like we have in the past.
Have you ever worked on a legacy codebase with actual good code? I struggle to see the difference between your predicted future and today's reality when it comes to working with legacy disasters.
I think it's a mistake to assume we will blindly go in this direction for many years and then suddenly, collectively wake up and realize what we have done. It's a great filter and a great opportunity.
If LLMs stop improving at the pace of the last few years (I believe they are already slowing down), they will still manage to crank out billions of lines of code which they themselves won't be able to grep and reason through, leading to a drop in quality and lost revenue for the companies that choose to go all-in with LLMs.
But let’s be realistic - modern LLMs are still a great and useful tool when used properly so they will stay. Our goal will be to keep them on track and reduce the negative impact of hallucinations.
As a result, the software industry will move away from large, complex, interconnected systems that have millions of features but only a few actively used, toward small, high-quality, targeted tools, because their work will be easier to verify and their side effects easier to control.