At this point, it's worth asking whether lots of relatively straightforward verbose code is actually significantly worse than the least code necessary for the problem. Obviously, architecture matters. What might matter less is verbosity.
The reason we aimed for minimal "accidental complexity" up to now was directly related to the cost/pain of changing and maintaining that code. Hasn't the economics of maintenance and change shifted so much that accidental complexity isn't actually all that expensive/painful?
I think a bit of refactoring, renaming and restructuring has been helpful for maintainability but recently I've been a little less inclined to worry about the easy readability of function bodies and fine implementation details. It still feels wrong but I can't justify the effort anymore.
I've been in a community that makes a lot of cognitive training software. There are some core open source projects that were created without LLMs, but new projects are now mostly created by young people vibe-coding from scratch, or forking and modifying the existing projects with an LLM.
The answer to your question is really obvious. The high-effort, manually coded projects stick around, and the low-effort vibe-coded projects are quickly forgotten. In the end, LLM-driven programming is always going to bring you to a dead end. There are certain things where I can predict they're going to fail, because they involve kinds of complexity that LLMs can't, and will never be able to, deal with. The code gets so bad that even if an expert programmer wanted to make changes, it either wouldn't be possible or wouldn't be worth it. A lot of the time the vibe-coders are so high off the low-effort sense of empowerment that they don't even realize what they made is completely broken.
Well written software has staying power because it can be understood and built upon. Understanding a problem deeply enough to devise an elegant solution even leads to new possibilities and ideas that will never be conceived with a more superficial understanding.
> it's worth asking whether lots of relatively straightforward verbose code is actually significantly worse than the least code necessary for the problem.
The question is wrong because reality isn't binary. "We've" never aimed for minimal, except maybe in the very early days or in some real edge cases. If you're writing the minimal code, you're either writing something very compact/simple[0], or you're wasting too much time and not balancing things.
If you're rewriting everything then you're wasting too much time and introducing too much complexity[1].
You can't write good code by slapping together a bunch of libraries but that doesn't mean you shouldn't use libraries either.
[0] "simple" is an overloaded term. If you're upset by me saying "simple", I'm using the other definition
[1] sed -i [0] "s/simple/complex/g"
A problem I’ve found is that when you’re adding functionality or refactoring it often leaves unused methods or types behind, at least with multiple devs working on the same codebase.
This unused code gets further modified as time goes on: new functionality is wired in, or it gets further refactored. Usually it’ll still have tests that cover it. It gives the impression of being live code, but it’s not: it’s zombified.
So you get situations where it gets wired up to something, and then that something doesn't work, and you wonder why, so you start digging about and discover it has been wired into a path that is never executed.
The fog of relatively recent changes sometimes makes it hard to figure out if the code should be unused or if someone just forgot to hook it in as part of a bigger piece of work. Then you find nobody else is really sure either.
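A minimal sketch of what this zombification looks like in practice (all the names here are hypothetical, invented for illustration): a function that has been refactored and is fully covered by tests, yet is only ever called from a branch that an upstream change made unreachable.

```python
# Hypothetical sketch of "zombie" code: apply_discount looks live --
# it is refactored and unit-tested -- but the branch that calls it
# can never execute in the real flow.

def apply_discount(price, code):
    """Refactored twice, fully tested... and never actually reached."""
    if code == "SUMMER10":
        return round(price * 0.9, 2)
    return price

def checkout(price, discount_code=None):
    # A later upstream change (by another dev) strips discount codes
    # before they reach checkout, so this branch is dead in practice.
    discount_code = None  # stand-in for that upstream behaviour
    if discount_code is not None:
        return apply_discount(price, discount_code)
    return price

# The unit test passes, giving the impression of live code:
assert apply_discount(100.0, "SUMMER10") == 90.0
# ...while the real execution path never uses it:
assert checkout(100.0, "SUMMER10") == 100.0
```

The tests keep passing, so nothing flags the function as dead, which is exactly what makes it hard to tell apart from code someone simply forgot to hook up.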
So that extra complexity comes at a cost. It can slow you down, trip you up, or catch you by surprise.
I don't think people are talking about the least code possible, just code that isn't incredibly verbose and inefficient, like what you get by default from LLMs.
For example, I have a game I've been working on for a few years. I do stuff like "implement this simple pseudo-physics system to make the bot follow the character like so... etc."

After some planning and back and forth, it returns mostly working code that's a little odd on some edge cases.

But as I've hand-coded this thing for years, I could easily look at it and laugh my ass off: it had multiple classes and around 1k lines of code, all kinds of crazy non-performant crap.

The exact thing I needed, I reprogrammed in around 5 lines of very simple code that did exactly what I wanted, with no edge-case weirdness.
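The handful of lines described above could plausibly look something like this, assuming a lerp-style follow where the bot moves a fixed fraction of the distance toward the character each frame (the names and the specific behaviour are my guesses, not the commenter's actual code):

```python
# Hypothetical sketch of a minimal "bot follows character" behaviour:
# each frame, move the bot a fixed fraction of the way toward the
# character (exponential-smoothing / lerp follow).

def follow_step(bot_x, bot_y, target_x, target_y, smoothing=0.1):
    """Move (bot_x, bot_y) a fraction of the distance toward the target."""
    return (bot_x + (target_x - bot_x) * smoothing,
            bot_y + (target_y - bot_y) * smoothing)

# One step from (0, 0) toward (10, 0) covers 10% of the distance:
bx, by = follow_step(0.0, 0.0, 10.0, 0.0)
```

Called once per frame, this converges smoothly on the target with no special-casing, which is the kind of tiny, obvious solution the comment contrasts with the LLM's multi-class version.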
Now the vibe coders actually ship that shit. I like to read vibe-coded games now and again, and there is no possible way those guys are ever shipping a real game: every single decision is verbose, with the worst performance choices repeated over and over everywhere.
Sure it can get you some cute little toy projects, but it will absolutely fall apart if you are trying to make real games.
Don't know about saas apps or whatever. Maybe that stuff doesn't matter at all.
> Hasn't the economics of maintenance and change shifted so much that accidental complexity isn't actually all that expensive/painful?
I sincerely believe that extensive accidental complexity will ALSO be bad for AI agents. Their quality will diminish as their context windows get filled up with endless amounts of spaghetti and accidental complexity. I feel like we won't fully start feeling those effects for another year or so.
> Hasn't the economics of maintenance and change shifted so much that accidental complexity isn't actually all that expensive/painful?
Not while context windows cause decay and larger bills.
The AI's maximum cognitive load C is larger than a human's, but if the codebase grows unbounded, the minimum context needed for a change will eventually surpass C.
It is also a bad idea to let your codebase become readable only by a machine when we are still in the dark about the roles machines and people will take in the future. What if you have to go back to manual development in a now-gargantuan codebase?