Personally, I prefer vibe coding in the sense of stitching things together at the function-to-method level.
Unlike people who take the extreme position that vibe coders are useless, I do think LLMs often write individual functions or methods better than I do. But in a way, that does not fundamentally change the nature of the work. Even before LLMs, many functions and methods were effectively assembled from libraries, Stack Overflow snippets, documentation examples, and copied patterns.
The real limitation comes from the nature of transformer-based LLMs and their context windows. Agentic coding has a ceiling. Once the codebase reaches a scale where the agent can no longer hold the relevant structure in context, you need a programmer again.
At that point, software engineering becomes necessary: knowing how to split things according to cohesion and coupling, using patterns to constrain degrees of freedom, and designing boundaries that keep the system understandable.
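As a minimal sketch of what "designing boundaries" can mean in practice (all names here are hypothetical, not from the original discussion), a narrow interface keeps the two sides decoupled, so neither an agent nor a human can quietly reach across it:

```python
from abc import ABC, abstractmethod

# A narrow boundary: callers depend on this interface, not on any
# concrete storage implementation, so the two sides stay decoupled.
class OrderStore(ABC):
    @abstractmethod
    def save(self, order_id: str, total: float) -> None: ...

    @abstractmethod
    def total_for(self, order_id: str) -> float: ...

# One cohesive implementation behind the boundary. Swapping it out
# (e.g. for a database-backed store) cannot ripple into callers.
class InMemoryOrderStore(OrderStore):
    def __init__(self) -> None:
        self._orders: dict[str, float] = {}

    def save(self, order_id: str, total: float) -> None:
        self._orders[order_id] = total

    def total_for(self, order_id: str) -> float:
        return self._orders[order_id]

def checkout(store: OrderStore, order_id: str, total: float) -> float:
    # Business logic is written against the interface only.
    store.save(order_id, total)
    return store.total_for(order_id)
```

The point is the constraint itself: by limiting `checkout` to the `OrderStore` surface, you remove degrees of freedom that would otherwise let the code grow tangled.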
In my experience, agentic coding is useful for building skeletons. But if you let the agent write everything by itself, the codebase tends to degrade. The human role is to divide the work into task units that the agent can handle well.
Eventually, a person is still needed.
If you make an agent do everything, it tends to create god objects, or it strangely glues things together even when the structure could have been separated with a simpler pattern. Thinking about it now, this may be exactly why I was drawn to books like EIB: they teach how to constrain freedom in software design so the system does not collapse under its own flexibility.
Yes, that is all true. LLMs are excellent at providing a single function, but decision-makers extrapolated that capability and concluded that LLMs can work on their own with minimal or no supervision. That's not going to be realistic for a very long time.
Yes, but I don't think having LLMs only write functions while doing the architecture yourself qualifies as "vibe coding"; that's rather "AI-assisted engineering" (which is what I do).
Vibe coding, to me, means having an LLM, with or without agents, do everything after an initial vague prompt. Which is why "anyone" can vibe code (because anyone can write general hand-waving imprecise instructions). This inevitably results in pointless demos and/or unmaintainable monsters.
It's not necessarily better, but it's certainly good enough if you're already used to distributing work to different people.
The scale of the code doesn't really matter that much, as long as a programmer can point it at the right places.
I think you actually want to be really involved in the skeleton, since from what I've seen the agent is quite bad at making skeletons that it can do a good job extending.
If you get the base right, though, the agent can make precise changes in large codebases.
How long before they raise the amount of context these models can hold?
Or is there a ceiling that we can't get past?
We've all got agents at work now, and still the engineers haven't equalized.
I've found that the LLM limitation on codebase size is removed with correct design of the codebase.
If you organize your product into a collection of appropriately scoped libraries (each library is the right size for the LLM to comprehend the whole thing), then the project size is not limited by the LLM's comprehension.
Your task management has to match: the organization of your ticketing system has to parallel the codebase.
With this the LLM can think at different scales at different times.
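For instance, a hypothetical layout (names invented for illustration) where the ticketing system mirrors the library boundaries, so each task scopes the model to one comprehensible unit:

```
product/
  libs/
    billing/         # each lib small enough to read whole
    inventory/
    notifications/
  app/               # thin composition layer over the libs

ticketing queues (parallel structure):
  BILLING-*  -> libs/billing
  INV-*      -> libs/inventory
  NOTIF-*    -> libs/notifications
  APP-*      -> app (cross-library integration only)
```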
I agree. Language models are good at codegen; in some sense they are just another codegen tool, except instead of transforming a structured language (like a config file or markdown) into code, they can convert natural language into code. Genuinely useful for the repetitive boilerplate grunt work. If that's all you do, then I can see fearing getting replaced. Thankfully, by handling the drudgery, they free us up to work on more complex and cutting-edge work.
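To make the "just another codegen tool" comparison concrete, here is a toy sketch (entirely illustrative, nothing here is from the thread) of the structured-input kind of codegen an LLM generalizes: a rigid spec goes in, boilerplate comes out; the LLM just accepts natural language instead of a schema.

```python
# Toy classic codegen: structured spec in, boilerplate source out.
spec = {"name": "User", "fields": [("id", "int"), ("email", "str")]}

def generate_dataclass(spec: dict) -> str:
    # Emit a Python dataclass definition from the spec.
    lines = [
        "from dataclasses import dataclass",
        "",
        "@dataclass",
        f"class {spec['name']}:",
    ]
    for field_name, field_type in spec["fields"]:
        lines.append(f"    {field_name}: {field_type}")
    return "\n".join(lines)

print(generate_dataclass(spec))
```

The transformation is mechanical and deterministic; an LLM performing the same role trades that determinism for a vastly looser input format.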
Like, it's not surprising that the developers who frequently talk about 90%+ of their work being delegated to LLMs are web developers. That is a field with very little innovative or complex code; it's mostly just grunt work translating knowledge of style rules and markup to code, or managing CRUD. I'm really thankful I can have a language model do that drudgery for me.
But compare that to, e.g., writing a multithreaded multiplayer networking service in Rust; there they fall woefully short at generating code for me. They can be used in auxiliary aspects, like search or debugging, but the code they produce without substantial steering is not usable. It's often faster for me to write the code myself, because what's required is not a substantial amount of low-impact code, but a small amount of complex, high-impact code that needs to satisfy many invariants. That is fast to type; the majority of the work is elsewhere. At the end of the day, they work really well to replace typing the boilerplate, which is much appreciated.
The models are improving. The software that harnesses them is also improving. It wasn't that long ago that the models were quite bad at a lot of the tasks that they are excelling at today. I do agree there's probably a ceiling to what we can get out of these, but I also don't think we have quite hit that point yet.