There are certainly some valid criticisms of vibe coding. That said, it’s not like the quality of most code was amazing before AI came along. In fact, most code is generally pretty terrible and took far too long for teams to ship.
Many folks would say that if shipping faster allows for faster iterations across an idea then the silly errors are worth it. I’ve certainly seen a sharp increase in execs calling BS on dev teams saying they need months to develop some basic thing.
> if shipping faster allows for faster iterations across an idea then the silly errors are worth it.
There is more to the consideration here...
Maintenance costs compound over time, as does complexity.
If you're building something that doesn't need to last long or is really simple, sure it's worth it... But a lot of stuff does need to last a long time and continuously change.
Vibe coding is a trade off that only works up to a certain distance from the existing system. If you don't need to go far, it's great. But if you need long range then it's the wrong tool for the job at the moment.
I think you need a balance. I’ve seen products fall apart due to high error rate.
I like to think of intentionalists—people who want to understand systems—and vibe coders—people who just want things to work on screen expediently.
I think success requires a balance of both. The current problem I see with AI is that it accelerates the vibe part more than the intentionalist part and throws the system out of balance.
More important than code quality is a joint understanding of the business problem and the technical solution for it. Today, that understanding is spread across multiple parties (eng, pm, etc).
Code quality can be poor as long as someone understands the tradeoffs for why it's poor.
And you think people who don't understand the software telling the people who do that they're doing it wrong is an outright positive?
> I’ve certainly seen a sharp increase on execs calling BS on dev teams saying they need months to develop some basic thing.
Some of the teams I worked with in the years right before AI coding went mainstream had become really terrible about this. They would spend months forming committees, writing documents, getting sign-offs and approvals, creating Gantt charts, and having recurring meetings for the simplest requests.
Before I left, they were 3 months deep into meetings about setting up role-based access control on a simple internal CRUD app with a couple thousand users. We needed about 2-3 roles. They were building pros-and-cons lists for every library and solution they found, with one of the front runners involving a lot of custom development for some reason.
Yet the entire problem could have been solved with 3 Boolean columns in the database for the 3 different roles. Any developer could have done it in an afternoon, but they were stuck in a mindset of making a big production out of the process.
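For concreteness, the "3 Boolean columns" approach described above could be sketched like this (a minimal illustration using SQLite; the table, column, and role names are hypothetical, since the comment doesn't specify them):

```python
import sqlite3

# One flag column per role -- the whole "RBAC system" for a 2-3 role app.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id        INTEGER PRIMARY KEY,
        name      TEXT NOT NULL UNIQUE,
        is_admin  INTEGER NOT NULL DEFAULT 0,  -- SQLite stores booleans as 0/1
        is_editor INTEGER NOT NULL DEFAULT 0,
        is_viewer INTEGER NOT NULL DEFAULT 1
    )
""")
conn.execute("INSERT INTO users (name, is_admin) VALUES ('alice', 1)")
conn.execute("INSERT INTO users (name, is_editor) VALUES ('bob', 1)")

def can_edit(user_name: str) -> bool:
    # Permission check is a single WHERE clause: admins and editors may edit.
    row = conn.execute(
        "SELECT is_admin OR is_editor FROM users WHERE name = ?",
        (user_name,),
    ).fetchone()
    return bool(row and row[0])
```

That is the entire afternoon's work: a schema change plus a one-query permission check.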
I feel like LLMs are good at getting those easy solutions done. If the company really only needs a simple change, having an LLM break free from the molasses of devs who complicate everything is a breath of fresh air.
On the other hand, if the company had an actual complicated need with numerous and changing roles over time, the simple Boolean column approach would have been a bad idea. Having people who know when to use each solution is the real key.
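To make the contrast concrete: if roles really are numerous and change over time, the usual alternative is a many-to-many join table, so adding a role is a data change rather than a schema migration. A minimal sketch (again with hypothetical names, not anything from the original comment):

```python
import sqlite3

# Roles live in data, not in the schema: adding a role is an INSERT,
# not an ALTER TABLE and a code change.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL UNIQUE);
    CREATE TABLE roles (id INTEGER PRIMARY KEY, name TEXT NOT NULL UNIQUE);
    CREATE TABLE user_roles (
        user_id INTEGER NOT NULL REFERENCES users(id),
        role_id INTEGER NOT NULL REFERENCES roles(id),
        PRIMARY KEY (user_id, role_id)
    );
    INSERT INTO users (id, name) VALUES (1, 'alice');
    INSERT INTO roles (id, name) VALUES (1, 'admin'), (2, 'auditor');
    INSERT INTO user_roles (user_id, role_id) VALUES (1, 1);
""")

def has_role(user_id: int, role: str) -> bool:
    # Membership check joins through the link table.
    row = conn.execute(
        """
        SELECT 1
        FROM user_roles ur
        JOIN roles r ON r.id = ur.role_id
        WHERE ur.user_id = ? AND r.name = ?
        """,
        (user_id, role),
    ).fetchone()
    return row is not None
```

The point stands either way: neither design is hard, and the judgment call about which one fits is exactly the part an LLM won't make for you.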
This attitude just furthers our race to the bottom. I agree with iteration, but software quality is getting really laughable. I know we're still on the better side of what existed in the hands of consumers in the 90s, but anyway... Execs calling BS is further evidence of that race to the bottom.
When a team says a "trivial" feature will take months to ship, it's not because of the complexity of the algorithm. It's because of the infrastructure and coordination work required for the feature to work properly. It is almost always a failure of the technical infrastructure previously built at the company. An AI will solve the trivial aspects of the problem, not the real problem.