Hacker News

Breaking the spell of vibe coding

120 points by arjunbanker last Friday at 7:22 PM | 95 comments

Comments

daxfohl yesterday at 10:02 PM

I think it all boils down to one question: which is the higher risk, using AI too much or using AI too little?

Right now I see the former as hugely risky: hallucinated bugs, being coaxed into dead-end architectures, security concerns, not knowing the code when a bug shows up in production, less sense of ownership, less hands-on learning, and so on. This is true both at the personal level and at the business level.

With the latter, you may be less productive than optimal, but might the hands-on training and fundamental understanding of the codebase make up for it in the long run?

Additionally, I personally find my best ideas often happen when I'm knee-deep in some codebase, hitting some weird edge case that doesn't fit, the kind that would probably never come up if I were just reviewing an already-completed PR.

jackfranklyn yesterday at 10:30 PM

The bit about "we have automated coding, but not software engineering" matches my experience. LLMs are good at writing individual functions but terrible at deciding which functions should exist.

My project has a C++ matching engine, Node.js orchestration, Python for ML inference, and a JS frontend. No LLM suggested that architecture - it came from hitting real bottlenecks. The LLMs helped write a lot of the implementation once I knew what shape it needed to be.

Where I've found AI most dangerous is the "dark flow" the article describes. I caught myself approving a generated function that looked correct but had a subtle fallback to rate-matching instead of explicit code mapping. Two different tax codes both had an effective rate of 0, so the rate-match picked the wrong one every time. That kind of domain bug won't get caught by an LLM because it doesn't understand your data model.
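To make that concrete, here's a simplified sketch of the shape of the bug (hypothetical names and values, not our actual code):

    // TypeScript. Two distinct tax codes happen to share an effective rate of 0.
    interface TaxCode {
      code: string;
      rate: number;
    }

    const TAX_CODES: TaxCode[] = [
      { code: "EXEMPT", rate: 0 },
      { code: "ZERO_RATED", rate: 0 }, // different meaning, same effective rate
    ];

    // The generated fallback matched by rate instead of mapping codes explicitly.
    // find() returns the FIRST match, so any two codes sharing a rate silently
    // collapse into one, and "ZERO_RATED" can never be resolved.
    function resolveTaxCode(effectiveRate: number): TaxCode | undefined {
      return TAX_CODES.find((tc) => tc.rate === effectiveRate);
    }

    resolveTaxCode(0); // always "EXEMPT", even when "ZERO_RATED" was intended

It looked perfectly reasonable in review; only the domain knowledge that those two codes are not interchangeable exposes it.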

Architecture decisions and domain knowledge are still entirely on you. The typing is faster though.

Kerrick yesterday at 9:47 PM

> However, it is important to ask if you want to stop investing in your own skills because of a speculative prediction made by an AI researcher or tech CEO.

I don't think these are exclusive. Almost a year ago, I wrote a blog post about this [0]. I spent the time since then both learning better software design and learning to vibe code. I've worked through Domain-Driven Design Distilled, Domain-Driven Design, Implementing Domain-Driven Design, Design Patterns, The Art of Agile Software Development, 2nd Edition, Clean Architecture, Smalltalk Best Practice Patterns, and Tidy First?. I'm a far better software engineer than I was in 2024. I've also vibe coded [1] a whole lot of software [2], some good and some bad [3].

You can choose to grow in both areas.

[0]: https://kerrick.blog/articles/2025/kerricks-wager/

[1]: As defined in Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond by Gene Kim and Steve Yegge, wherein you still take responsibility for the code you deliver.

[2]: https://news.ycombinator.com/item?id=46702093

[3]: https://news.ycombinator.com/item?id=46719500

vibe101 today at 12:33 AM

I’ve learned the hard way that in coding, every line matters. While learning Go for a new job, I realised I had been struggling because I overused LLMs and that slowed my learning. Every line we write reflects a sense of 'taste' and needs to be fully controlled and understood. You need a solid mental model of how the code is evolving. Tech CEOs and 'AI researchers' lack the practical experience to understand this, and we should stop listening to them about how software is actually built.

theYipster yesterday at 9:50 PM

Just because you’re a good programmer / software engineer doesn’t mean you’re a good architect, or a good UI designer, or a good product manager. Yet in my experience, using LLMs to successfully produce software really works those architect, designer, and manager muscles, and thus requires them to be strong.

altcunn yesterday at 10:00 PM

The point about vibe coding eroding fundamentals resonates. I've noticed that when I lean too heavily on LLM-generated code, I stop thinking about edge cases and error handling — the model optimizes for the happy path and so do I. The real skill shift isn't coding vs not coding, it's learning to be a better reviewer and architect of code you didn't write yourself.
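A contrived illustration of the gap I mean (hypothetical endpoint and types, just to show the shape):

    // The happy-path version an LLM tends to produce, and I tend to approve:
    async function getUser(id: string) {
      const res = await fetch(`/api/users/${id}`);
      return res.json(); // no status check, no timeout, no shape validation
    }

    // What the same function needs once the unhappy paths are considered:
    interface User {
      id: string;
      name: string;
    }

    async function getUserChecked(id: string): Promise<User> {
      const res = await fetch(`/api/users/${id}`, {
        // don't hang forever (AbortSignal.timeout: Node 17.3+ / modern browsers)
        signal: AbortSignal.timeout(5000),
      });
      if (!res.ok) throw new Error(`getUser failed: HTTP ${res.status}`);
      const body: unknown = await res.json();
      if (typeof (body as User)?.id !== "string") {
        throw new Error("unexpected response shape");
      }
      return body as User;
    }

Nothing exotic, but it's exactly the part that quietly disappears when both the model and the reviewer are optimizing for the happy path.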

abcde666777 yesterday at 11:06 PM

It's astonishing to me that real software developers have considered it a good idea to generate code... and not even look at the code.

I would have thought sanity-checking the output would be the most elementary next step.

tjr yesterday at 10:13 PM

I see AI coding as something like project management. You could delegate all of the tasks to an LLM, or you could assign some to yourself.

If you keep some for yourself, there’s a possibility that you might not churn out as much code as quickly as someone delegating all programming to AI. But maybe shipping 45,000 lines a day instead of 50,000 isn’t that bad.

atleastoptimal yesterday at 11:17 PM

I think most of the issues with "vibe coding" come from trusting the current level of LLMs with too much, as writing a hacky demo of a specific piece of functionality is 1/10 as difficult as making a fully-fledged, dependable, scalable version of it.

Back in 2020, GPT-3 could write functional HTML from a text description, but it's only around now that AI can one-shot functional websites. Likewise, AI can one-shot a functional demo of a SaaS product, but models are far from being able to one-shot the entire engineering effort of a company like Slack.

However, I don't see why the rate of improvement won't continue as it has. The current generation of LLMs hasn't even been trained on Nvidia's latest Blackwell chips yet.

I do agree that vibe coding is like gambling, but that is beside the point: AI coding models are getting smarter at a rate that is not slowing down. Many people believe they will hit a sigmoid plateau somewhere before reaching human intelligence, but there is no reason to believe that besides wishful thinking.

claudeomusic yesterday at 11:15 PM

I think a big part of what's lost in this discussion is that a lot of people are trying to copy/paste how we've been developing software over the past twenty years into this new world, and that simply doesn't work.

The differences are subtle but those of us who are fully bought in (like myself) are working and thinking in a new way to develop effectively with LLMs. Is it perfect? Of course not - but is it dramatically more efficient than the previous era? 1000%. Some of the things I’ve done in the past month I really didn’t think were possible. I was skeptical but I think a new era is upon us and everyone should be hustling to adapt.

My favorite analogy at the moment is that for a while now we've been bowling and responsible for knocking down the pins ourselves. In this new world we are no longer the bowlers; rather, we are the builders of the bumper rails that keep the new bowlers out of the gutter.

strawhatguy yesterday at 11:02 PM

Speaking just for myself, AI has allowed me to start doing projects that seemed daunting at first, as it automates much of the tedious act of actually typing code from the keyboard, and keeps me at a higher level.

But yes, I usually constrain my plans to one function, or one feature. Too much and it goes haywire.

I think a side benefit is that I think more about the problem itself, rather than the mechanisms of coding.

samename yesterday at 10:55 PM

The addiction aspect of this is real. I was skeptical at first, but this past week I built three apps and had trouble stepping away or getting enough sleep. Eventually my discipline kicked in and made this a healthier habit, but I was surprised by how compelling it is to turn ideas into working prototypes instantly. Ironically, the rate limits on my Claude and Codex subscriptions helped me pace myself.

maplethorpe yesterday at 11:58 PM

I think tech journalism needs to reframe its view of slot machines if it's to have a productive conversation about AI.

Not everyone who plays slot machines is worse off — some people hit the jackpot, and it changes their life. Also, the people who make the slot machines benefit greatly.

atleastoptimal yesterday at 11:12 PM

That AI would be writing 90% of the code at Anthropic was not a "failed prediction". If we take Anthropic's word for it, now their agents are writing 100% of the code:

https://fortune.com/2026/01/29/100-percent-of-code-at-anthro...

Of course you can choose to believe this is a lie and that Anthropic is hyping its own models, but it's impossible to deny the enormous revenue the company is generating from products it now builds almost entirely with coding agents.

nkmnz yesterday at 10:58 PM

> A study from METR found that when developers used AI tools, they estimated that they were working 20% faster, yet in reality they worked 19% slower. That is nearly a 40% difference between perceived and actual times!

It's not. Depending on which way you take the ratio, it's either ~33% slower than perceived, or perception overestimating speed by ~50%. I don't know how to trust the author when stuff like this is wrong.
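Worked through explicitly, taking the baseline task time as 1 (speed is the inverse of time):

    actual time with AI     = 1.19   (measured: 19% slower)
    perceived time with AI  = 0.80   (felt: 20% faster)

    actual speed / perceived speed  = 0.80 / 1.19 ≈ 0.67  → ~33% slower than perceived
    perceived speed / actual speed  = 1.19 / 0.80 ≈ 1.49  → speed overestimated by ~50%

Neither reading gets you to "nearly a 40% difference"; that only comes from subtracting percentages with different bases.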

lazystar yesterday at 10:45 PM

i used to lose hours each day to typos, linting issues, bracket-instead-of-curly-bracket, 'was it the first parameter or the second parameter', looking up accumulator/anonymous function callback syntax AGAIN...
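(for context, the kind of thing i mean - a throwaway typescript example:)

    // reduce: callback is (accumulator, current) => newAccumulator, then the initial value
    const total = [1, 2, 3, 4].reduce((acc, n) => acc + n, 0); // 10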

idk what y'all are doing with AI, and i dont really care. i can finally - fiiinally - stay focused on the problem im trying to solve for more than 5 minutes.

mathgladiator yesterday at 9:29 PM

I've come to the realization, after maxing out the x20 plan, that I have to set clear priorities.

Fortunately, I've retired, so I'm going to focus on flooding the zone with my crazy ideas made manifest in books.

nkmnz yesterday at 11:05 PM

tl;dr - the author cites a study from early 2025 which measured "experienced open source developers" to be ~20% slower when supported by AI, while they estimated themselves to be ~20% faster.

Note: the study used sonnet-3.5 and sonnet-3.7; there weren’t any agents, deep research or similar tools available. I’d like to see this study done again with:

1. juniors and mid-level engineers

2. opus-4.6 high and codex-5.2 xhigh

3. Tasks that require upfront research

4. Tasks that require stakeholder communication, which can be facilitated by AI

somewhereoutth yesterday at 11:50 PM

"Hell is other people's code"

Not sure why we'd want a tool that generates so much of this for us.

cmrdporcupine yesterday at 10:26 PM

"they don’t produce useful layers of abstraction nor meaningful modularization. They don’t value conciseness or improving organization in a large code base. We have automated coding, but not software engineering"

Which frankly describes pretty much all real world commercial software projects I've been on, too.

Software engineering hasn't happened yet. Agents produce big balls of mud because we do, too.

show 1 reply