Hacker News

After two years of vibecoding, I'm back to writing by hand

663 points by mobitar yesterday at 1:36 PM | 508 comments

Comments

recursivedoubts yesterday at 2:05 PM

AI is incredibly dangerous because it can do the simple things very well, which prevents new programmers from learning the simple things ("Oh, I'll just have AI generate it") which then prevents them from learning the middlin' and harder and meta things at a visceral level.

I'm a CS teacher, so this is where I see a huge danger right now and I'm explicit with my students about it: you HAVE to write the code. You CAN'T let the machines write the code. Yes, they can write the code: you are a student, the code isn't hard yet. But you HAVE to write the code.

show 31 replies
GolDDranks yesterday at 4:56 PM

I feel like I'm taking crazy pills. The article starts with:

> you give it a simple task. You’re impressed. So you give it a large task. You’re even more impressed.

That has _never_ been the story for me. I've tried, and I've gotten some good pointers and hints about where to go and what to try - a result of LLMs' extensive if shallow reading - but in the sense of concrete problem solving or code/script writing, I'm _always_ disappointed. I've never gotten a satisfactory code/script result from them without a tremendous amount of pushback: "do this part again with ...", do that, don't do that.

Maybe I'm just a crank with too many preferences. But I hardly believe so. The minimum requirement should be for the code to work. It often doesn't. Feedback helps, right. But if you've got a problem where a simple, contained feedback loop isn't that easy to build, the only source of feedback is yourself. And that's when you are exposed to the stupidity of current AI models.

show 16 replies
rich_sasha yesterday at 2:47 PM

I came to "vibe coding" with an open mind, but I'm slowly edging in the same direction.

It is hands down good for code which is laborious or tedious to write, but once done, obviously correct or incorrect (with low effort inspection). Tests help but only if the code comes out nicely structured.

I made plenty of tools like this: a replacement REPL for MS-SQL, a caching tool in Python, a matplotlib helper. Things that I know 90% of how to write anyway but don't have the time for, and that, once in front of me, are obviously correct or incorrect. NP code, I suppose: hard to produce, easy to verify.
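
To give a flavor, the caching tool was roughly this shape - a hypothetical Python sketch with invented names, not the actual code:

```python
import json
import os
from functools import wraps

def disk_cache(path):
    """Cache a function's results in a JSON file, keyed by its arguments."""
    def decorator(fn):
        if os.path.exists(path):
            with open(path) as f:
                store = json.load(f)        # reload results from last run
        else:
            store = {}

        @wraps(fn)
        def wrapper(*args):
            key = repr(args)
            if key not in store:
                store[key] = fn(*args)      # compute once
                with open(path, "w") as f:
                    json.dump(store, f)     # persist for next time
            return store[key]
        return wrapper
    return decorator
```

Twenty-odd lines you can eyeball and immediately judge right or wrong - that's the sweet spot.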

But business critical stuff is rarely like this, for me anyway. It is complex, has to deal with various subtle edge cases, be written defensively (so it fails predictably and gracefully), well structured etc. and try as I might, I can't get Claude to write stuff that's up to scratch in this department.

I'll give it instructions on how to write some specific function; it will write that code but not use it, and use something else instead. It will pepper the code with rookie mistakes, like writing the same logic N times in different places instead of factoring it out. It will miss key parts of the spec and insist it did them, or tell me "Yea you are right! Let me rewrite it" and not actually fix the issue.

I also have a sense that it got a lot dumber over time. My expectations may have changed too, of course, but still. I suspect that even within a model there is some variability in how much compute is used (e.g. how deep the beam search is), and supply/demand means this knob is continuously tuned down.

I still try to use Claude for tasks like this, but increasingly find my hit rate so low that the whole "don't write any code yet, let's build a spec" exercise is a waste of time.

I still find Claude good as a rubber duck or to discuss design or errors - a better Stack Exchange.

But you can't split your software spec into a set of SE questions then paste the code from top answers.

show 2 replies
simonw yesterday at 2:06 PM

> Not only does an agent not have the ability to evolve a specification over a multi-week period as it builds out its lower components, it also makes decisions upfront that it later doesn’t deviate from.

That's your job.

The great thing about coding agents is that you can tell them "change of design: all API interactions need to go through a new single class that does authentication and retries and rate-limit throttling" and... they'll track down dozens or even hundreds of places that need updating and fix them all.

(And the automated test suite will help them confirm that the refactoring worked properly, because naturally you had them construct an automated test suite when they built those original features, right?)
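
To make that concrete, the "new single class" from that instruction might look roughly like this - a hypothetical Python sketch; the names, backoff policy, and header scheme are all invented:

```python
import time
import requests

class APIClient:
    """Hypothetical single choke point for API calls: auth, retries, throttling."""

    def __init__(self, base_url, token, max_retries=3, min_interval=0.2):
        self.base_url = base_url
        self.token = token
        self.max_retries = max_retries
        self.min_interval = min_interval   # minimum seconds between requests
        self._last_request = 0.0

    def request(self, method, path, **kwargs):
        # Rate-limit throttling: space requests out by min_interval.
        wait = self.min_interval - (time.monotonic() - self._last_request)
        if wait > 0:
            time.sleep(wait)

        # Authentication lives in exactly one place.
        headers = kwargs.pop("headers", {})
        headers["Authorization"] = f"Bearer {self.token}"

        for attempt in range(self.max_retries):
            self._last_request = time.monotonic()
            resp = requests.request(method, self.base_url + path,
                                    headers=headers, **kwargs)
            if resp.status_code != 429:    # not rate-limited: done
                resp.raise_for_status()
                return resp
            time.sleep(2 ** attempt)       # simple exponential backoff
        resp.raise_for_status()            # out of retries: raise
        return resp
```

Once every call site goes through APIClient.request, "track down dozens of places" becomes exactly the kind of mechanical change agents handle well, and the test suite can confirm it.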

Going back to typing all of the code yourself (my interpretation of "writing by hand") because you don't have the agent-managerial skills to tell the coding agents how to clean up the mess they made feels short-sighted to me.

show 9 replies
kcexn today at 3:21 AM

As people get more comfortable with AI, I think what everyone is noticing is that AI is terrible at solving problems that don't have large amounts of readily available training data. So basically, if there isn't already an open-source solution available online, it can't do it.

If what you're doing is proprietary, or even a little bit novel, there is a really good chance that AI will screw it up. After all, how can it possibly know how to solve a problem it has never seen before?

kaydub yesterday at 4:37 PM

How were you "vibe coding" 2 years ago?

There's been such a massive leap in capabilities since claude code came out, which was middle/end of 2025.

2 years ago I MAYBE used an LLM to take unstructured data and give me a json object of a specific structure. Only about 1 year ago did I start using llms for ANY type of coding and I would generally use snippets, not whole codebases. It wasn't until September when I started really leveraging the LLM for coding.

show 6 replies
radium3d today at 2:56 AM

I actually haven't come across situations 1, 2, or 3 mentioned in the attached video. Generally I iterate on the code by starting a new prompt with the code provided, plus enhancements, or I provide the errors and it repairs them. It usually gets there within 1-2 iterations. No emotions. Make sure your prompts contain no fluff and state straight out what you want the code to accomplish and how you want it to accomplish it.

I've gone back to code months later and have not had what you described as being shocked by bad code; it was quite easy to understand. Are you prompting the AI to name variables and functions logically and to follow a common coding standard for whichever type of code you are having it write, such as the WordPress coding standards or similar?

Perhaps Claude isn't the best; I have been experimenting with Grok 4.1 Thinking and Grok Expert at the mid-level paid tier. I'll take it a step further and adjust the code myself, then start a new prompt and provide that updated code along with my further requests. I haven't hit the roadblocks mentioned.

mettamage yesterday at 2:01 PM

> In retrospect, it made sense. Agents write units of changes that look good in isolation. They are consistent with themselves and your prompt. But respect for the whole, there is not. Respect for structural integrity there is not. Respect even for neighboring patterns there was not.

Well yea, but you can guard against this in several ways. My way is to understand my own codebase and look at the output of the LLM.

LLMs allow me to write code faster, and they also give me a lot of discoverability of programming concepts I didn't know much about. For example, it plugged in a lot of Tailwind CSS, which I'd never used before. With that said, it does not absolve me of knowing my own codebase, unless I'm (temporarily) fine with my codebase being conceptually fractured in wonky ways.

I think vibecoding is amazing for creating quick high fidelity prototypes for a green field project. You create it, you vibe code it all the way until your app is just how you want it to feel. Then you refactor it and scale it.

I'm currently looking at 4009 lines of JS/JSX combined. I'm still vibecoding my prototype. I recently looked at the codebase and saw some ready-made improvements, so I made them. But I think I'll need to start actually engineering things once I reach the 10K-line mark.

show 1 reply
ncruces yesterday at 2:48 PM

> The AI had simply told me a good story. Like vibewriting a novel, the agent showed me a good couple paragraphs that sure enough made sense and were structurally and syntactically correct. Hell, it even picked up on the idiosyncrasies of the various characters. But for whatever reason, when you read the whole chapter, it’s a mess. It makes no sense in the overall context of the book and the preceding and proceeding chapters.

This is the bit I think enthusiasts need to argue doesn't apply.

Have you ever read a 200 page vibewritten novel and found it satisfying?

So why do you think a 10 kLoC vibecoded codebase will be any good engineering-wise?

show 6 replies
abcde666777 today at 2:49 AM

This will sound arrogant, but I can't shake the impression that agent programming is most appealing to amateurs, where the kind of software they build is really just glorified UIs and data plumbing.

I work on game engines which do some pretty heavy lifting, and I'd be loath to let these agents write the code for me.

They'd simply screw too much of it up and create a mess that I'm going to have to go through by hand later anyway, not just to ensure correctness but also performance.

I want to know what the code is doing, I want control over the fine details, and I want to have as much of the codebase within my mental understanding as possible.

Not saying they're not useful - obviously they are - just that something smells fishy about the success stories.

reedf1 yesterday at 3:16 PM

Karpathy coined the term vibecoding 11 months ago (https://x.com/karpathy/status/1886192184808149383). It caused quite a stir - because not only was it a radically new concept, but fully agentic coding had only recently become possible. You've been vibe coding for two years??

show 8 replies
maurits yesterday at 4:06 PM

I tell my students that they can watch sports on tv, but it will not make them fit.

On a personal note, vibe coding leaves me with the same empty, hollow sort of tiredness as a day filled with meetings.

show 2 replies
dv_dt yesterday at 2:21 PM

I think there is going to be an AI eternal summer, from two directions. From developer-to-AI specs: the AI implements the spec to some level of quality, but closing the gap after that is an endless chase of smaller items that don't all resolve at the same time. And from people getting frustrated with some AI-implemented app, who go off and AI-implement another one, with a different set of features and failings.

h14h today at 2:04 AM

It'd be easy to simply say "skill issue" and dismiss this, but I think it's interesting to look at the possible outcomes here:

Option 1: The cost/benefit delta of agentic engineering never improves past net-zero, and bespoke hand-written code stays as valuable as ever.

Option 2: The cost/benefit becomes net positive, and economies of scale forever tie the cost of code production directly to the cost of inference tokens.

Given that many are saying option #2 is already upon us, I'm gonna keep challenging myself to engineer a way past the hurdles I run into with agent-oriented programming.

The deeper I get, the more articles like this feel like the modern equivalent of saying "internet connections are too slow to do real work" or "computers are too expensive to be useful for regular people".

AtomicOrbital today at 2:32 AM

after 30+ years writing code in a dozen languages building systems from scratch I love vibe coding ... it's drinking from a fire hose ... in two months I vibe coded a container orchestration system which I call my kubernetes replacement project all in go with a controller deciding which VM to deploy containers onto, agents on each host polling etcd for requests created by the controller ... it's simple understandable maintainable extendable ... also vibe coded go cdk to deploy AWS RDS clusters, API gateway, handful of golang lambda functions, valkey elasticache and a full feature data service library which handles transactions and save points, cache ... I love building systems ... sure I could write all this from scratch by hand and I have but vibe coding quickly exposes me to the broad architecture decisions earlier giving me options to experiment on various alternatives ... google gemini in antigravity rocks and yes I've tried them all ... new devs should not be vibe coding for the first 5 years or more but I lucked into having decades of doing it by hand

noisy_boy yesterday at 5:39 PM

Are engineers really doing vibecoding in the truest sense of the word, though? Just blindly copy/pasting and iterating? Because I don't. It is more like sculpting via conversation. I start with the requirements, provide some half-baked ideas or approaches that I think may work, and then ask what the LLM suggests and whether there are better ways to achieve the goals. Once we have some common ground, I ask it to show the outlines of the chosen structure: the interfaces, classes, test uses. I review it, ask more questions, and make design/approach changes until I have something that makes sense to me. Only then does the fully fleshed-out coding start, and even then I move at a deliberate pace so that I can pause and think before moving on to the next step. It is by no means super fast for any non-trivial task, but then collaborating with anyone wouldn't be.

I also like to think that I'm utilising the training done on many millions of lines of code while still using my experience/opinions to arrive at something, compared to just using my fallible thinking, wherein I could have missed some interesting ideas. It's like me++. Sure, it does a lot of heavy lifting, but I never leave the steering wheel. I guess I'm still at the pre-agentic stage and not ready to let go fully.

drowntoge yesterday at 4:52 PM

I always scaffold for AI. I write the stub classes and interfaces and mock the relations between them by hand, and then ask the agent to fill in the logic. I know that in many cases, AI might come up with a demonstrably “better” architecture than me, but the best architecture is the one that I’m comfortable with, so it’s worse even if it’s better. I need to be able to find the piece of code I’m looking for intuitively and with relative ease. The agent can go as crazy as it likes inside a single, isolated function, but I’m always paranoid about “going too far” and losing control of any flows that span multiple points in the codebase. I often discard code that is perfectly working just because it feels unwieldy and redo it.
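
Concretely, the scaffolding I hand over looks something like this (an illustrative Python sketch; the domain and names are invented):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Invoice:
    customer_id: str
    amount_cents: int

class InvoiceStore(ABC):
    """I write the interface and relations by hand; the agent fills in logic."""

    @abstractmethod
    def add(self, invoice: Invoice) -> None: ...

    @abstractmethod
    def total_for(self, customer_id: str) -> int: ...

class InMemoryInvoiceStore(InvoiceStore):
    def __init__(self) -> None:
        self._invoices: list[Invoice] = []

    def add(self, invoice: Invoice) -> None:
        # The agent fills in this isolated hole...
        self._invoices.append(invoice)

    def total_for(self, customer_id: str) -> int:
        # ...and this one, without touching the structure around it.
        return sum(i.amount_cents for i in self._invoices
                   if i.customer_id == customer_id)
```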

I’m not sure if this counts as “vibe coding” per se, but I like that this mentality keeps my workday somewhat similar to how it was for decades. Finding/creating holes that the agent can fill with minimal adult supervision is a completely new routine throughout my day, but I think obsessing over maintainability will pay off, like it always has.

kshri24 today at 1:39 AM

Accurate and sane take! Current models are extremely good for very specific kinds of tasks. But beyond that, it is a coin toss. It gets worse as the context window goes beyond a few tens of thousands of tokens. If you have only vibe-coded toy projects (even with the latest fad - Ralph whatever), try it on anything serious and you can see how quickly it all falls apart.

It is quite scary that junior devs/college kids are more into vibe coding than putting in the effort to actually learn the fundamentals properly. This will create at least 2-3 generations of bad programmers down the line.

aerhardt yesterday at 7:35 PM

I don't predict ever going back to writing code by hand except in specific cases, but neither do I "vibe code" - I still maintain a very close control on the code being committed and the overall software design.

It's crazy to me, nevertheless, that some people can afford the luxury of completely renouncing AI-assisted coding.

c1505 today at 3:07 AM

It still feels like gambling to me when I use AI code assistants to generate large chunks of code. Sometimes it will surprise me with how well it does. Other times, it infuriatingly doesn't follow very precise instructions for small changes. This happens even when I work the way I often do: ask for multiple options for solutions and implementations, then choose between them after the AI tool does a coarse rating.

There are many instances where I get to the final part of the feature and realize I spent far more time coercing AI to do the right thing than it would have taken me to do it myself.

It is also sometimes really enjoyable and sometimes a horrible experience. Programming prior to it could also be frustrating at times, but not in the same way. Maybe it is the expectation of increased efficiency that is now demanded in the face of AI tools.

I do think AI tools are consistently great for small POCs or where very standard simple patterns are used. Outside of that, it is a crapshoot or slot machine.

oxag3n yesterday at 9:22 PM

I tried vibe-coding a few years back and switched to "manual" mode when I realized I didn't fully understand the code. No, I did read each line of code and understood it, and I understood the concepts and abstractions, but I didn't understand all the nuances, even those at the top of the documentation of the libraries the LLM used.

I revisited a minimalist example where it totally failed a few years back, and still, ChatGPT 5 produced two examples for "Async counter in Rust": one using atomics and another using tokio::sync::Mutex. I learned the hard way back then that this was wrong, by trying to profile high latency. To my surprise, here's a quote from the Tokio Mutex documentation:

> Contrary to popular belief, it is ok and often preferred to use the ordinary Mutex from the standard library in asynchronous code.
>
> The feature that the async mutex offers over the blocking mutex is the ability to keep it locked across an .await point.

rtp4me yesterday at 2:42 PM

I never trust the opinion of a single LLM anymore - especially for more complex projects. I have seen Claude guarantee something is correct and then immediately apologize when I feed it a critical review from Codex or Gemini. And many times the issues are not minor, but significant, critical oversights by Claude.

My habit now: always get a 2nd or 3rd opinion before assuming one LLM is correct.

show 2 replies
CodeWriter23 yesterday at 7:50 PM

My high school computer lab instructor would tell me when I was frustrated that my code was misbehaving, "It's doing exactly what you're telling it to do".

Once I mastered the finite number of operations and behaviors, I knew how to tell "it" what to do and it would work. The only thing different about vibe coding is the scale of operations and behaviors. It is doing exactly what you're telling it to do. Also, expectations need to be aligned: don't think you can hand over architecture and design to the LLM; that's still your job. The gain is that the LLM will deal with the proper syntax, API calls, etc., and will work as a research tool on steroids if you also (per another mentor later in life) ask good questions.

show 1 reply
sailfast yesterday at 2:32 PM

I felt everything in this post quite emphatically until the “but I’m actually faster than the AI.”

Might be my skills, but I can tell you right now I will not be as fast as the AI, especially in new codebases, other languages, or different environments, even with all the debugging and hell that is AI pull request review.

I think the answer here is fast AI for things it can do on its own, and slow, composed, human in the loop AI for the bigger things to make sure it gets it right. (At least until it gets most things right through innovative orchestration and model improvement moving forward.)

show 1 reply
Painsawman123 yesterday at 4:31 PM

In the long run, vibe coding is undoubtedly going to rot people's skills. If AGI is not showing up anytime soon, actually understanding what the code does, why it exists, how it breaks, and who owns the fallout will matter just as much as it did before LLM agents showed up.

It'll be really interesting to see in the decades to come what happens when a whole industry gets used to releasing black boxes by vibe coding the hell out of them.

altern8 yesterday at 2:26 PM

I think that something in between works.

I have AI build self-contained, smallish tasks and I check everything it does to keep the result consistent with global patterns and vision.

I stay in the loop and commit often.

Looks to me like the problem a lot of people are having is that they have AI do the whole thing.

If you ask it to "refactor code to be more modern", it might guess what you mean and do it in a way you like, but most likely it won't.

If you keep tasks small and clearly specced out, it works just fine. A lot better than doing it by hand in many cases, especially for prototyping.

AstroBen yesterday at 3:04 PM

The author also has multiple videos on his YouTube channel going over the specific issues he's had with AI that I found really interesting: https://youtube.com/@atmoio

andai yesterday at 3:45 PM

It probably depends on what you're doing, but my use case is simple straightforward code with minimal abstraction.

I have to go out of my way to get this out of llms. But with enough persuasion, they produce roughly what I would have written myself.

Otherwise they default to adding as much bloat and abstraction as possible. This appears to be the default mode of operation in the training set.

I also prefer to use it interactively. I divide the problem into chunks. I get it to write each chunk. The whole makes sense. Work with its strengths and weaknesses rather than against them.

For interactive use I have found smaller models to be better than bigger models. First of all because they are much faster. And second because my philosophy now is to use the smallest model that does the job. Everything else is, by definition, unnecessarily slow and expensive!

But there is a qualitative difference at a certain level of speed, where something goes from not interactive to interactive. Then you can actually stay in flow, and then you can actually stay consciously engaged.

bovermyer yesterday at 6:16 PM

Interacting with LLMs like Copilot has been most interesting for me when I treat it like a rubber duck.

I will have a conversation with the agent. I will present it with a context, an observed behavior, and a question... often tinged with frustration.

What I get out of this interaction at the end is usually a revised context that leads me to figure out a better outcome. The AI doesn't give me the outcome. It gives me alternative contexts.

On the other hand, when I just have AI write code for me, I lose my mental model of the project and ultimately just feel like I'm delaying some kind of execution.

show 1 reply
ecshafer yesterday at 4:18 PM

I never really got onto "vibe coding". I treat AI as a better auto-complete that has stack overflow knowledge.

I am writing a game in MonoGame; I am not primarily a game dev or a C# dev. I find AI is fantastic here for "set up a configuration class for this project that maps key bindings", and I have it handle the boilerplate and smaller configuration. It's great at "give me an A* implementation for this graph". But when it becomes x -> y -> z without larger contexts and evolutions, it falls flat. I still need creativity. I just don't worry too much about boilerplate, utility methods, and figuring out the specifics of wiring a framework together.
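
The kind of boilerplate I mean, sketched here in Python for brevity rather than C# (invented names, not the MonoGame API):

```python
import json

class KeyBindings:
    """Maps game actions to key names, with defaults plus user overrides."""

    DEFAULTS = {"move_up": "W", "move_down": "S", "jump": "Space"}

    def __init__(self, config_path=None):
        self.bindings = dict(self.DEFAULTS)
        if config_path:
            with open(config_path) as f:
                self.bindings.update(json.load(f))  # user overrides win

    def key_for(self, action):
        return self.bindings[action]

    def rebind(self, action, key):
        self.bindings[action] = key
```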

timcobb yesterday at 1:55 PM

I'm impressed that this person has been vibecoding longer than vibecoding has been a thing. A real trailblazer!

show 2 replies
gregfjohnson yesterday at 8:05 PM

One use case that I'm beginning to find useful is to go into a specific directory of code that I have written and am working on, and ask the AI agent (Claude Code in my case) "Please find and list possible bugs in the code in this directory."

Then, I can reason through the AI agent's responses and decide what if anything I need to do about them.

I just did this for one project so far, but got surprisingly useful results.

It turns out that the possible bugs identified by the AI tool were not bugs, given the larger context of the code as it exists right now. For example, it found a function that returns a pointer which may be NULL, and call sites were not checking for a NULL return value. The code in its current state could never in fact return NULL. However, to future-proof the code, it would be good practice to check for this case at the call sites.
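
The pattern it flagged reduces to something like this (a made-up minimal Python analogue of the pointer case, not my actual code):

```python
def find_user(users, name):
    """Returns the matching user, or None if nothing matches."""
    for user in users:
        if user["name"] == name:
            return user
    return None

def email_for(users, name):
    user = find_user(users, name)
    # The flagged issue: no None check here. As the code stands, every
    # caller passes a name that exists, so this can't fail today;
    # checking anyway future-proofs the call site:
    if user is None:
        raise KeyError(f"no such user: {name}")
    return user["email"]
```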

ramon156 yesterday at 3:02 PM

+1, I've lost the mental model of most projects. I also added disclaimers to my projects that parts were generated, so as not to fool anyone.

yawnxyz yesterday at 4:38 PM

I like to use AI to write code for me, but I take it one step at a time, looking at what it puts out and considering whether it's actually what I wanted.

As a PRODUCT person, it writes code 100x faster than I can, and I treat anything it writes as a "throwaway" prototype. I've never been able to treat my own code as throwaway, because I can't just throw away multiple weeks of work.

It doesn't aid in my learning to code, but it does aid in me putting out much better, much more polished work that I'm excited to use.

hgs3 yesterday at 4:00 PM

I'm flabbergasted that anyone would voluntarily vibe code anything. For me, software engineering is a craft. You're supposed to enjoy building it. You should want to do it yourself.

show 3 replies
xcodevn yesterday at 3:42 PM

My observation is that vibe-coded applications are significantly lower quality than traditional software. Anthropic software (which they claim to be 90% vibe coded) is extremely buggy, especially the UI.

show 1 reply
raphinou yesterday at 4:06 PM

I use AI to develop, but at every code review I find stuff to correct, which motivates me to continue the reviews. It's still a win, I think. I've incrementally increased my use of AI in development [1], but I'm at a plateau now. I don't plan to go over to complete vibe coding for anything serious or meant to be maintained.

1: https://asfaload.com/blog/ai_use/

jdlyga yesterday at 3:11 PM

I've gone through this cycle too, and what I realized is that as a developer, a large part of your job is making sure the code you write works, is maintainable, and is something you can explain.

arendtio yesterday at 3:50 PM

There is certainly some truth to this, but why does it have to be black-and-white?

Nobody forces you to completely let go of the code and do pure vibe coding. You can also do small iterations.

spicymaki yesterday at 4:01 PM

I think what many people do not understand is that software development is communication: communication from the customers/stakeholders to the developer, and communication from the developer to the machine. At some fundamental level there needs to be some precision about what you want, and someone/something needs to translate that into a system that provides the solution. Software can help check for errors, check constraints, and execute instructions precisely, but it cannot replace the fact that someone needs to tell the machine what to do (precise intent).

What AI (LLMs) does is raise the level of abstraction to human language via translation. The problem is that human language is imprecise in general. You can see this with legal or scientific writing. Legalese is almost illegible to laypeople because there are precise things you need to specify and you need to be precise in how you specify them. Unfortunately, the tech community is misleading the public, telling laypeople they can just sit back and casually tell the AI what they want and it will give them exactly that. Such users are lying to themselves, because most likely they did not take the time to think through what they wanted, and they are rationalizing (after the fact) that the AI gave them exactly what they wanted.

show 1 reply
pmontra yesterday at 4:05 PM

In my experience it's great at writing sample code or solving obscure problems that would have been hard to google a solution for. However, it sometimes fails and can't get past some block; but then neither can I, unless I work hard at it.

Examples:

Thanks to Claude I've finally been able to disable the ssh subsystem of the GNOME keyring infrastructure that opens a modal window asking for ssh passphrases. What happened before: I always had to cancel the modal, look for the passphrase in my password manager, and restart whatever made the modal open. What I have now is either a password prompt inside a terminal or a non-modal dialog; both ssh-add to an ssh agent.

However, my new emacs windows still open at about 100x100 px on my new Debian 13 install, and nothing suggested by Claude works. I'll have to dig into it, but I'm not sure it's important enough. I usually don't create new windows after emacs starts with the saved desktop configuration.

zem yesterday at 7:17 PM

I've never used an AI in agent mode (and have no particular plans to), but I do think they're nice for things like "okay, I have moved five fields from this struct into a new struct which I construct in the global setup function. go through and fix all the code that uses those fields". (deciding to move those fields into a new struct is something I do want to be doing myself though, as opposed to saying "refactor this code for me")
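
Sketched in Python with invented names (my real code isn't Python), the kind of change I mean:

```python
from dataclasses import dataclass

@dataclass
class TlsConfig:
    # The five fields I decided (myself!) to pull out of Server.
    cert_path: str
    key_path: str
    ca_path: str
    min_version: str
    verify_peer: bool

@dataclass
class Server:
    host: str
    port: int
    tls: TlsConfig  # was: five separate fields on Server itself

def global_setup() -> Server:
    # The new struct is constructed once, here. The agent's job is the
    # mechanical part: fixing every server.cert_path into server.tls.cert_path.
    tls = TlsConfig("cert.pem", "key.pem", "ca.pem", "1.2", True)
    return Server("0.0.0.0", 8080, tls)
```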

kmatthews812 yesterday at 5:42 PM

Beware the two extremes - AI out of the box with no additional config, or writing code entirely by hand.

In order to get high-accuracy PRs with AI (small, tested commits that follow existing patterns efficiently), you need to spend time adding agent files (CLAUDE.md, AGENTS.md), skills, hooks, and tools specific to your setup.

This is why so much development is happening at the plugin layer right now, especially with Claude Code.

The juice is worth the squeeze. Once accuracy gets high enough you don't need to edit and babysit what is generated, you can horizontally scale your output.

gary17the yesterday at 6:41 PM

> In retrospect, it made sense. Agents write units of changes that look good in isolation. They are consistent with themselves and your prompt. But respect for the whole, there is not. Respect for structural integrity there is not. Respect even for neighboring patterns there was not.

That's exactly why this whole (nowadays popular) notion of AI replacing senior devs who are capable of understanding large codebases is nonsense and will never become reality.

flankstaek yesterday at 4:32 PM

Maybe I'm "vibecoding" wrong but to me at least this misses a clear step which is reviewing the code.

I think coding with an AI changes our role from code writer to code reviewer, and you have to treat it as a comprehensive review where you comment not just on code "correctness" but also on the other aspects the author mentions: how functions fit together, codebase patterns, architectural implications. While I feel like using AI might have made me a lazier coder, it's made me a significantly more active reviewer, which I think at least helps to bridge the gap the author is referencing.

sheepscreek yesterday at 4:18 PM

Good for the author. Me, I'm never going back to hands-only coding. I am producing more, higher-quality code that I understand and feel confident in. I don't just tell the AI to “write tests”; I tell it exactly what to test as well. Then I'll often prompt it: “hey, did you check for the xyz edge cases?” You need code reviews. You need to intervene. You will need frequent code rewrites and refactors. But AI is the best pair-coding partner you could hope for (at this time), and one that never gets tired.

So while there’s no free lunch, if you are willing to pay - your lunch will be a delicious unlimited buffet for a fraction of the cost.

jstummbillig yesterday at 3:04 PM

The tale of the coder, who finds a legacy codebase (sometimes of their own making) and looks at it with bewilderment is not new. It's a curious one, to a degree, but I don't think it has much to do with vibe coding.

show 1 reply
periodjet yesterday at 7:37 PM

Great engagement-building post for the author’s startup, blog, etc. Contrarian and just plausible enough.

I disagree though. There’s no good reason that careful use of this new form of tooling can’t fully respect the whole, respect structural integrity, and respect neighboring patterns.

As always, it’s not the tool.

dudeinhawaii yesterday at 4:00 PM

After reading the article (and watching the video), I think the author makes very clear points that comments here are skipping over.

The opener is 100% true. Our current approach with AI code is to "draft a design in 15 minutes" and have AI implement it. This contrasts with the thoughtful approach a human would take with other human engineers: plan something, pitch the design, get some feedback, take some time thinking through pros and cons. Begin implementing, pivot, hit realizations, make improvements, let the design morph.

The current vibe coding methodology is eager to fire and forget, passing incomplete knowledge to an AI model with limited context, limited awareness, and 1% of the mental model and intent you had at the moment you wrote the quick spec.

This is clearly not a recipe for reliable, resilient, long-lasting code, or even efficient code. Spec-driven development doesn't work when the spec is frozen and the builder cannot renegotiate intent mid-flight.

The second point made clearer in the video is the kind of learned patterns that can delude a coder, who is effectively 'doing the hard part', into thinking that the AI is the smart one. Or into thinking that the AI is more capable than it actually is.

I say this as someone who uses Claude Code and Codex daily. The claims of the article (and video) aren't strawmen.

Can we progress past them? Perhaps, if we find ways to have agents iteratively improve designs on the fly rather than sticking with an original spec that, let's be honest, wasn't given rigor proportional to what we've asked the LLMs to accomplish. If our workflows somehow make the spec a living artifact again, then agents can continuously re-check assumptions, surface tradeoffs, and refactor toward coherence instead of clinging to the first draft.

show 2 replies
jrm4 yesterday at 3:07 PM

I feel like the vast majority of articles on this are little more than the following:

"AI can be good -- very good -- at building parts. For now, it's very bad at the big picture."

View 45 more comments