Hacker News

Automatic Programming

169 points by dvrp · today at 10:11 AM · 165 comments

Comments

dugmartin · today at 11:24 AM

I have 30+ years of industry experience and I've been leaning heavily into spec driven development at work and it is a game changer. I love programming and now I get to program at one level higher: the spec.

I spend hours on a spec, working with Claude Code to first generate and iterate on all the requirements, going over the requirements using self-reviews in Claude first using Opus 4.5 and then CoPilot using GPT-5.2. The self-reviews are prompts to review the spec using all the roles and perspectives it thinks are appropriate. This self review process is critical and really polishes the requirements (I normally run 7-8 rounds of self-review).

Once the requirements are polished and any questions answered by stakeholders, I use Claude Code again to create an extremely detailed, phased implementation plan with full code, again all in the spec (using a new file if the requirements doc is so large it fills the context window). The implementation plan then goes through the same multi-round self-review using two models to polish (again, 7 or 8 rounds), finalized with a review by me.

The result? I can then tell Claude Code to implement the plan and it is usually done in 20 minutes. I've delivered major features using this process with zero changes in acceptance testing.
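
A minimal sketch of what that scripted self-review loop could look like, for the curious. It assumes a headless CLI invocation like `claude -p` that takes a prompt and prints a reply; the command, prompt wording and file names below are illustrative assumptions, not a prescription, and folding the review back into the spec deliberately stays a manual step.

    # Illustrative sketch only: assumes a headless CLI named "claude" that accepts
    # a prompt via -p and prints its reply to stdout. Command, flags, file names
    # and the prompt wording are assumptions.
    import subprocess
    from pathlib import Path

    SPEC = Path("spec.md")
    ROUNDS = 8  # roughly the 7-8 self-review rounds mentioned above

    REVIEW_PROMPT = (
        "Review the following spec from every role and perspective you think is "
        "appropriate (security, ops, QA, end user, ...). List concrete problems "
        "and propose revised wording for each.\n\n{spec}"
    )

    for i in range(ROUNDS):
        review = subprocess.run(
            ["claude", "-p", REVIEW_PROMPT.format(spec=SPEC.read_text())],
            capture_output=True, text=True, check=True,
        ).stdout
        # Keep a written trail of each round so a human can audit the reviews.
        Path(f"spec-review-round-{i + 1}.md").write_text(review)
        # Folding accepted points back into spec.md is deliberately left manual.
        input(f"Round {i + 1} done; edit {SPEC} as needed, then press Enter.")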

What is funny is that everything old is new again. When I started in industry I worked in defense contracting, working on the project to build the "black box" for the F-22. When I joined the team they were already a year into the spec writing process with zero code produced and they had (iirc) another year on the schedule for the spec. At my third job I found a literal shelf containing multiple binders that laid out the spec for a mainframe hosted publishing application written in the 1970s.

Looking back I've come to realize the agile movement, which was a backlash against this kind of heavy waterfall process I experienced at the start of my career, was basically an attempt to "vibe code" the overall system design. At least for me, AI-assisted mini-waterfall ("augmented cascade"?) seems like a path back to producing better-quality software that doesn't suffer from the agile "oh, I didn't think of that".

jakkos · today at 11:21 AM

> Pre-training is, actually, our collective gift

I feel like this wording isn't great when there are many impactful open source programmers who have explicitly stated that they don't want their code used to train these models and who licensed their work in a world where LLMs didn't exist. It wasn't their "gift"; it was taken from them without their consent.

> I'm a programmer, and I use automatic programming. The code I generate in this way is mine. My code, my output, my production. I, and you, can be proud.

I've seen LLMs generate code that I have immediately recognized as being copied from a book or technical blog post I've read before (e.g. exact same semantics, very similar comment structure and variable names). Even if not legally required, crediting where you got ideas and code from is the least you can do; LLMs, meanwhile, just launder code as if it were completely your own.

permo-w · today at 3:29 PM

How many times are we going to reinvent the wheel of LLM usage and applaud? Why, every day, is another LLM usage article that adds essentially nothing educational or significant to the discourse voted to the top of the frontpage? Am I just jaded? It feels like the bar for "Successful article on Hacker News" is so much lower for LLM discourse than for any other subject.

jpnc · today at 11:33 AM

How does it feel to see all your programming heroes turn into Linkedin-style influencers?

rtpg · today at 11:17 AM

Every time I hear someone mention they vibed a thing or claude gave them something, it just reads as a sort of admission that I'm about to read some _very_ "first draft"-feeling code. I get this even from people who spend a lot of time talking about needing to own code you send up.

People need to stop apologizing for their work product because of the tools they use. Just make the work product better and you don't have to apologize or waste people's time.

Especially given that you have these tools to make cleanup easier (in theory)!

norir · today at 12:04 PM

> I'm a programmer, and I use automatic programming. The code I generate in this way is mine. My code, my output, my production. I, and you, can be proud.

I disagree. The code you wrote is a collaboration with the model you used. To frame it this way is to take credit for the work the model did on your behalf. There is a difference between "I wrote this code entirely by myself" and "I wrote the code with a partner". For me, it is analogous to the author of the score of an opera taking credit for the libretto because they gave the libretto author the rough narrative arc. If you didn't do it yourself, it isn't yours.

I generally prefer integrated works or at least ones that clearly acknowledge the collaboration and give proper credit.

freestingo · today at 3:31 PM

I do not agree at all with his contrasting definitions of “vibe coding” vs “automatic programming”. If a knowledgeable software engineer can say that Claude’s code is actually theirs, so can everyone else. Otherwise, we could argue that Hell has written a book about itself using Dante Alighieri as its tool, given how much we still do not know about our brains, language, creative process, etc.

reidrac · today at 11:23 AM

> Pre-training is, actually, our collective gift that allows many individuals to do things they could otherwise never do, like if we are now linked in a collective mind, in a certain way.

It's not a gift if it was stolen.

Anyway, in my opinion the code that was generated by the LLM is yours as long as you're responsible for it. When I look at a PR I'm reading the output of a person, independently of the tools that person used.

There's conflict perhaps when the submitter doesn't take full ownership of the code. So I agree with Antirez on that part.

jll29 · today at 12:25 PM

In the 1950s/1960s, the term "automatic programming" referred to compiler construction: instead of writing assembler code by hand, a FORmula TRANslator (FORTRAN) could "magically" turn a mathematical formula into code "by itself".

"4GL" was a phase in the 1980s when very high-level languages were provided by software companies, often integrating DB access and especially suited to particular domains. The idea was that one could focus more on the actual problem rather than on writing the boilerplate needed to solve it.

LLMs let one go from a natural-language specification to a draft implementation. If one is lucky, it runs and produces the desired results right away; more often, one needs to revise the code base iteratively, again steered by NL commands, to fix errors, to change the design after reviewing the first shot at it, to add features, etc.

rellfy · today at 1:02 PM

I arrived at a very similar conclusion since trying Claude Code with Opus 4.5 (a huge paradigm shift in terms of tech and tools). I've been calling it "zen coding", where you treat the codebase like a zen garden. You maintain a mental map of the codebase, spec everything before prompting for the implementation, and review every diff line by line. The AI is a tool to implement the system design, not the system designer itself (at least not for now...).

The distinction drawn between both concepts matters. The expertise is in knowing what to spec and catching when the output deviates from your design. Though, the tech is so good now that a carefully reviewed spec will be reliably implemented by a state-of-the-art LLM. The same LLM that produces mediocre code for a vague request will produce solid code when guided by someone who understands the system deeply enough to constrain it. This is the difference between vibe coding and zen coding.

Zen coders are masters of their craft; vibe coders are amateurs having fun.

And to be clear, nothing wrong with being an amateur and having fun. I "vibe code" several areas with AI that are not really coding, but other fields where I don't have professional knowledge. And it's great, because LLMs try to bring you closer to the top of human knowledge in any field, so as an amateur it is incredible to experience it.

kklisura · today at 1:57 PM

I think Antirez is gonna change his tune about this as soon as OpenAI et al. start requesting royalties from software you built using their AI.

sibellavia · today at 1:35 PM

> That said, if vibe coding is the process of producing software without much understanding of what is going on [...], automatic programming is the process of producing software that attempts to be high quality and strictly following the producer's vision of the software [...], with the help of AI assistance.

He is absolutely right here, and I think in this article he has "shaped" the direction of future software engineering (which is already happening, actually): we are moving closer and closer to a new way of writing code. But this time, for real. I mean that it will increasingly become the standard. Just as, in the past, an architect used to draw every detail by hand while today much of the operational work is delegated to parametric software, CAD, BIM, and so on: the architect does not "draw less" because they know less, but because the value of their work has shifted. This is a concept we've repeated often in recent months, with the advent of Opus 4.5 and 5.2-Codex. But I think that here antirez has given it the right shape and also did well to distinguish it from mere vibecoding; as far as I'm concerned, they are two radically different approaches.

xixixao · today at 11:07 AM

This is a classic false dichotomy. Vibe coding, automatic coding and coding are clearly on a spectrum. And I can employ all the shades during a single project.

conartist6 · today at 12:08 PM

Describes appropriation and then says "so it's not appropriation". Wat.

mccoyb · today at 11:14 AM

a better term might be “feedback engineering” or “verification engineering” (what feedback loop do I need to construct to ensure that the output artifact from the agent matches my specification)

This includes standard testing strategies, but also much more general processes

I think of it as steering a probability distribution

At least to me, this makes it clear where “vibe coding” sits … someone who doesn’t know how to express precise verification or feedback loops is going to get “the mean of all software”
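
To make that concrete, here is a toy sketch of such a feedback loop; `generate()` is a placeholder for whatever agent or API call you use (not a real library function), and the human-written test suite plays the role of the verification signal.

    # Toy sketch of a verification/feedback loop: the agent's output is accepted
    # only once it passes a human-written test suite. generate() is a placeholder
    # for your agent or API call; it is not a real library function.
    import subprocess
    from pathlib import Path

    def generate(prompt: str) -> str:
        """Return candidate source code for the prompt (agent call goes here)."""
        raise NotImplementedError

    def steer(prompt: str, max_attempts: int = 5) -> str | None:
        feedback = ""
        for _ in range(max_attempts):
            code = generate(prompt + feedback)
            Path("candidate.py").write_text(code)
            # The verification signal: tests the human wrote against the spec.
            result = subprocess.run(
                ["python", "-m", "pytest", "tests/", "-q"],
                capture_output=True, text=True,
            )
            if result.returncode == 0:
                return code  # the artifact matches the spec as encoded in the tests
            # Otherwise feed the failures back and keep steering the distribution.
            feedback = "\n\nThe previous attempt failed these tests:\n" + result.stdout
        return None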

marmalade2413 · today at 11:09 AM

I disagree with referring to this as automatic software development, as if it were a binary distinction. It's very much a spectrum, and this kind of software development is not fully automatic.

There's actually a wealth of literature on defining levels of software automation (such as: https://doi.org/10.1016/j.apergo.2015.09.013).

amelius · today at 2:09 PM

This raises some questions:

* Does the spec become part of the repository?

* Does "true open source" require that?

* Is the spec what you edit?

motoboi · today at 12:34 PM

We may be witnessing the last generation of master software artisans like antirez.

This is beautiful to see: their mastery harnessing the power of the intelligent machine tools to design, understand and build.

This is like seeing a master of image & light like michelangelo receiving a camera, photoshop and a printer. It's an exponential elevation of the art.

But to become a master like michelangelo one had to dedicate herself to the craft of manually mixing and applying materials to bend and modulate light, slowly building and consolidating those neural pathways by reflection and, most of all, practice, until those skills became as natural as getting up or bringing a hand to the mouth. When that happened, art flowed from her mind to the physical world and the body became the vessel of intuition.

A master like antirez had to wrap his head around concepts alien to the human mind. Bits, bytes, arrays, memory layout, processors, compilers, interfaces, abstractions, constraints, types, concurrency do not exist in the savannas that forged our brains. He had to comprehend and learn to use his own cognitive capabilities and restrictions to know at what level to break up the code units and the abstraction boundaries. And, at the very top, to master this at a level so high that the software became like Redis: beautiful, powerful and so elevated in the art that it became simpler, not more complex. It's Picasso drawing a dog.

The intelligent software building machines can do things no human manually can (given the same time; humans die, get old or get bored), but they are not brush and canvas. They function in another way; the mind needs other paths to master them. The path to master them is not the same path to master artisanal software building.

So, this new generation, wanting to build things not possible for the artisan, will become masters of another craft, one we right now cannot even comprehend or imagine, in the same way michelangelo could never imagine the level of control over light the modern photography masters have.

Me, not a master, but having dedicated my whole life to artisanal software building, I am excited to receive and use the new tools, to experiment with the new craft. I am also frightened by the uncertainty of this new world.

What a time to be alive.

show 5 replies
prorez · today at 12:00 PM

Friendly reminder that almost nobody is working this way now. You (reader) don't have to spend 346742356 tokens on that refactor. antirez won't magically swoop in and put your employer out of business with the Perfect Prompt (and accompanying AI blog post). There's a lot of software out there and MoltBook isn't going to spontaneously put your employer out of business either.

Don't fall into the trap of thinking "if I don't heavily adopt Claude Code and agentic flows today I'll be working at Subway tomorrow." There's an unhealthy AI hype cottage industry right now and you aren't beholden to it. Change comes slowly, is unpredictable, and believe it or not writing Redis and linenoise.c doesn't make someone clairvoyant.

layer8 · today at 12:32 PM

I don’t think that is a good term. We generally designate processes as “automatic” or “automation” that work without any human guidance or involvement at all. If you have to control and steer something, it’s not automatic.

falloutx · today at 11:38 AM

Maybe a language issue, but "Automatic" would imply something happening without any intervention. Also, I don't like that everyone is trying to coin a term for this, but there is already a term called "lite coding" for this sort of setup; I just coined it.

laserlight · today at 11:35 AM

Have we ever had autocomplete programming? Then why have a new term for LLM-assisted programming?

VadimPR · today at 11:26 AM

"I automatically programmed it" doesn't really roll off the tongue, nor does it make much sense - I reckon we need a better term.

It's certainly quicker (and at times, more fun!) to develop this way, that is for certain.

kris_builds · today at 1:15 PM

There's a hidden assumption in the waterfall vs agile debate that AI might actually dissolve: the cost of iteration.

Waterfall made sense when changing code was expensive. Agile made sense when you couldn't know requirements upfront. But what if generating code becomes nearly free?

I've been experimenting with treating specs as the actual product - write the spec, let AI generate multiple implementations, throw them away daily. The spec becomes the persistent artifact that evolves, while code is ephemeral.

The surprising part: when iteration is cheap, you naturally converge on better specs. You're not afraid to be wrong because being wrong costs 20 minutes, not 2 sprints.

Anyone else finding that AI is making them more willing to plan deeply precisely because execution is so cheap that plans can be validated quickly?

doe88 · today at 11:35 AM

@antirez, if you're reading this, it would be insightful, I think, if you could share your current AI workflow, the tools you use, etc. Thanks!

Havoc · today at 1:08 PM

>Vibe coding is the process of generating software using AI without being part of the process at all.

Even the most one-shot-prompt vibe coding still takes high-level intent from the person, who then tests the result in person. There is no "without being part of the process at all".

And from there it's a gradient as to how much input & guidance is given.

This entire distinction he's trying to make here frankly just doesn't make sense. He's trying to impose two categories on something that is clearly a continuous spectrum.

fwlr · today at 11:07 AM

It’s very healthy to have the “strong anti-disclosure” position expressed with clarity and passion.

alecco · today at 12:02 PM

> if vibe coding is the process of producing software without much understanding of what is going on (which has a place, and democratizes software production, so it is totally ok with me)

Strongly disagree. This is a huge waste of currently scarce compute/energy both in generating that broken slop and in running it. It's the main driver for the shortages. And it's getting worse.

I would hate a future without personal computing.

mgaunard · today at 11:23 AM

I stopped reading at "soon to become the practice of writing software".

That belief has no basis at this point, and it's been demonstrated not only that AI doesn't improve coding but also that the associated costs are not sustainable.

rtafs155 · today at 12:03 PM

"When the process is actual software production where you know what is going on, remember: it is the software you are producing. Moreover remember that the pre-training data, while not the only part where the LLM learns (RL has its big weight) was produced by humans, so we are not appropriating something else."

What does that even mean? You are a failed novelist who does not have ideas and is now selling out your fellow programmers because you want to get richer.

heavyset_go · today at 12:24 PM

A reminder that your LLM output isn't your intellectual property, no matter how much effort you feel went into its prompting.

Copyright protects human creations, and the US Copyright Office has made it clear that AI output cannot be copyrighted without significant creative alterations made by humans after it is generated.

rvz · today at 10:53 AM

> I'm a programmer, and I use automatic programming. The code I generate in this way is mine. My code, my output, my production. I, and you, can be proud.

Disagree.

So when there is a bug / outage / error due to "automatic programming", are you ready to be first in line to accept accountability (the LLM cannot be) when it all goes wrong in production? I do not think that would even be enough, or that this would work in the long term.

No excuses like "I prompted it wrong" or "Claude missed something" or "I didn't check it over because 8 other AI agents said it was "absolutely right"™".

We will then have lots of issues such as this case study [0], where everything seemingly looks fine at first and all tests pass, but in production the logic had been misinterpreted by the LLM, which used a wrong keyword during a refactor.

[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...

songodongo · today at 11:10 AM

Not that I necessarily disagree with any of it, but one word comes to mind as I read through it: “copium”

conartist6 · today at 12:12 PM

This was just such a worthless post that it made me sad. No arguments with moral weight or clarity. Just another hollowed out shell beeping out messages of doom...

keyle · today at 1:09 PM

It's getting silly. Every 3 days someone is trying to coin a new term for programming.

At the end of the day, you produce code for a compiler to produce other code, and then eventually run it.

It's called programming.

When carpenters got powertools, they didn't rename themselves automatic carpenters.

When architects started working with CAD instead of paper, they didn't become vibe architects, even though they literally copy-paste 3/5 of the content they produce.

Programming is evolving; there is a lot of senseless flailing because heads are spinning.

margorczynski · today at 11:06 AM

Vibe coding is an idiotic term and it's a shame that it stuck. If I'm a project lead and just give directions to the devs, am I also "vibe coding"?

I guess a large part of that is that 1-2 years ago the whole process was much more non-deterministic and actually getting a sensible result was much harder.

mentalgear · today at 12:25 PM

I prefer "LLM-assisted programming" as it captures the value/responsibility boundary pretty exactly. I think it was coined by simonw here, but unfortunately "vibe coding" became all-encompassing instead of proper software engineers using "LLM-assisted" to properly distinguish themselves from vibe bros with very shallow knowledge.

noodletheworld · today at 11:12 AM

Vibe Engineering. Automatic Programming. “We need to get beyond the arguments of slop vs sophistication..."

Everyone seems to want to invent a new word for 'programming with AI' because 'vibe coding' seems to have come to equate to 'being rubbish and writing AI slop'.

...buuuut, it doesn't really matter what you call it does it?

If the result is slop, no amount of branding is going to make it not slop.

People are not stupid. When I say "I vibe coded this shit" I do not mean, "I used good engineering practices to...". I mean... I was lazy and slapped out some stupid thing that sort of worked.

/shrug

When AI assisted programming is generally good enough not to be called slop, we will simply call it 'programming'.

Until then, it's slop.

There is programming, and there is vibe coding. People know what they mean.

We don't need new words.

satisfice · today at 12:48 PM

It’s not automatic programming, any more than compiling is. It’s a form of high level programming.

It’s also sloppy and irresponsible. But hey, you can fake your work faster and more convincingly than ever before.

Call it slop coding.

keepamovin · today at 11:50 AM

Thank you. I and you can be proud. Yes we can! :)

I posted yesterday about how I'd invented a new compression algorithm, and used an AI to code it. The top comment was like "You or Claude? ... also ... maybe consider more than just 1-shotting some random idea." This was apparently based on the signal that I had incorrectly added ZIP to the list of tools that use LZW (LZW is a tweak of LZ78, the dictionary-based counterpart, from the same Lempel-Ziv team, of the back-reference variant LZ77, the thing actually used in ZIP). This mistake was apparently signal that I had no idea what I was doing, was a script kiddie who had just tried to one-shot some crap idea, and ended up with slop.

This was despite the code working and the results table being accurate. Admittedly the readme was hyped and that probably set this person off too. But they were so far off in their belief that this was Claude's idea, Claude's solution, and just a one-off that they not only totally misrepresented me and my work, but also the whole process it would actually take to make something like this.

I feel that perhaps someone making such comments does not have much familiarity with automatic programming. Because here's what actually happened: the path from my idea (intuited in 2013, but beyond my skills to do easily until using AI) to a working implementation was about as far from a 'one-shot' as you can get.

The first iteration (Basic LZW + unbounded edit scripts + Huffman) was roughly 100x slower. I spent hours guiding the implementation through specific optimization attempts:

- BK-trees for lookups (eventually discarded as slow).

- Then going to Arithmetic coding. First both codes + scripts, later splitting.

- Various strategies for pruning/resetting unbounded dictionaries.

- Finally landing on a fixed dict size with a Gray-Code-style nearest neighbor search to cap the exploration.

The AI suggested some tactical fixes (like capping the Levenshtein table, splitting edits/codes in Arithmetic coding), but the architectural pivots came from me. I had to find the winning path.

I stopped when the speed hit 'sit-there-and-watch-it-able' (approx 15s for 2MB) and the ratio consistently beat LZW (interestingly, for smaller dicts, which makes sense, as the edit scripts make each word more expressive).
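
For readers who want to ground the comparison: plain LZW, the baseline referred to above, is roughly the sketch below. The edit-script, arithmetic-coding and Gray-code parts of the scheme described in this comment are not reproduced here; this is only the reference point the ratios were measured against.

    # Plain LZW compression, shown only as the baseline mentioned above.
    def lzw_compress(data: bytes) -> list[int]:
        # The dictionary starts with all single bytes, then grows as phrases repeat.
        dictionary = {bytes([i]): i for i in range(256)}
        next_code = 256
        w = b""
        out = []
        for b in data:
            wc = w + bytes([b])
            if wc in dictionary:
                w = wc
            else:
                out.append(dictionary[w])   # emit code for the longest known prefix
                dictionary[wc] = next_code  # remember the new, longer phrase
                next_code += 1
                w = bytes([b])
        if w:
            out.append(dictionary[w])
        return out

    # Example: lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT") yields 17 codes for 24 input bytes.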

That was my bar: Is it real? Does it work? Can it beat LZW? Once it did, I shared it. I was focused on the bench accuracy, not the marketing copy. I let the AI write the hype readme - I didn't really think it mattered. Yes, this person fixated on a small mistake there, and completely misrepresented or had the wrong model of what it actually took to produce this.

I believe that kind of misperception must be the result of a lack of familiarity with using these tools in practice. I consider this kind of "disdain from the unserious & inexperienced" to be low-quality, low-effort commentary that essentially equates AI with clueless engineers and slop.

As antirez lays out: the same LLMs produce very different results depending on the human that is guiding the process with their intuition, design, continuous steering and idea of the software.

Maybe some people are just pissed off - maybe their dev skills sucked before AI, and maybe they still suck with AI, and now they are mad at everything good people are doing with AI, and at AI itself?

Idk, man. I just reckon this is the age where you can really make things happen that you couldn't make before, and you should be into it and positive about it, if you are serious about making stuff. And making stuff is never easy. And it's always about you. A master doesn't blame his tools.

sandruso · today at 11:09 AM

> Pre-training is, actually, our collective gift that allows many individuals to do things they could otherwise never do, like if we are now linked in a collective mind, in a certain way.

The question is whether you can have it all. Can you get faster results and still be growing your skills? Can we 10x the collective mind's knowledge with the use of AI, or do we need to spend a lot of time learning the old way™ to move the industry forward?

Also, nobody needs to justify what tools they are using. If there is pressure to justify them, we are doing something wrong.
