
Dijkstra On the foolishness of "natural language programming"

391 points by nimbleplum40 yesterday at 3:30 AM | 237 comments

Comments

01100011 yesterday at 8:04 AM

People are sticking up for LLMs here and that's cool.

I wonder, what if you did the opposite? Take a project of moderate complexity and convert it from code back to natural language using your favorite LLM. Does it provide you with a reasonable description of the behavior and requirements encoded in the source code without losing enough detail to recreate the program? Do you find the resulting natural language description is easier to reason about?

I think there's a reason most of the vibe-coded applications we see people demonstrate are rather simple. There is a level of complexity and precision that is hard to manage. Sure, you can define it in plain English, but is the resulting description extensible, understandable, or more descriptive than a precise language? I think there is a reason why legalese is not plain English, and it goes beyond mere gatekeeping.

haolez yesterday at 11:57 AM

This reminded me of this old quote from Hal Abelson:

"Underlying our approach to this subject is our conviction that "computer science" is not a science and that its significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology—the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects. Mathematics provides a framework for dealing precisely with notions of "what is". Computation provides a framework for dealing precisely with notions of "how to"."

centra_minded today at 1:13 AM

Modern programming already is very, very far from strict obedience and formal symbolism. Most programmers these days (myself included!) are using libraries, frameworks, and other features that mean what they are doing in practice is wielding sky-high abstractions, gluing things together they do not (and can not) fully understand the inner workings of.

If I create a website with Node.js, I’m not manually managing memory, parsing HTTP requests byte-by-byte, or even attempting to fully grasp the event loop’s nuances. I’m orchestrating layers of code written by others, trusting that these black boxes will behave as advertised according to my best, but deeply incomplete, understanding of them.

I'm not sure what this means for programming with LLMs, but I already feel far removed from the case Dijkstra lays out.

l0new0lf-G yesterday at 11:40 AM

Finally someone put it this way! Natural language has embedded limitations that stem from our own mental limitations: the human mind sometimes thinks in terms that are too abstract or too specific, and misses important details or generalizations.

As a programmer, I know first hand that the problems, or even absurdities, of some assignments only become apparent after one has begun to implement them as code, i.e. as strict symbolism.

Not to mention that it often takes more time to explain something accurately in natural language than it takes to just write the algorithm as code.

0xbadcafebee today at 1:37 AM

I read this when I was younger, but I only now get it, and realize how true it all is.

13) Humans writing code is an inherently flawed concept. Doesn't matter what form the code takes. Machine code, assembly language, C, Perl, or a ChatGPT prompt. It's all flawed in the same way. We have not yet invented a technology or mechanism which avoids it. And high level abstraction doesn't really help. It hides problems only to create new ones, and other problems simply never go away.

21) Loosely coupled interfaces made our lives easier because it forced us to compartmentalize our efforts into something manageable. But it's hard to prove that this is a better outcome overall, as it forces us to solve problems in ways that still lead to worse outcomes than if we had used a simpler [formal] logic.

34) We will probably end up pushing our technical abilities to the limit in order to design a superior system, only to find out in the end that simpler formal logic is what we needed all along.

55) We're becoming stupider and worse at using the tools we already have. We're already shit at using language just for communicating with each other. Assuming we could make better programs with it is nonsensical.

For a long time now I've been upset at computer science's lack of innovation in the methods we use to solve problems. Programming is stupidly flawed. I've never been good at math, so I never really thought about it before, but math is really the answer to what I wish programming was: a formal system for solving a problem, and a formal system for proving that the solution is correct. That's what we're missing from software. That's where we should be headed.

sotix yesterday at 1:40 PM

> Machine code, with its absence of almost any form of redundancy, was soon identified as a needlessly risky interface between man and machine. Partly in response to this recognition so-called "high-level programming languages" were developed, and, as time went by, we learned to a certain extent how to enhance the protection against silly mistakes. It was a significant improvement that now many a silly mistake did result in an error message instead of in an erroneous answer.

I feel that we’ve collectively jumped into programming with LLMs too quickly. I really liked how Rust has iterated on pointing out “silly mistakes” and made it much more clear what the fix should be. That’s a much more favorable development for me as a developer. I still have the context and understanding of the code I work on while the compiler points out obvious errors and their fixes. Using an LLM feels like a game of semi-intelligent guessing on the other hand. Rust’s compiler is the master teaching the apprentice. LLMs are the confident graduate correcting the master. I greatly prefer Rust’s approach and would like to see it evolved further if possible.

Someone yesterday at 2:41 PM

/s: that’s because we haven’t gone far enough. People use natural language to generate computer programs. Instead, they should directly run prompts.

“You are the graphics system, an entity that manages what is on the screen. You can receive requests from all programs to create and destroy “windows”, and further requests to draw text, lines, circles, etc. in a window created earlier. Items can be of any colour.

You should also send mouse click information to whoever created the window in which the user clicked the mouse.

There is one special program, the window manager, that can tell you what windows are displayed where on any of the monitors attached to the system”

and

“you are a tic-tac-toe program. There is a graphics system, an entity that manages what is on the screen. You can command it to create and destroy “windows”, and to draw text, lines, circles, etc. in a window created earlier. Items can be of any colour.

The graphics you draw should show a tic-tac-toe game, where users take turns by clicking the mouse. If a user wins the game, it should…

Add ads to the game, unless the user has a pay-per-click subscription”

That should be sufficient to get a game running…

To save it, you’d need another prompt:

”you are a file system, an entity that persists data to disk…”

You also will want

”you are a multi-tasking OS. You give multiple LLMs the idea that they have full control over a system’s CPU and memory. You…”

I look forward to seeing this next year in early April.
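
(A toy sketch of the joke, purely illustrative: each "program" is nothing but a system prompt, and ask_llm is a hypothetical stand-in for whatever chat-completion API one would actually call.)

    # Hypothetical sketch only: "programs" are system prompts; ask_llm is a made-up
    # placeholder, not a real library call.
    GRAPHICS_SYSTEM = (
        "You are the graphics system, an entity that manages what is on the screen. "
        "You receive requests to create and destroy windows, and to draw text, lines, "
        "circles, etc. in a window created earlier."
    )
    TIC_TAC_TOE = (
        "You are a tic-tac-toe program. Ask the graphics system to draw the board "
        "and respond to mouse clicks forwarded to you."
    )

    def ask_llm(system_prompt: str, message: str) -> str:
        raise NotImplementedError("stand-in for a real LLM API call")

    def handle_click(x: int, y: int) -> str:
        # The window manager forwards a click to the "game", which answers with a
        # natural-language drawing request, which the "graphics system" then executes.
        draw_request = ask_llm(TIC_TAC_TOE, f"The user clicked at ({x}, {y}). What should be drawn?")
        return ask_llm(GRAPHICS_SYSTEM, draw_request)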

llsf yesterday at 6:34 PM

Natural language is a poor medium for communicating rules and orders. The current state of affairs in the US is a prime example.

We are still debating what some laws and amendments mean: the meaning of words changes over time, historical context gets lost, etc.

I would love to operate machines with natural language, but I have been programming since the mid-80s, and the stubbornness of computer languages (from BASIC to Go) strikes a good balance: it puts enough responsibility on the emitter to express precisely what he wants the machine to do.

weeeee2 yesterday at 4:01 PM

Forth, PostScript and Assembly are the "natural" programming languages from the perspective of how what you express maps to the environment in which the code executes.

The question is "natural" to whom, the humans or the computers?

AI does not make human language natural to computers. Left to their own devices, AIs would invent languages that are natural with respect to their deep learning architectures, which is their environment.

There is always going to be an impedance mismatch across species (humans and AIs) and we can't hide it by forcing the AIs to default to human language.

jedimastertyesterday at 11:59 AM

> It was a significant improvement that now many a silly mistake did result in an error message instead of in an erroneous answer. (And even this improvement wasn't universally appreciated: some people found error messages they couldn't ignore more annoying than wrong results, and, when judging the relative merits of programming languages, some still seem to equate "the ease of programming" with the ease of making undetected mistakes.)

If I didn't know who wrote this it would seem like a jab directly at people who dislike Rust.

truculent yesterday at 3:24 PM

Any sufficiently advanced method of programming will start to look less like natural language and more like a programming language.

If you still don’t want to do programming, then you need some way to instruct or direct the intelligence that _will_ do the programming.

And any sufficiently advanced method of instruction will look less like natural language, and more like an education.

indigoabstract yesterday at 12:28 PM

Using natural language to specify and build an application is not unlike having a game design document (GDD) before you actually start prototyping your game. But once you have implemented the bulk of what you wanted, the implementation becomes the reference, and you usually end up throwing away the GDD since it's now out of sync with the actual game.

Insisting that for every change one should go read the GDD, implement the feature and then sync back the GDD is cumbersome and doesn't work well in practice. I've never seen that happen.

But if there ever comes a time when some AI/LLM can code the next version of Linux or Windows from scratch based on some series of prompts, then all bets are off. Right now it's clearly not there yet, if ever.

misja111 yesterday at 6:40 AM

I somewhat disagree with this. In real life, say in some company, the inception of an idea for a new feature happens in the head of some business person. This person will not speak any formal language. So however you turn it, some translation from natural language to machine language will have to be done to implement the feature.

Typically the first step, translation from natural to formal language, will be done by business analysts and programmers. But why not try to let computers help along the way?

octacat yesterday at 7:04 AM

Natural language is pretty good for describing the technical requirements of a complex system, though. I.e. not the current code implementation, but why that implementation was chosen over other possible ones. Not what the code does, but what it is expected to do. Basically, most of the missing parts that live in Jira instead of your repo. It is also good for enabling better refactoring, when the whole system is described by outside rules that can be enforced across the codebase. We use programming languages because they are easier to use in an automated/computer context (and were the only option, to be honest, before all the LLM stuff). But while they give us non-ambiguity on the local scale, that stops working on the global scale the moment someone copy-pastes part of the code. Are you sure that part follows all the high-level restrictions we are supposed to follow and is a correct program? It is a program that will run when compiled, but the definition of "run" is pretty loose. In C++, a program that corrupts all the memory is also runnable.

hamstergene yesterday at 6:50 AM

Reminds me of another recurring idea: replacing code with flowcharts. The first time I saw it, it came from some obscure Soviet professor in the 80s, and it has kept reappearing from different people in different countries and contexts. Every time it is sold as a total breakthrough in simplicity, and every time it turns out to be a bloat of complexity and a productivity killer instead.

Or weak typing. How many languages thought that collapsing strings, integers, and other types into a single "scalar", and making any operation between any operands meaningful, would simplify the language? Yet every single one ended up a total mess instead.

Or constraint-based UI layout. It looks so simple, so intuitive on small examples, yet totally fails to scale to even a dozen basic controls. Yet the idea keeps reappearing from time to time.

Or attempting dependency management by making some form of symlink to another repository, e.g. git submodules, or CMake's FetchContent/ExternalProject. Yeah, good luck scaling that.

Maybe software engineering should have some sort of "Hall of Ideas That Definitely Don't Work", so that young people entering the field could save their time on implementing one more incarnation of an already known not good idea.

wiz21c yesterday at 12:19 PM

> Remark. As a result of the educational trend away from intellectual discipline, the last decades have shown in the Western world a sharp decline of people's mastery of their own language: many people that by the standards of a previous generation should know better, are no longer able to use their native tongue effectively, even for purposes for which it is pretty adequate.

Compare that to:

https://news.ycombinator.com/item?id=43522966

still_grokking yesterday at 1:04 PM

By chance, I came across GitHub's SpecLang just today:

https://githubnext.com/projects/speclang/

Funny coincidence!

I leave it here for the nice contrast it creates in light of the submission we're discussing.

RazorDev yesterday at 6:54 PM

Dijkstra's insights on the importance of simplicity and elegance in programming are timeless. His emphasis on the process of abstraction and the value of clear, concise code is as relevant today as it was in 1978. A thought-provoking read for any programmer striving to improve their craft.

aubanel yesterday at 9:18 PM

Dijkstra's clarity of expression and thought is indeed impressive. One nuance: he seems to completely equate ease of language with the ability to make undetectable mistakes. I disagree: I know people whose language is extremely efficient at producing analogies that can shortcut many pages of painful mathematical proofs for the listener; for instance, conveying the emergence of complexity from many simple processes as a "swarming".

0x1ceb00da yesterday at 9:01 AM

When was it written? The date says 2010, but Dijkstra died in 2002.

HarHarVeryFunny yesterday at 5:08 PM

Seeing as much of the discussion here is about LLMs, not just the shortcomings of natural language as a programming language, another LLM-specific aspect is how the LLM is interpreting the natural language instructions it is being given...

One might naively think that the "AI" (LLM) is going to apply its intelligence to give you the "best" code in response to your request, and in a way it is, but this is "LLM best", not "human best": the LLM is trying to guess what's expected (i.e. minimize prediction error), not give you the best quality code/design per your request. This is similar to having an LLM play chess: it is not trying to play what it thinks is the strongest move, but rather trying to predict a continuation of the game given the context, which will be a poor move if it thinks the context indicates a poor player.

With an RL-trained reasoning model, the LLM's behavior is slightly longer horizon: not just minimizing next-token prediction errors, but also steering the output in a direction intended to match the type of reasoning seen during RL training. Again, this isn't the same as a human applying their experience to achieve (predict!) a goal, but arguably more like cargo-cult reasoning: following observed patterns of reasoning in the training set, without the depth of understanding and intelligence to know whether this is really applicable in the current context, nor the ability to learn from its mistakes when it is not.

So, while natural language itself is of course too vague to program in, which is part of the reason that we use programming languages instead, it's totally adequate as a way to communicate requirements/etc to an expert human developer/analyst, but when communicating to an LLM instead of a person, one should expect the LLM to behave as an LLM, not as a human. It's a paperclip maximizer, not a human-level intelligence.

quantum_state yesterday at 11:59 PM

A language is invented for a domain to provide the precision and clarity that natural language cannot … trying to do the opposite would certainly create more work.

grahamlee yesterday at 7:12 AM

Dijkstra also advocated for proving the correctness of imperative code using the composition of a set of simple rules, and most programmers ignore that aspect of his work too.

nizarmah yesterday at 1:52 PM

One of the most challenging aspects of my career has been communication.

That is largely because it leaves room for misinterpretation or miscommunication. Programming languages eliminated misinterpretation and made miscommunication easier to notice through errors.

A programming language lets me micromanage proactively: I specify the instructions before they run. With LLMs I often find myself micromanaging retroactively, until I reach the path I am looking for.

wpollock yesterday at 9:30 PM

I love reading literate-programming software! The problem is that very few programmers are as skilled at writing clearly as are Knuth and Dijkstra. I think I read somewhere that book publishers receive thousands of manuscripts for each one they publish. Likewise, few programmers can write prose worth reading.

Rodmine yesterday at 9:28 AM

What needs to be done can be, and almost always is, described in natural language.

Whether it is feasible is a different question (https://xkcd.com/1425/), but that too can be described in natural language.

Here is something I tried with o3-mini:

> Create a program that takes an input image and can tell if there is a bird in it.

> ChatGPT said:

> Reasoned for 38 seconds

> Below is an example in Python that uses TensorFlow’s Keras API and a pre-trained ResNet50 model to classify an input image. The code loads an image, preprocesses it to the required size, obtains the top predictions, and then checks if any of the top labels appear to be a bird. You can expand the list of bird keywords as needed for your application.

> python code that works
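
(For reference, a minimal sketch of the kind of script described above, assuming TensorFlow/Keras and the pre-trained ImageNet ResNet50 weights; the bird keyword list is illustrative only and would need expanding.)

    # Sketch: classify with a pre-trained ResNet50 and check whether any of the
    # top ImageNet labels looks like a bird.
    import sys
    import numpy as np
    from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
    from tensorflow.keras.preprocessing import image

    BIRD_KEYWORDS = ("bird", "hen", "cock", "finch", "jay", "owl", "eagle")  # expand as needed

    def has_bird(img_path: str) -> bool:
        model = ResNet50(weights="imagenet")
        img = image.load_img(img_path, target_size=(224, 224))  # ResNet50 expects 224x224 input
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        preds = decode_predictions(model.predict(x), top=5)[0]  # [(class_id, label, score), ...]
        return any(k in label.lower() for _, label, _ in preds for k in BIRD_KEYWORDS)

    if __name__ == "__main__":
        print("bird" if has_bird(sys.argv[1]) else "no bird")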

If you take the critical view, you can always find an exception that will fail. But I can see many happy cases that will just work most of the time, even with the currently available technology. Most of the programming work done today is putting libraries and API services together.

jruohonen yesterday at 5:41 AM

A great find!

The whole thing seems a step (or several steps) backwards also in terms of UX. I mean surely there was a reason why ls was named ls, and so forth?

A bonus point is that he also had something to say about a real or alleged degeneration of natural languages themselves.

chilldsgn yesterday at 6:37 AM

This is the most beautiful thing I've read in a long time.

jedimastertyesterday at 12:00 PM

Over the last couple of weeks or so of me finally starting to use AI pair programming tools (for me, Cursor) I've been realizing that, much like when I play music, I don't really think about programming a natural language terms in the first place, it's actually been kind of hard to integrate an AI coding agent into my workflow mentally

Animats yesterday at 6:57 AM

(2010)

This refers to the era of COBOL, or maybe Hypertalk, not LLMs.

wewewedxfgdf yesterday at 6:33 PM

We are AI 1.0

Just like Web 1.0 - when the best we could think of to do was shovel existing printed brochures onto web pages.

In AI 1.0 we are simply shoveling existing programming languages into the LLM - in no way marrying programming and LLM - they exist as entirely different worlds.

AI 2.0 will be programming languages - or language features - specifically designed for LLMs.

hinkley yesterday at 5:21 PM

About every six to ten years I look in on the state of the art on making artificial human languages in which one cannot be misunderstood.

If we ever invent a human language where laws can be laid out in a manner that the meaning is clear, then we will have opened a door on programming languages that are correct. I don’t know that a programmer will invent this first. We might, but it won’t look that natural.

Dansvidania yesterday at 11:27 AM

> some people found error messages they couldn't ignore more annoying than wrong results

I wonder if this is a static vs dynamic or compiled vs interpreted reference.

Anyway I love it. Made me giggle that we are still discussing this today, and just to be clear I love both sides, for different things.

show 1 reply
teleforce yesterday at 3:41 PM

> thanks to the carefully, or at least consciously designed formal symbolisms that we owe to people like Vieta, Descartes, Leibniz, and (later) Boole.

Please check John Hooker's seminal lecture [1] on the contributions of these people to the complementary, deterministic form of AI (machine intelligence), namely logic, optimization, and constraint programming.

I have a feeling that if we combine the stochastic nature of LLM-based NLP with the deterministic nature of feature-structure-technique-based NLP (e.g. CUE), guided by logic, optimization, and constraint programming, we could probably achieve intuitive automation, or at least perform proper automation (or automatic computing, as Dijkstra put it).

Apparently Yann LeCun has also recently been proposing optimization-based AI, namely inference through optimization, or objective-driven AI, in addition to data-driven AI [2].

Fun fact: you can see Donald Knuth asking questions towards the end of JH's lecture.

[1] Logic, Optimization, and Constraint Programming: A Fruitful Collaboration - John Hooker - CMU (2023) [video]:

https://www.youtube.com/live/TknN8fCQvRk

[2] Mathematical Obstacles on the Way to Human-Level AI - Yann LeCun - Meta - AMS Josiah Willard Gibbs Lecture at the 2025 Joint Mathematics Meetings (2025) [video]:

https://youtu.be/ETZfkkv6V7Y

cpuguy83 yesterday at 4:24 PM

One of the things LLM/natural-language programming brings is greater access. While the actual code may be crap, it opens things up so that more people can play around and iterate on ideas without having to have a ton of knowledge. That is powerful by itself.

karmasimida yesterday at 9:28 AM

Who is laughing now?

It is clear NLU can't be done within the realm of PLs themselves; there is never going to be a natural-language grammar as precise as a PL.

But LLMs are a different kind of beast entirely.

yagyu yesterday at 1:55 PM

In the same vein, Asimov in 1956:

Baley shrugged. He would never teach himself to avoid asking useless questions. The robots knew. Period. It occurred to him that, to handle robots with true efficiency, one must needs be expert, a sort of roboticist. How well did the average Solarian do, he wondered?

odyssey7 yesterday at 12:17 PM

This is also, inadvertently, an argument against managers.

Why talk to your team when you could just program it yourself?

nyeah yesterday at 1:09 PM

But Dijkstra was writing long ago. I'm sure the situation is greatly improved today.

cafard yesterday at 1:00 PM

So many Dijkstra links amount to "Dijkstra on the [pejorative noun] of [whatever was bothering Dijkstra]."

I promise to upvote the next Dijkstra link that I see that does not present him as Ambrose Bierce with a computer.

ma9o yesterday at 1:17 PM

People are supposed to communicate symbolically with LLMs too

fedeb95 yesterday at 7:49 AM

This clearly has nothing to do with the current main usages of LLMs; it's about using natural language as an interface to produce accurate results, as a further abstraction on top of general-purpose languages.

voidhorse today at 12:11 AM

Dijkstra is entirely correct in this, and it's something I've been trying to urge people to recognize since the beginnings of this LLM wave.

There is inherent value in using formal language to refine, analyze, and describe ideas. This is, after all, why mathematical symbolism has lasted in spite of the fact that all mathematicians are more than capable of talking about mathematical ideas in their natural tongues.

Code realizes a computable model of the world. A computable model is made up of a subset of the mathematical functions we can define. We benefit greatly from formalism in this context. It helps us be precise about the actual relationships and invariants that hold within a system. Stable relationships and invariants lead to predictable behaviors, and predictable systems are reliable systems on the plane of human interaction.

If you describe your system entirely in fuzzily conceived natural language, have you done the requisite analysis to establish the important relationships and invariants among components in your system, or are you just half-assing it?

Engineering is all about establishing relative degrees of certainty in the face of the wild uncertainty that is the default modality of existence. Moving toward a world in which we "engineer" systems increasingly through informal natural language is a step backwards on the continuum of reliability, comprehensibility, and rigor. The fact that anyone considers using these tools and still thinks of themselves as an "engineer" of some kind is an absolute joke.

auggierose yesterday at 2:56 PM

Formal can be foolish, too. If you don't believe that, then I have a set for sale, with the property that it contains all sets that don't contain themselves.
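
(That would be Russell's set; in symbols, $R = \{\, x \mid x \notin x \,\}$, which gives $R \in R \iff R \notin R$.)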

recursivedoubts yesterday at 11:43 AM

looks at https://hyperscript.org

laughs nervously

alexvitkov yesterday at 3:35 PM

Computers do what you tell them to, not what you want them to. This is naturally infuriating, as when a computer doesn't do what you want, it means you've failed to express your vague idea in concrete terms.

LLMs in the most general case do neither what you tell them, nor what you want them to. This, surprisingly, can be less infuriating, as now it feels like you have another actor to blame - even though an LLM is still mostly deterministic, and you can get a pretty good idea of what quality of response you can expect for a given prompt.

dr_dshiv yesterday at 7:40 AM

He didn't understand the concept of the vibe. Here's the best theory article I've read:

https://www.glass-bead.org/article/a-theory-of-vibe/

johnwatson11218 yesterday at 7:14 PM

Why did mathematicians invent new symbols? Imagine if all of algebra, calculus, and linear algebra looked like those word problems from antiquity. Natural language is not good for describing systems; symbolic forms are more compressed and can be considered a kind of technology in their own right.

ur-whale yesterday at 6:45 PM

> The foolishness of "natural language programming"

Wasn't that the actual motivation behind the development of SQL?

IIRC, SQL was something that "even" business people could code in because it was closer to "natural language".

When you see the monstrosity the motivation gave birth to, I think the "foolish" argument was well warranted at the time.

Of course, in these days of LLMs, Dijkstra's argument isn't as clear cut (even if LLMs aren't there yet, they're getting much closer).

James_K yesterday at 10:32 AM

It's pretty obvious to me that this LLM business won't be economically feasible until it can actually produce better code than a team of humans could without it. The reason programmers are paid so highly is because their work is incredibly productive and valuable. One programmer can enable and improve the work of many hundreds of other people. Cost cutting on the programmer isn't worth it because it'll create greater losses in other places. Hence the high salaries. Every saving you make on the programmer is magnified a hundred times in losses elsewhere.

