In the late 1950s, COBOL was introduced with the promise that programs could be written almost as if one were speaking English. But people eventually realized that writing COBOL well, in a style that actually reads like English, was itself difficult.
Today, we are hearing a similar claim: “If you can describe the program in natural language, programming is basically finished.” But the industry is now discovering that describing the program well is the hard part.
This is also why ideas like harness engineering are appearing: methods for controlling the range of outputs, from poor to excellent, that can emerge from minimal input.
And honestly, I do not think the “vibe coding” phenomenon is entirely bad. The essence of programming is automation. Many people were previously limited because they did not know programming languages. Now, through AI, they can express themselves and turn that expression into working apps. Seeing this, I understand how deeply people have wanted to create.
I write industrial software that runs in large factory environments, and by the nature of that work it is hard for me to use AI directly: these environments are usually closed networks, so AI brings little benefit to my own production work. Even so, I still defend AI, because it functions as a new kind of voice that lets more people express themselves.
Of course, capitalism distorts this. Many people use AI to chase money and capital, and as a result, a lot of low-quality apps are being produced. But on the other hand, what is wrong with the motivation of wanting to make something one wants to make?
I have been studying the history of programming, and I like Dijkstra’s famous line:
> Computer science is no more about computers than astronomy is about telescopes.
To me, this means that computing is fundamentally about automation.
AI has existed as a research topic almost since the birth of computers. We tend to think of it as recent, but it is a field with a history of more than sixty years. Ever since early work such as the Perceptron, there have been people claiming that AI was a fraud or an illusion.
But now a new seed has germinated. The amount of complexity that a single human can handle has increased. Historically, the techniques for managing that complexity were things like programming patterns and software architecture. And even people who strongly argued for software architecture also warned that if architecture becomes detached from code, then something has gone wrong.
Memes always damage the essence of ideas. As information circulates, it degrades, and eventually the original meaning disappears.
The Dunning-Kruger effect is a good example. The original paper was not simply saying, "ignorant people show off, while knowledgeable people do not." It was about metacognition: the less competent tend to overestimate their own performance because they lack the skill to judge it, while the more competent tend to underestimate theirs. But the idea became distorted as it spread.
The same thing happens to many famous ideas in programming. Knuth’s statement about premature optimization is also constantly distorted as it circulates.
In that situation, can we really say it is always bad to step away from online communities and learn through AI while cross-checking against books?
When I see people making extreme claims about this, I sometimes find it absurd. Of course, many people may flag or downvote my comment. But this is how I see it.