The gist ("product becomes a black box") applies to any abstraction. It could apply to high-level languages (including so-called "low-level" languages like C), and few people criticize those.
But LLMs are particularly insidious because they're an unusually leaky abstraction. If you ask an LLM to implement something:
- First, there's only a chance it will output something that works at all
- Then, it may fail on edge-cases
- Then, unless it's very trivial, the code will be spaghetti, so neither you nor the LLM can extend it
Vs a language like C, where you can't recover the original source from the assembly, but the assembly is almost certainly correct. When GCC or Clang does fail, only an expert can figure out why, but that happens rarely enough that there's always an expert available to look at it.
Even if LLMs get better, English itself is a bad programming language, because it's imprecise and not modular. You can't describe tasks like "style a website exactly how I want" or "implement this complex algorithm" without inventing jargon (or becoming far more verbose without it), at which point you'd spend less effort, and write less, using a real programming language.
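To make the precision gap concrete (my own toy example, not from the thread): even a simple requirement like "sort users by signup date, newest first, ties broken by name" leaves questions open in English (descending on which field? is the tie-break case-sensitive? is the sort stable?) that the code version settles in two lines:

```python
# Toy data, invented for illustration.
users = [
    {"name": "Ada", "signup": "2024-03-01"},
    {"name": "Bob", "signup": "2024-03-01"},
    {"name": "Cy",  "signup": "2023-11-15"},
]

# Two stable sorts: name ascending first, then signup descending,
# so ties on signup keep name order. Every ambiguity in the English
# phrasing is now pinned down.
ordered = sorted(users, key=lambda u: u["name"])
ordered.sort(key=lambda u: u["signup"], reverse=True)
print([u["name"] for u in ordered])
```

The code is shorter than any English description precise enough to be unambiguous.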
If people end up producing all code (or art) with AI, it won't be through prompts, but fancy (perhaps project-specific) GUIs if not brain interfaces.
I agree, there is a reason we settled on programming languages as an interface for instructing the machine. Ultimately it is a tool for expressing our thoughts as precisely as possible in a certain domain.
People who don't understand the tools they use are doomed to reinvent them.
Perhaps the interface will evolve into pseudocode, where the AI fills in anything undefined or boilerplate with best estimates.
> Even if LLMs get better, English itself is a bad programming language, because it's imprecise and not modular.
I absolutely agree. I've only dabbled in AI coding, but every time I feel like I can't quite describe to it what I want. IMO we should be looking into writing some kind of pseudocode for LLMs. Writing code is the best way to describe code, even if it's just pseudocode.
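For instance (a made-up sketch of the idea, not an existing tool or syntax), the prompt could be structured pseudocode where only the genuinely fuzzy parts are left to the model:

```
function dedupe_events(events):
    # LLM decides: two events are "duplicates" if titles are
    # near-identical and start times are close
    groups = cluster_duplicates(events)
    for each group:
        keep the event with the most complete metadata
    return kept events, sorted by start time
```

The structure, data flow, and return contract are fixed by the human; the model only fills in the judgment calls.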
How is this different from giving a person requirements and asking them to implement something?
I built a Klatt formant synthesizer with Claude from ~120 academic papers. Every parameter cites its source, and the architecture is declarative YAML with phased rule execution and dependency resolution.
Here's what "spaghetti" looks like:
https://github.com/ctoth/Qlatt/blob/master/public/rules/fron...
One thing I want to point out. Before I started, I gathered and asked Claude to read all these papers:
https://github.com/ctoth/Qlatt/tree/master/papers
Producing artifacts like this:
https://github.com/ctoth/Qlatt/blob/master/papers/Klatt_1980...
Note how every rule and every configurable thing in the synthesizer pipeline has a citation to a paper? I can generate a phrase, and with the "explain" command I can see precisely why a certain word was spoken a certain way.
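To illustrate the shape being described (this fragment is invented for illustration; the names, values, and structure are not from the actual repo), a rule in such a pipeline might look like:

```yaml
# Hypothetical rule, not an actual Qlatt file. Shows the pattern
# described above: declarative rules with phases, dependencies,
# and a per-rule citation back to a source paper.
rules:
  - name: front-vowel-f2-raise
    phase: formant-targets
    depends_on: [vowel-classification]
    params:
      f2_shift_hz: 150
    citation: "Klatt (1980)"
```

With citations attached at the rule level, an "explain" command only has to walk the rules that fired and print each one's citation.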