
samrus · today at 4:37 PM

Great article. I agree with the argument.

But to offer a counterargument: would the same thing not have happened with the rise of high-level languages? Machine code was abstracted away from engineers and they lost their understanding of it, knowing only what the high-level code was supposed to do. But that turned out fine. Would LLMs abstracting away the code, so that engineers only understand the functionality (specs, tests), also be fine for the same reason? Why didn't cognitive debt rise with high-level languages?

A counter-counterargument is that compilers are deterministic, so understanding the procedure of the high-level language meant you understood the procedure of the machine code that mattered, and the stuff abstracted away wasn't necessary to the code's operation. But LLMs are probabilistic, so understanding the functionality does not mean understanding the procedure of the code in the ways that matter. But I'd love to hear other people's thoughts on that.


Replies

kibwen · today at 5:01 PM

> would the same thing not have happened with the rise of high level languages?

Any argument that attempts to frame LLMs as analogous to compilers is too flawed to bother pursuing. It's not that compilers are deterministic (an LLM can also be deterministic if you have control over the seed), it's that the compiler as a translator from a high level language to machine code is a deductive logical process, whereas an LLM is inherently inductive rather than deductive. That's not to say that LLMs can't be useful as a way of generating high level code that is then fed into a compiler (an inductive process as a pipeline into a deductive process), but these are fundamentally different sorts of things, in the same way that math is fundamentally different from music (despite the fact that you can apply math to music in plenty of ways).
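The aside about seeds is worth making concrete. A toy sampler (hypothetical, not any real LLM API) shows that "deterministic" here just means reproducible output, which has nothing to do with whether the process is deductive or inductive:

```python
import math
import random

def sample_next_token(logits, seed=None, temperature=1.0):
    """Pick a token index from raw scores -- a toy stand-in for LLM decoding."""
    if temperature == 0:
        # Greedy decoding: always the argmax, fully deterministic with no seed.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled scores (subtract max for stability).
    scaled = [score / temperature for score in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    # A fixed seed makes the stochastic draw repeatable, run after run.
    rng = random.Random(seed)
    return rng.choices(range(len(logits)), weights=exps)[0]

logits = [0.1, 2.5, 0.3]
assert sample_next_token(logits, temperature=0) == 1  # greedy picks the argmax
assert sample_next_token(logits, seed=7) == sample_next_token(logits, seed=7)  # seeded runs agree
```

So you can make the sampling repeatable, but that doesn't turn the model's inductive guess into a compiler's deductive translation.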

wrs · today at 5:31 PM

“Programs must be written for people to read, and only incidentally for machines to execute.” — Harold Abelson

The purpose of high level languages is to make the structure of the code and data structures more explicit so it better captures the “actual” program model, which is in the mind of the programmer. Structured programming, type systems, modules, etc. are there to provide solid abstractions in which to express that model.

None of that applies to giving an LLM a feature idea in English and letting it run. (Though all of it is helpful for keeping an LLM from going completely off the rails.)
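The point about type systems capturing the programmer's model can be sketched in a few lines (the `Order`/`Status` domain is made up for illustration):

```python
from dataclasses import dataclass
from enum import Enum

# The type declarations spell out the model directly: an order has exactly
# these states, and a reader (or type checker) sees that without running anything.

class Status(Enum):
    PENDING = "pending"
    SHIPPED = "shipped"

@dataclass(frozen=True)
class Order:
    order_id: int
    status: Status

def is_open(order: Order) -> bool:
    # The set of valid states lives in the Status type, not in the reader's memory.
    return order.status is Status.PENDING

assert is_open(Order(1, Status.PENDING))
assert not is_open(Order(2, Status.SHIPPED))
```

Compare that with passing around bare dicts and strings, where the "actual" model exists only in the programmer's head — which is exactly the situation an English prompt to an LLM puts you back in.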

avaer · today at 4:43 PM

I think it won't be too different once we see a few upgrades that are going to be required for reliability (and for scaling up the AI-assisted engineering process):

  - deterministic agents, where the model guarantees the same output for a given seed
  - much faster coding agents, which would let us "compile" or "execute" natural language without noticing the LLM
  - maybe just running the whole thing locally, so privacy and reliability are not an issue

We're not there yet, but once we have that, I agree there won't be much difference between using a high-level language and plain text.

There's going to be a massive shift in programming education though, because knowing an actual programming language won't matter any more than knowing assembly does today.

nottorp · today at 5:43 PM

> But that turned out fine.

It did not turn out fine. Fortunately no one took it seriously, and at least seniors still have an intuitive model of how the hardware works in their head. You don't have to "see" the whole assembly language when writing high level code, just know enough about how it goes at lower levels that you don't shoot yourself in the foot.

When that's missing, due to lack of knowledge or perhaps time constraints, you end up on Accidentally Quadratic, or someone names a CVE after you.
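A textbook instance of the foot-gun being described here (the deduplication task is my own illustration, not from the thread): membership tests against a list look innocent at the high level, but each `in` scans the whole list, so the loop is O(n²); the fix only occurs to you if you know what the abstraction is doing underneath.

```python
def dedup_quadratic(items):
    """Keep first occurrences -- accidentally O(n^2)."""
    seen = []
    out = []
    for x in items:
        if x not in seen:   # linear scan of `seen` on every iteration
            seen.append(x)
            out.append(x)
    return out

def dedup_linear(items):
    """Same result, O(n), because `in` on a set is an O(1) hash lookup."""
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

data = [i % 100 for i in range(1000)]
assert dedup_quadratic(data) == dedup_linear(data) == list(range(100))
```

Both functions are "correct" at the level of functionality and tests, which is precisely why understanding only the spec isn't enough.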

gitanovic · today at 4:48 PM

I was also having a similar thought, and I think you wrote the answer I couldn't put my finger on. Compilers are deterministic; AI is a stochastic process that doesn't always converge to exactly the same answer. That's the main difference.

cowlby · today at 5:59 PM

Yes, my hot take is that the real risk isn't skill atrophy... it's failing to develop the new skill of using AI. It's all abstraction layers anyway, and people always lament the next abstraction up.

0/1s → assembly → C → high-level languages → frameworks → AI → product

The engineer keeps moving up the abstraction chain with less and less understanding of the layers below. The better solution would be creating better verification, testing, and determinism at the AI layer. Surely we'll see the equivalent of high-level languages and frameworks for AI soon.