Hacker News

oncallthrow today at 11:52 AM

I think this article is largely, or at least directionally, correct.

I'd draw a comparison to high-level languages and language frameworks. Yes, 99% of the time, if I'm building a web frontend, I can live in React world and not think about anything that is going on under the hood. But, there is 1% of the time where something goes wrong, and I need to understand what is happening underneath the abstraction.

Similarly, I now produce 99% of my code using an agent. However, I still feel the need to thoroughly understand the code, in order to be able to catch the 1% of cases where it introduces a bug or does something suboptimally.

It's possible that in the future, LLMs will get _so_ good that I don't feel the need to do this, in the same way that I don't think about the transistors my code is ultimately running on. For straightforward coding tasks, I think they're already there, but they aren't quite at that point when it comes to large distributed systems.


Replies

spicyusername today at 12:28 PM

So we already have this problem and things are "fine"?

mbbutler today at 12:47 PM

In my personal experience, the rate at which Claude Code produces suboptimal Rust is way higher than 1%.

kgwxd today at 12:28 PM

> LLMs will get _so_ good that I don't feel the need to do this, in the same way that I don't think about the transistors my code is ultimately running on.

The problem is, they're nothing like transistors, and never will be. Transistors are simple: they either work or they don't, consistently, in an obvious or easily testable way.

LLMs are more akin to biological things: complex, not well understood, unpredictable in behavior. To be safely useful, they need something like a lion tamer, except every individual LLM is its own unique species.

I like working on computers because it minimizes the amount of biological-like things I have to work with.
