Hacker News

dimtion · yesterday at 11:52 PM

I'm not sure why people struggle with the fact that an abstraction can be built on top of a non-deterministic and stochastic system. Many such abstractions already exist in the world we live in.

Take sending a packet over a noisy, low SNR cell network. A high number of packets may be lost. This doesn't prevent me, as a software developer, from building an abstraction on top of a "mostly-reliable" TCP connection to deliver my website.

There are times when the service doesn't work, particularly when the packet loss rate is too high. I can still incorporate these failures into my mental model of the abstraction (e.g. through TIMEOUTs, CONN_ERRs…).
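The retry-with-timeout pattern described above can be sketched in a few lines. This is a minimal toy, not real networking code: `unreliable_send` is a hypothetical lossy channel standing in for the noisy link, and the failure mode (`ConnError`) stays part of the abstraction's interface rather than being hidden.

```python
import random
import time

class ConnError(Exception):
    """Raised when retries are exhausted -- failure is part of the API, not hidden by it."""

def unreliable_send(packet: bytes) -> bool:
    # Hypothetical lossy channel: each attempt independently fails 30% of the time.
    return random.random() > 0.3

def reliable_send(packet: bytes, retries: int = 5, timeout: float = 0.01) -> int:
    """A 'mostly-reliable' abstraction: retry with linear backoff, then fail loudly."""
    for attempt in range(1, retries + 1):
        if unreliable_send(packet):
            return attempt              # success: the caller never sees the lost attempts
        time.sleep(timeout * attempt)   # back off before retrying
    raise ConnError(f"gave up after {retries} attempts")
```

The caller's mental model needs only two outcomes: the packet arrived, or a `ConnError` with a known worst case (retries exhausted). The underlying loss rate is abstracted away.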

Much of engineering and reliability history revolves around building mathematical models on top of an unpredictable world. We are far from solving this problem with LLMs, but this doesn't prevent me from thinking of LLMs as a new level of abstraction that can edit and transform code.


Replies

zadikian · today at 12:53 AM

I'm fine with that. The part that makes it not really an abstraction is that you still deliver code in the end. It'd be different if your deliverable were prompt+conversation, and the code were merely an intermediate build artifact. Usually people throw away the convo. Some have tried making markdown files the deliverable instead; so far that doesn't really work.

It makes even less sense when people compare an LLM to a compiler. Imagine making a pull request that's just adding a binary because you threw the source code away.

distalx · today at 12:06 AM

A transmission error has a strictly contained, predictable blast radius. If a packet drops, the system knows exactly how to handle it: it throws a timeout, drops a connection, or asks for a retry. The worst-case scenario is known.

A reasoning error has an unbounded, unpredictable blast radius. When an LLM hallucinates, it doesn't fail safely; it writes perfectly compiling code that does the wrong thing. That "wrong thing" might just render a button incorrectly, or it might silently delete your production database, or open a security backdoor.

You can build reliable abstractions over failures that are predictable and contained. You cannot abstract away unpredictable destruction.

evrydayhustling · today at 12:17 AM

Setting aside deeply unpredictable factors (like signal transmission), most users of higher-level abstractions use them without certainty about how the translation will be executed. For example, one of the main selling points of C when I was growing up was that you could write code independent of architecture, and leave the architecture-specific translation into assembly to the compiler!

Abstractions often embrace nondeterministic translation because lower-level details are unknown at the time of expression -- which is the motivation for many LLM queries.

yomismoaqui · today at 2:10 AM

The kicker is when you delegate some work to another team member and discover that humans are also non-deterministic.

dominotw · today at 12:41 AM

That would make sense if the AI said "fail, I don't know". Its active deception is what makes it difficult.