Hacker News

joduplessis today at 3:29 PM

I really wish seemingly intelligent people would stop using the abstraction analogy (like the article does). The key word is: determinism. Every level of abstraction (inc. power tools, C, etc.) added a deterministic layer you can rely on to more effectively do whatever it is that you're doing - same result, every time. LLMs use natural language to describe programming and the result is varied at the very best (hence agents, so we can brute force the result instead). I think the real moat is becoming the person who can actually still program.


Replies

phpnode today at 4:01 PM

People always say this but it's misguided imo. Yes, LLMs are not deterministic, but that's totally irrelevant. You aren't executing the LLM's output directly, you're using the LLM to produce an artefact once that is then executed deterministically. A spec gets turned into code once. Editing the spec can cause the code to be updated, but it's not recreating the whole program each time, so why does determinism matter?
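To make that split concrete, here's a rough sketch of "generate once, run deterministically". None of this is a real provider API: `call_llm`, `spec.md`, and `generated_tool.py` are hypothetical placeholders.

```python
# Sketch of the "generate once, run deterministically" workflow.
# call_llm, spec.md and generated_tool.py are placeholders, not a real provider API.
from pathlib import Path
import subprocess

SPEC = Path("spec.md")                # hypothetical spec file
ARTIFACT = Path("generated_tool.py")  # the generated, reviewed, checked-in artifact

def call_llm(prompt: str) -> str:
    """Placeholder: send the spec to whatever LLM client you use, return source code."""
    raise NotImplementedError

def build() -> None:
    # The only non-deterministic step. It runs once (or whenever the spec changes),
    # and its output gets reviewed and tested before it is kept.
    ARTIFACT.write_text(call_llm(SPEC.read_text()))

def run(*args: str) -> subprocess.CompletedProcess:
    # Everything downstream is ordinary, deterministic execution of the artifact.
    return subprocess.run(["python", str(ARTIFACT), *args], capture_output=True, text=True)

if __name__ == "__main__":
    if not ARTIFACT.exists():
        build()
    print(run("--help").stdout)
```

The LLM's variance only shows up in the build step; once the artifact exists, running it is as repeatable as any other program.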

NiloCK today at 5:23 PM

I grant that there's a definition of abstraction that LLMs don't fall into. But people describing LLMs as another abstraction layer aren't all misunderstanding this. Instead, they are using the term ... more abstractly.

E.g.: How did Mark Zuckerberg make software five years ago?

He's as capable of opening up an editor as I am, but circumstance offered him a different interface: human resources. Instead of the editor, he interacts with those humans, who produce the software. This layer between him and the built systems is an abstraction, deterministic or not.

Today, you and I have a broader delegation mandate over many tasks than we did a few years ago.

arecsu today at 6:43 PM

There's something to be said about the fact that the very people who would use deterministic layers to build stuff are... non-deterministic. We, as humans, have our own sets of pros and cons, wins and failures. Even the most brilliant coders on earth will make mistakes from time to time. I often fail to see this accounted for in any conversation that critiques LLMs, as if we humans are not flawed in our own ways, with a huge degree of variance across individuals. Good and bad code existed prior to LLMs. If you're hiring someone to write code, you're basically using some heuristics to trust that this person will do a good job. But nothing is ever guaranteed 100% deterministically. Without overthinking it, LLMs will sometimes produce better code and manage systems better than some people who are earning salaries out there. Possibly sub-par developers, if we're being precise, but professionals in the meaning of the word (people being paid to do the work).

At the end of the day, what matters is how willing the person behind a given task is to deliver quality work, how transparent and honest they are, how well they understand requirements, and whether they are a pleasure to work with alongside other humans. AI/LLMs are just extra tools for them. As crazy as it might sound, not that many people are willing to push boundaries and deliver great work. That is what makes the difference.

ansk today at 6:08 PM

I see what you're getting at, but determinism isn't the right word either. LLMs are fundamentally deterministic -- they are pure functions which output text as a function of the input text and the network parameters[1]. Depending on your views on free will, it could be effectively argued that humans are deterministic as well.

The concept you're touching on is the idea that LLMs (and humans) are functions which are inscrutable. Their behavior cannot be distilled into a series of logical steps that you can fit in your head, there are no invariants which neatly decompose their complexity into a few interpretable states, and the input and output spaces are unstructured, ambiguous, underspecified, and essentially infinite. This makes them just about impossible to reason about or compose using the same strategies and analysis we apply to traditional programs.

[1] Optionally, they can take in a source of entropy to add nondeterminism, but this is not essential. If LLM providers all fixed their prng seeds to a static value, hardly anyone would notice. I can't imagine there are many workflows which feed an LLM the exact same prompt multiple times and rely on the output having some statistical distribution. In fact, even if you wanted this you may just end up getting a cached response.
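A toy illustration of the point in [1] (the "model", its weights, and the token space here are made up for the example, not a real LLM): with greedy decoding the output is a pure function of the input tokens and the weights, and sampling only varies if you feed in an explicit entropy source.

```python
# Toy sketch: greedy decoding is a pure function of (input tokens, weights);
# sampling only becomes non-repeatable when the seed/entropy source changes.
import numpy as np

W = np.random.default_rng(0).normal(size=(50, 50))  # stand-in for "the network parameters"

def logits(tokens: list[int]) -> np.ndarray:
    # Hypothetical toy "model": a fixed linear map over a bag-of-tokens vector.
    x = np.bincount(tokens, minlength=50).astype(float)
    return W @ x

def generate(tokens: list[int], steps: int, seed: int | None = None) -> list[int]:
    out = list(tokens)
    rng = np.random.default_rng(seed) if seed is not None else None
    for _ in range(steps):
        l = logits(out)
        if rng is None:
            out.append(int(np.argmax(l)))                 # greedy: same output every time
        else:
            p = np.exp(l - l.max())
            p /= p.sum()
            out.append(int(rng.choice(len(p), p=p)))      # sampled: varies only with the seed
    return out

assert generate([1, 2, 3], 5) == generate([1, 2, 3], 5)                  # deterministic
assert generate([1, 2, 3], 5, seed=7) == generate([1, 2, 3], 5, seed=7)  # fixed seed, repeatable
```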

qnleigh today at 5:14 PM

LLMs don't have to achieve perfect reliability to replace lots of work. They just have to hit a balance of reliability and cost suitable for a given task, and that balance will depend on the task.

danaw today at 3:40 PM

every time a person uses the abstraction argument, an angel dies