
imiric yesterday at 12:21 PM

I mostly agree with you, but I see what afro88 is saying as well.

If you consider a human programmer as a "black box", in the sense that you feed it a set of inputs (the problem that needs to be solved, vague requirements, etc.) and expect a functioning program as output that solves the problem, then that process is just as nondeterministic as an LLM's. Ensuring that the process is reliable in both scenarios boils down to creating detailed specifications, removing ambiguity, and iterating on the product until the acceptance tests pass.
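
To make that concrete, here is a minimal sketch of that outer spec-generate-test loop. The names generate_program() and acceptance_tests() are hypothetical placeholders for the black box (human or LLM) and the verification step, not anyone's actual tooling:

    from typing import Optional

    def generate_program(spec: str) -> str:
        """Nondeterministic black box: a human or an LLM producing code from a spec."""
        raise NotImplementedError  # placeholder for the actual generator

    def acceptance_tests(program: str) -> bool:
        """Deterministic check: does the produced program satisfy the spec?"""
        raise NotImplementedError  # placeholder for the actual test suite

    def iterate_until_accepted(spec: str, max_attempts: int = 5) -> Optional[str]:
        """Reliability comes from this outer loop, not from the generator itself."""
        for _ in range(max_attempts):
            candidate = generate_program(spec)
            if acceptance_tests(candidate):
                return candidate
        return None  # still fails if the spec itself was too ambiguous

The point of the sketch is just that the loop is the same for either generator; what differs is how many iterations it takes and how well ambiguity in the spec gets resolved.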

Where I think there is a disconnect is that humans are far more capable of producing reliable software given a fuzzy set of inputs. First of all, they have an understanding of human psychology, and can actually reason about semantics in ways that a pattern-matching and token-generation tool cannot. And in the best case, experienced programmers have an intuitive grasp of the problem domain and know how to resolve ambiguities in meatspace. LLMs at their current stage can at best approximate these capabilities by integrating with other systems and data sources, so their nondeterminism is a much bigger problem. We can hope that the technology will continue to improve, as it clearly has in the past few years, but that progress is not guaranteed.


Replies

kshri24 yesterday at 12:58 PM

Agree with most of what you say. The only reason I say humans are different from LLMs when it comes to being a "black box" is that you can probe humans. For instance, I can ask a human to explain how they reached a conclusion and retrace the path they took from known inputs to that conclusion. This can even be correlated with, say, brain imaging, by mapping thoughts to the neurons firing in that region of the brain. So you can have a fairly accurate understanding of the path taken. I cannot probe an LLM that way, at least not with the tools we have today.

> Where I think there is a disconnect is that humans are far more capable at producing reliable software given a fuzzy set of inputs.

Yes, true. Another thought that comes to mind: I feel it might also have to do with us not perceiving other humans as alien in the way LLMs are. So there is an inherent trust deficit with LLMs compared to humans. Inherent trust in human beings, even when they are less capable, is what makes the difference. From everything else we inherently expect proper determinism, and trust is built on that. I am more forgiving if a child computes 2 + 1 = 4, and will find it in me to correct the child; I won't consider it a defect. But if a calculator computes 2 + 1 = 4 even once, I would immediately discard it and never trust it again.

> We can hope that the technology will continue to improve, as it clearly has in the past few years, but that progress is not guaranteed.

Agreed.