Hacker News

LLM Doesn't Write Correct Code. It Writes Plausible Code

44 points | by pretext | today at 2:38 PM | 36 comments

Comments

treetalker · today at 3:30 PM

This is my experience with how LLMs "draft" legal arguments: at first glance, it's plausible — but may be, and often is, invalid, unsound, and/or ill-advised.

The catch is that many judges lack the time, energy, or willingness not only to read the documents in detail, but also to roll up their sleeves and dig into the arguments and cited authorities. (Some lack the skills, but those are extreme cases.) So the plausible argument (improperly and unfortunately) carries the day.

LLM use in litigation drafting is thus akin to insurgent/guerrilla warfare: it takes little time, energy, or thinking to create, yet orders of magnitude more to analyze and refute. (It's a species of Brandolini's Law / the Bullshit Asymmetry Principle.) Thus justice suffers.

I imagine that this is analogous to the cognitive, technical, and "sub-optimal code" debt that LLM-produced code is generating and foisting upon future developers who will have to unravel it.

andai · today at 3:59 PM

It writes statistically represented code, which is why (unless instructed otherwise) everything defaults to enterprisey, OOP, "I installed 10 trendy dependencies, please hire me" type code.

seanmcdirmid · today at 3:38 PM

Ok, I’ll bite: how is that different from humans?

satvikpendem · today at 3:55 PM

Oftentimes, plausible code is good enough, hence why people keep using AI to generate code. This is a distinction without a difference.

bitwize · today at 3:53 PM

You: Claude, do you know how to program?

Claude: No, but if you hum a few bars I can fake it!

Except "faking it" turns out to be good enough, especially if you can fake it at speed and get feedback as to whether it works. You can then just hillclimb your way to an acceptable solution.
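The generate-at-speed, score-against-feedback, keep-improvements loop described here is plain hill climbing. A minimal sketch, with an entirely invented toy target (fitting the coefficient of `f(x) = 2*x` against a tiny test suite) standing in for "does the code work":

```python
import random

# Toy sketch of "fake it at speed, get feedback, hillclimb": everything
# here (the target behavior, the scoring, the mutation) is invented for
# illustration. We guess a coefficient, measure its total error against
# a small test suite, and keep any random tweak that scores better.

TESTS = [(0, 0), (1, 2), (2, 4), (3, 6)]  # expected behavior: f(x) = 2*x

def score(coeff):
    """Higher is better: negative total error against the test suite."""
    return -sum(abs(coeff * x - want) for x, want in TESTS)

def hillclimb(start=0, steps=200, seed=0):
    rng = random.Random(seed)
    best = start
    for _ in range(steps):
        candidate = best + rng.choice([-1, 1])  # small random tweak
        if score(candidate) > score(best):      # keep only improvements
            best = candidate
    return best

print(hillclimb())  # climbs toward the correct coefficient, 2
```

The acceptance test here is the whole trick: with a cheap, reliable "does it work?" signal, even blind mutation converges; without one, you only converge on plausibility.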
