Hacker News

ck2 · today at 7:23 PM · 8 replies

If machine learning can find all these holes,

why can't machine learning write a product from scratch that is flawless?


Replies

yjftsjthsd-h · today at 7:27 PM

Who said it can't? https://news.ycombinator.com/item?id=47759709 appears to be a nearly flawless (per spec) zip implementation.

tclancy · today at 7:56 PM

Because the problem is asymmetric: the attacker only needs to find one hole at one time. The defender has to be flawless forever.

perlgeek · today at 7:56 PM

LLMs certainly make it more feasible to rewrite a product in a memory-safe language, eliminating a whole class of bugs.
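To make the "whole class of bugs" point concrete, here is a minimal sketch (Rust chosen purely as an example of a memory-safe language; nothing in the thread specifies one): out-of-bounds reads, the raw material of many C/C++ vulnerabilities, become either a safe `None` or a deterministic panic rather than silent memory corruption.

```rust
fn main() {
    let data = vec![10u8, 20, 30];

    // Checked access: an out-of-range lookup yields None instead of
    // reading whatever bytes happen to sit past the buffer, as it
    // could in C.
    assert_eq!(data.get(1), Some(&20));
    assert!(data.get(5).is_none());

    // Plain indexing (data[5]) is also bounds-checked: it would panic
    // deterministically rather than corrupt memory.

    // Use-after-free is rejected at compile time; the commented code
    // below fails the borrow checker:
    // let dangling = {
    //     let v = vec![1, 2, 3];
    //     &v[0]
    // }; // error[E0597]: `v` does not live long enough

    println!("bounds-checked access worked");
}
```

The class of bugs is eliminated by construction: nothing the LLM (or a human) writes in safe Rust can express the out-of-bounds read in the first place.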

Flawless software is hard for an LLM to write, because the programs it was trained on are flawed as well.

As a fun exercise, you could give a coding agent a hunk of non-trivial software (such as the Linux kernel, or postgresql, or whatever), and tell it over and over again: find a flaw in this, fix it. I'm pretty sure it won't ever tell you "now it's perfect" (and do this reproducibly).

_flux · today at 7:38 PM

Just because something is good at finding bugs doesn't mean it finds all the bugs. Finding a bug only tells you that one bug existed; it doesn't tell you whether the rest is solid.

chromacity · today at 8:09 PM

If humans can find bugs, why can't humans write flawless code?

Whatever the answer to that conundrum might be, LLMs are trained on these patterns and replicate them pretty faithfully.

hnlmorg · today at 7:40 PM

It’s easier to break something than it is to make something that cannot be broken.

jonhohle · today at 7:52 PM

Have you ever met a security engineer? I’ve never met one who was also a good engineer (not saying they don’t exist, I just haven’t met one). Do they find vulnerabilities? Sure. Could they write the tools they use to find vulnerabilities? Most probably not.

duped · today at 8:27 PM

You could argue the answer to this question depends on whether you believe P = NP: checking a candidate flaw is cheap, but producing a program guaranteed to have none may be fundamentally harder.