
Capricorn2481 · last Tuesday at 11:17 PM

> But looks like your goal is not really to be effective at using LLMs, but to bitch about it on the internet

Listen, you can either engage with the comment or ignore everything but the first sentence and throw out personal insults. If you don't want to sound like a shill, don't write like one.

When you tell people the problem is that the LLM didn't have tests, you're saying "Yeah, I know you caught it spitting out random unrelated crap, but if you'd just let it verify whether it was crap or not, maybe it would get it right after a dozen tries." Does that not seem like a horribly ineffectual way to produce code? Maybe that's how some people write code, but I run tests to check whether I accidentally broke something elsewhere, not because I have no idea what I'm writing to begin with.

You wrote

> Without that they cannot know if what they did works, and they are a bit like humans

They are exactly not like humans in this way. LLMs break code by not writing valid code to begin with; humans break code by forgetting an obscure business rule they heard about six months ago. People work on very successful projects without tests all the time. It's not my preference, but tests are non-exhaustive and no replacement for a human who knows what they're doing. And the tests are meaningless without that human writing them.

So your response to that comment, pushing them further down the path of agentic coding doing everything for them, smacks of maximalism, yes.