Hacker News

helloplanets today at 5:08 AM

And when programming with agentic tools, you need to actively push to keep the idea from regressing to the most obvious/average version. The amount of effort you need to expend on pushing an idea that deviates from the 'norm' (because it's novel) is actually comparable to the effort it takes to type something out by hand. Just two completely different types of effort.

There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.


Replies

fallous today at 5:51 AM

You just described the burden of outsourcing programming.

GCUMstlyHarmls today at 5:58 AM

I can't help but imagine training horses vs training cats. One of them is rewarding, a pleasure, beautiful to see; the other is frustrating, leaves you with a lot of scratches, and ultimately ends with both of you "agreeing" on a marginal compromise.

rixed today at 12:18 PM

To me it feels a bit like literate programming: it forces you to form a much more accurate idea of your project before you start. Not a bad thing, but it can also be wasteful when you eventually realise after the fact that the idea wasn't actually that good :)

fflluuxx today at 11:02 AM

This is why people think less of artists like Damien Hirst and Jeff Koons: their hands have never once touched the art. They have no connection to the effort. To the process. To the trial and error. To the suffering. They’ve outsourced it, monetized it, and made it as efficient as possible. It’s also soulless.

dkdbejwi383 today at 9:12 AM

Fair enough, but I am a programmer because I like programming. If I wanted to be a product manager, I could have made that transition with or without LLMs.

jiveturkey today at 5:57 AM

> need to make it crystal clear

That's not an upside that's unique to LLM vs human-written code. When writing it yourself, you also need to make it crystal clear. You just do it in the language of implementation.

Der_Einzige today at 6:12 AM

Yet another example of "comments that are only sort of true because high temperature sampling isn't allowed".

If you use LLMs at very high temperature with samplers that keep the output coherent (e.g. min_p, or better ones like top-h or p-less decoding), then "regression to the mean" literally DOES NOT HAPPEN!!!!
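For what it's worth, here's a minimal sketch of that setup with Hugging Face transformers, which exposes both temperature and min_p on generate() in recent releases. The model name, prompt, and parameter values below are placeholder assumptions, not recommendations:

    # Minimal sketch: high-temperature sampling kept coherent with min-p.
    # Assumes a recent transformers release that supports the min_p option.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder; swap in whatever causal LM you actually use
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("A deliberately non-obvious design idea:", return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        temperature=2.0,   # very high temperature flattens the distribution
        min_p=0.1,         # min-p drops tokens below 10% of the top token's probability
        max_new_tokens=60,
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The idea is that min-p prunes only the genuinely implausible tail, so you can crank temperature far higher than top-p/top-k would tolerate without the output falling apart.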

show 2 replies