logicprog · yesterday at 11:47 PM

"It’s very unsettling, then, to find myself feeling like I’m in danger of being left behind - like I’m missing something. As much as I don’t like it, so many people have started going so hard on LLM-generated code in a way that I just can’t wrap my head around.

...

I’ve been using Copilot - and more recently Claude - as a sort of “spicy autocomplete” and occasional debugging assistant for some time, but any time I try to get it to do anything remotely clever, it completely shits the bed. Don’t get me wrong, I know that a large part of this is me holding it wrong, but I find it hard to justify the value of investing so much of my time perfecting the art of asking a machine to write what I could do perfectly well in less time than it takes to hone the prompt.

You’ve got to give it enough context - but not too much or it gets overloaded. You’re supposed to craft lengthy prompts that massage the AI assistant’s apparently fragile ego by telling it “you are an expert in distributed systems” as if it were an insecure, mediocre software developer.

Or I could just write the damn code in less time than all of this takes to get working."

Well there's your problem. Nobody does role-based prompts anymore, and the entire point of coding agents is that they search your code base, do internet searches, and do web fetches, as well as launch sub-agents and use todo lists, to fill and adjust their context exactly as needed themselves, without you having to do it manually.
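To make "fill their context themselves" concrete, here's a toy version of the loop every coding agent runs. This is not any particular product's API; AskModel, RunTool, and the tool names are stubs I made up purely for illustration:

    // Toy agent loop. AskModel and RunTool are made-up stubs standing in for
    // a real LLM API and real tools (repo search, web fetch, and so on).
    using System;
    using System.Collections.Generic;

    record ToolCall(string Name, string Arg);

    class AgentLoop
    {
        // Stub LLM call: returns the next tool request, or null for "done".
        static ToolCall? AskModel(List<string> context) =>
            context.Count < 3 ? new ToolCall("search_repo", "Dispose") : null;

        // Stub tool executor: a real harness would actually grep the repo, etc.
        static string RunTool(ToolCall call) => call.Name switch
        {
            "search_repo" => $"3 matches for '{call.Arg}' in src/",
            "web_fetch"   => "page contents...",
            _             => "unknown tool",
        };

        static void Main()
        {
            var context = new List<string> { "user: do I need to call Dispose here?" };
            while (AskModel(context) is ToolCall call)    // the model picks the next tool
            {
                var result = RunTool(call);               // the harness executes it
                context.Add($"{call.Name} -> {result}");  // the result re-enters the context
            }
            Console.WriteLine(string.Join("\n", context));
        }
    }

The real versions are fancier (sub-agents, todo lists, and so on), but the core trick is just that loop: tool results flow back into the context, so the model assembles its own context instead of you hand-crafting a prompt.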

It's funny reading people plaintively saying, "I just don't get how people could possibly be getting use out of these things. I don't understand it." And then they immediately reveal that it's not the baffling mystery or existential question they're pretending it is for the purpose of this essay — the reason they don't understand it is that they literally don't understand the tech itself lol


Replies

raincole · today at 3:09 AM

Yeah, reminds me of this: https://news.ycombinator.com/item?id=46929505

> I have a source file of a few hundred lines implementing an algorithm that no LLM I've tried (and I've tried them all) is able to replicate, or even suggest, when prompted with the problem. Even with many follow up prompts and hints.

People making this kind of claim will never post the question and the prompts they tried, because if they did, everyone would know it's just that they don't know how to prompt.

martin-t · today at 1:55 AM

This just shows that the models (not AI, statistical models of text used without consent) are not that smart; it's the tooling around them that allows using these models as a heuristic for brute-force search of the solution space.

Just last week, I prompted (not asked, it is not sentient) Claude to generate (not tell me or find out, or any other anthropomorphization) an answer about whether I need to call Dispose on objects passed to me from two different libraries for industrial cameras. The cameras being industrial, most people using them don't post their code publicly, which means the models have poor statistical coverage of these topics.

The LLM generated a response that triggered the tooling around it to perform dozens of internet searches and then, based on my initial prompt, the search results, and lots of intermediate tokens ("thinking"), generated a reply saying that yes, I need to call Dispose in both cases.

It was phrased authoritatively and confidently.

So I tried it: one library segfaulted, the other threw an exception on a later call. I performed my own internet search (a single one) and immediately found documentation from one of the libraries clearly stating that I don't need to call Dispose. The other library, being much more poorly documented, didn't mention this explicitly but had examples which didn't call Dispose.
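To make the ownership rule concrete, here's a runnable toy version of what both libraries are apparently doing. Camera and CameraFrame are invented stand-ins for the unnamed SDKs, not their actual APIs:

    // Invented stand-in for the unnamed camera SDKs; it only illustrates the
    // general .NET rule: don't Dispose an IDisposable the library still owns.
    using System;

    sealed class CameraFrame : IDisposable
    {
        public byte[] Pixels { get; } = new byte[640 * 480];
        public bool Disposed { get; private set; }
        public void Dispose() => Disposed = true; // imagine this freeing a native buffer
    }

    sealed class Camera
    {
        private readonly CameraFrame _frame = new(); // library-owned, reused per grab

        public CameraFrame Grab() => _frame; // the caller borrows this; it does not own it

        public void GrabNext()
        {
            // the "exception on a later call" described above
            if (_frame.Disposed)
                throw new ObjectDisposedException(nameof(CameraFrame));
        }
    }

    class Demo
    {
        static void Main()
        {
            var cam = new Camera();
            var frame = cam.Grab();
            Console.WriteLine($"got {frame.Pixels.Length} bytes");

            // frame.Dispose();  // what the LLM's answer amounts to...
            cam.GrabNext();      // ...which would make this call throw
            Console.WriteLine("ok: the library owns the frame, so we never dispose it");
        }
    }

The heuristic both libraries seem to follow is the usual one: you dispose what you create; objects the library hands you are borrowed, and the library disposes them itself.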

I am sure if I used LLMs "properly" "agentically", then they would have triggered the tooling around them to build and execute the code, gotten the same results as me much faster, then equally authoritatively and confidently stated that I don't need to call Dispose.

This is not thinking. It's a form of automation but not thinking and not intelligence.

bitwize · today at 5:55 AM

(Valley Girl voice) Role-based prompts are like, so September 2025. Everybody is just using agents now. You don't get to sit with us at the cool kids' table until you learn how to prompt, loser.

ares623 · today at 3:28 AM

So I guess you could say, they're "holding it wrong"?
