As long as AI (genAI, LLMs, whatever you call the current tech) is perceived not as a "bicycle for the mind", a tool that takes your own skills to the next level, but as a commodity to be exploited by giant corporations whose existence is based on maximizing profit regardless of virtue or dignity (a basic set of ethics would include, for example, not burning books after you scan them to feed your LLM, like Anthropic), it is really hard to justify the current state of AI.
Once you understand that the sole winner in this hype is whoever brutally scrapes every bit of data, real-time or static, and then refines it to hand back to you without your involvement in the process (a.k.a. learning), you'll come to see that the current AI is by nature hugely unfavorable to mental progress...
I read this and thought, "are we using the same software?" For me, I have turned the corner where I barely hand-edit anything. Most of the tasks I take on are nearly one-shot successful, simply pointing Claude Code at a ticket URL. I feel like I'm barely scratching the surface of what's possible.
I'm not saying this is perfect or unproblematic. Far from it. But I do think that shops that invest in this way of working are going to vastly outproduce ones that don't.
LLMs are the first technology where everyone literally has a different experience. There are so many degrees of freedom in how you prompt. I actually believe that people's expectations and biases tend to correlate with the outcomes they experience. People who approach it with optimism will be more likely to problem-solve the speed bumps that pop up. And the speed bumps are often things that can mostly be addressed systemically, with tooling and configuration.
> *I find it hard to justify the value of investing so much of my time perfecting the art of asking a machine to write what I could do perfectly well in less time than it takes to hone the prompt.*
This sums up my interactions with LLMs
>You’re supposed to craft lengthy prompts that massage the AI assistant’s apparently fragile ego by telling it “you are an expert in distributed systems”
This isn't GPT-3 anymore. We've added fine-tuning and other post-training techniques that make this unnecessary.
I’ve reached a point where I stop reading whenever I see a post that mentions “one-shot.” It's becoming increasingly obvious that many platforms are riddled with bots or incompetent individuals trying to convince others that AI is some kind of silver bullet.
Nice name you got there localghost ;p
The POV of the author I understand quite well, because it was mine. It's really only in the last 6 months or so that my perspective has changed. The author still sounds like they are in the "black box that I toss wishes into and is dumb as fuck" phase. It also sounds like they are resistant to learning how to make the most of it, which is a shame. If you take the time to learn the techniques that make this stuff tick, you'll be amazed at what it can do. I mean, maybe I am a total idiot and this stuff will get good enough that I am no longer necessary. Right now, though? I see it as an augmentation. An amplification of me and what I am capable of.
"It’s very unsettling, then, to find myself feeling like I’m in danger of being left behind - like I’m missing something. As much as I don’t like it, so many people have started going so hard on LLM-generated code in a way that I just can’t wrap my head around.
...
I’ve been using Copilot - and more recently Claude - as a sort of “spicy autocomplete” and occasional debugging assistant for some time, but any time I try to get it to do anything remotely clever, it completely shits the bed. Don’t get me wrong, I know that a large part of this is me holding it wrong, but I find it hard to justify the value of investing so much of my time perfecting the art of asking a machine to write what I could do perfectly well in less time than it takes to hone the prompt.
You’ve got to give it enough context - but not too much or it gets overloaded. You’re supposed to craft lengthy prompts that massage the AI assistant’s apparently fragile ego by telling it “you are an expert in distributed systems” as if it were an insecure, mediocre software developer.
Or I could just write the damn code in less time than all of this takes to get working."
Well, there's your problem. Nobody does role-based prompts anymore, and the entire point of coding agents is that they search your codebase, run internet searches and web fetches, launch sub-agents, and keep todo lists, filling and adjusting their context exactly as needed themselves, without you having to do it manually.
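If it helps, here's roughly the loop these tools run. Everything below (the tool names, the model interface, the message format) is made up purely for illustration; it's a sketch of the pattern, not any particular vendor's API.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical message/tool-call types, just to make the sketch concrete.
@dataclass
class ToolCall:
    name: str
    args: dict

@dataclass
class Reply:
    text: str
    tool_call: Optional[ToolCall] = None

def run_agent(task: str, model, tools: dict[str, Callable], max_steps: int = 50) -> str:
    # The agent maintains its own history: this is the context it fills
    # and adjusts itself, instead of you hand-crafting one big prompt.
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply: Reply = model.complete(history)       # ask the model for its next move
        history.append({"role": "assistant", "content": reply.text})
        if reply.tool_call is None:                  # no tool requested -> it thinks it's done
            return reply.text
        # Run the requested tool (grep the repo, fetch a URL, edit a file,
        # spawn a sub-agent...) and feed the result back into the context.
        result = tools[reply.tool_call.name](**reply.tool_call.args)
        history.append({"role": "tool", "content": str(result)})
    return "stopped after max_steps"

# Toy stand-in "model" so the sketch runs end-to-end: it asks for one
# (fake) repo search, then declares itself done.
class ToyModel:
    def __init__(self):
        self.turn = 0
    def complete(self, history):
        self.turn += 1
        if self.turn == 1:
            return Reply("Let me look around.", ToolCall("search_repo", {"query": "TODO"}))
        return Reply("Done: found the TODOs.")

if __name__ == "__main__":
    tools = {"search_repo": lambda query: f"(pretend grep results for {query!r})"}
    print(run_agent("List the TODOs in this repo", ToyModel(), tools))
```

The point is that the model itself decides what to pull into its context next; you hand it a task, not a lovingly hand-tuned prompt.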
It's funny reading people plaintively saying, "I just don't get how people could possibly be getting use out of these things. I don't understand it," and then immediately revealing that it's not the baffling mystery or existential question they're pretending it is for the purposes of the essay: the reason they don't understand it is that they literally don't understand the tech itself lol
There are a couple of news stories doing the rounds at the moment which point to the fact that AI isn't "there yet":
1. Microsoft's announcement of cutting their copilot products sales targets[0]
2. Moltbook's security issues[1] after being "vibe coded" into life
Which leaves the undeniable conclusion: the vast majority (seriously) distrusts AI much more than we're led to believe, and with good reason.
Thinking (as a SWE) is still very much the most important skill in SWE, and relying on AI has limitations.
For me, AI is a great tool for helping me discover ideas I had not previously thought of, and it's helpful for boilerplate, but it still requires me to understand what's being suggested and even to push back with my own ideas.
[0] https://arstechnica.com/ai/2025/12/microsoft-slashes-ai-sale...
[1] https://www.reuters.com/legal/litigation/moltbook-social-med...
I hardly write code anymore myself and am a heavy user of Claude Code.
One of the things I’m struggling to come to terms with is the “don’t commit code that you don’t understand” thing.
This makes sense; however, it's a thorn in my side, and if I'm honest I've not been able to come up with a compelling answer to it yet.
I agree with the sentiment. But in practice it only works for folks who have become domain experts.
My pain point is this: it's fine to only review if you've put in the work of writing lots of code over your career.
But, what’s the game plan in 10 years if we’re all reviewing code?
I know you learn from reviewing, but there’s no doubt that humans learn from writing code, failing, and rewriting.
So…this leaves me in a weird place. Do we move passively into a future where we don't truly, deeply understand the code anymore because we don't write it? Where we leave it up to the AI to handle the lower-level details and implicitly trust it so we can focus on delivering value further up the chain? What happens to developers in 10-20 years?
I don’t know but I’m not so convinced that we’re going to be able to review AI code so well when we’ve lost the skill to write it ourselves.
Every one of these posts and most of the comments on them could be written by an LLM. Nobody says anything new. Nobody has any original thoughts. People make incredibly broad statements and make fundamental attribution errors.
In fact, most LLMs would do a better job than most commenters on HN.
Outsourcing our thinking is going to be the stupidest thing humans ever did, and we won't even be smart enough to understand that this is the case.
The best part about this whole debate is that we don't have to wait years and years for one side to be proven definitively right. We will know beyond a shadow of a doubt which side is right by this time next year. If agentic coding has not progressed any further by then, we will know. On the other hand, if coding agents are 4x better than they are today, there will be a deluge of software online, the number of unemployed software engineers will have skyrocketed, and HN will be swamped by perma-doomers.
> I’ve been using Copilot - and more recently Claude - as a sort of “spicy autocomplete” and occasional debugging assistant for some time, but any time I try to get it to do anything remotely clever, it completely shits the bed.
This seems like a really disingenuous statement. If Claude can write an entire C compiler that is able to compile the Linux kernel, I think it has already cleared a threshold for "cleverness" that would have been unimaginable a few years ago.
"Hammock driven development"'s force multiplying projected to increase steadily on, as the coding becomes less the pressure point.
I can't help but draw parallels to the systems programmers who would scoff at people getting excited over CSS and JavaScript features. "Just write the code yourself! There is nothing new here! Just think!"
The point of programming is to automate reasoning. Don't become a reactionary just because your skills got got. The market is never wrong; even if there is a correction, in 20 years we'll see Nvidia with a 10T market cap, just like every other correction (AT&T, NTT).
Programmers for some reason love to be told what to do. First thing in the morning they look for someone else to tell them how to do it, how to test, how to validate.
Why do it yourself, the way you want to do it, when you could just fall back to mediocrity and do it the way everybody else does?
Why think when you can be told what to do?
Why have intercourse with your wife when you can let someone else do it instead? This is the typical LLM user mentality.
Maybe I don't understand it correctly, but to me this reads like the author isn't actually using AI agents. I don't talk or write prompts anymore. I write tasks and let a couple of AI agents complete those tasks, exactly how I'd distribute tasks to a human. The AI code is of varying quality and they certainly aren't great at computer science (at least not yet), but it's not like they write worse code than some actual humans would.
I like to say that you don't need computer science to write software, until you do. The thing is that a lot of software in the organisations I've worked in doesn't actually need computer science. I've seen horrible JavaScript code on the back-end live a full lifecycle of 5+ years without needing much maintenance, if any, and be fine. It could probably have been more efficient, but compute is so cheap that it never really mattered. Of course, I've also seen inefficient software or errors cost us a lot of money, when our solar plants didn't output what they were supposed to. I'd let an AI write one of those things any day.
Hell, I did recently. We had an old JavaScript service which was doing something with the HubSpot API. I say something because I never really found out what it was. Basically, HubSpot sunset v1 of their API, and before the issue arrived at my table my colleagues had figured out that that was the problem. I didn't really have the time to fix it, so when I saw what a mess the JavaScript was and realized it would take me a few hours just to figure out what it even did... well... I told my AI agent running on our company framework to fix it. It did so in 5-10 minutes with a single correction needed, and it improved the JavaScript quite a bit along the way, adding types to everything. I barely even got out of my flow to make it happen. So far it's run without any issues for a month. I was frankly completely unnecessary in this process. The only reason it was me who fired up the AI is that the people who sent me the task haven't yet adopted AI agents.
That being said... AI agents are a major security risk and need to be handled accordingly.
> I think it’s important to highlight at this stage that I am not, in fact, “anti-LLM”. I’m anti-the branding of it as “artificial intelligence”, because it’s not intelligent. It’s a form of machine learning.
It's a bit weird to be against the use of the phrase "artificial intelligence" and not "machine learning". Is it possible to learn without intelligence? Methinks the author is a bit triggered by the term "intelligence" at a base primal level ("machines can't think!").
> “Generative AI” is just a very good Markov chain that people expect far too much from.
The author of this post doesn't know the basics of how LLMs work. The whole reason LLMs work so well is that they condition on the entire context window when predicting each token, not just on the previous word or two the way the simple Markov chains people have in mind do.
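Roughly, the difference (notation mine, purely to illustrate):

```latex
% First-order Markov chain: the next token depends only on the previous one.
P(x_t \mid x_1, \ldots, x_{t-1}) = P(x_t \mid x_{t-1})

% Autoregressive LLM: the next token is conditioned on the entire
% context window, which today can be hundreds of thousands of tokens.
P(x_t \mid x_1, \ldots, x_{t-1}) = f_\theta(x_1, \ldots, x_{t-1})
```

You could pedantically call an LLM a very-high-order Markov chain over entire contexts, but that's a long way from the word-salad n-gram generator the phrase "a very good Markov chain" evokes.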
Highlighting this as a good thing is just... not great
> Code you did not write is code you do not understand
> You cannot maintain code you do not understand
We need no AI for this one: if I could only maintain code I wrote, I'd have to work alone. Whether I am reviewing code written by an AI or by a human is irrelevant here. The rest of the argument, that we trust the human contributor we know more, is itself a smell in your development process: this is how we get cliques, and how people come to see repos as hostile, because there are clear differences in how well-known contributors are treated, even though they are just as capable of writing something bad as anyone else. If there's anything I have seen in code review processes, regardless of where I've worked over the last couple of decades, it's that visual inspection rarely gets us anywhere, and no organization is willing to dedicate the time to double-check anything. Bugs happily slip through inspection even when the organization commits to extreme programming: full-time TDD and full pairing.
There is a real question of how fast we can merge code into a project safely, regardless of who authored the PRs: a systematic approach to credible, reliable development. But the answer has very little to do with what the article's author is saying, and nothing to do with whether a contribution was made by an AI, a human, or a golden retriever with access to a keyboard and a lot of hope.