> And then, inevitably, comes the character evaluation, which goes something like this:
I saw a version of this yesterday where a commenter framed LLM-skepticism as a disappointing lack of "hacker" drive and ethos that should be applied to making "AI" toolchains work.
As you might guess, I disagreed: the "hacker" is not driven just by novelty in problems to solve, but by wanting to understand them on more than a surface layer. Messing with kludgy things until they somehow work is always a part of software engineering... but the motive and payoff come from knowing how things work, and perceiving how they could work better.
What I "fear" from LLMs-in-coding is that they will provide an unlimited flow of "mess around until it works" drudgery tasks with none of the upside. The human role will be hammering at problems which don't really have a "root cause" (except in a stochastic sense) and for which there is never any permanent or clever fix.
Would we say someone is "not really an artist" just because they don't want to spend their days reviewing generated photos for extra fingers, circling them, and hitting the "redo" button?
Hearing people on tech twitter say that LLMs always produce better code than they do by hand was pretty enlightening for me.
LLMs can produce better code for languages and domains I’m not proficient in, at a much faster rate, but damn it’s rare I look at LLM output and don’t spot something I’d do measurably better.
These things are average text generation machines. Yes you can improve the output quality by writing a good prompt that activates the right weights, getting you higher quality output. But if you’re seeing output that is consistently better than what you produce by hand, you’re probably just below average at programming. And yes, it matters sometimes. Look at the number of software bugs we’re all subjected to.
And let’s not forget that code is a liability. Utilizing code that was “cheap” to generate has a cost, which I’m sure will be the subject of much conversation in the near future.
The anti-LLM side seems much more insecure. Pro-LLM influencers are sometimes corny, but it's like any other influencer: they are incentivized to make everything sound exciting to get clicks. Nobody was complaining about 3D printer influencers raving about how printing replacement dishwasher parts was going to change everything.
LLMs have also become kind of a political issue, except only the "anti" side even really cares about it. Given that using and prompting them is very much a garbage in/garbage out scenario, people let their social and political biases cloud their usage, and instead of helping it succeed, they try to collect "gotcha" moments, which doesn't reflect the workflow of someone using an LLM productively.
"LLM evangelists - are you willing to admit that you just might not be that good at programming computers?"
No.
I have been through this before (wherever/whenever the money seems to flow): databases are bad, you should use Couchbase, etc. I was a DB expert; the people advocating weren't, but they were very loud. The many, many evangelistic web development alternatives that come and go, all very loud. Now the latest is LLMs. Like Couchbase et al. they have their place, but the evangelists are not having any of it.
I work a lot with doctors (writing software for them), and I am very envious of their system of specialisation: this person is a specialist in such-and-such, he knows about it, listen to him. In IT it seems whoever talks the loudest gets a podium, and separating the wheat from the chaff is difficult. One day we will have a system of qualifications, I hope, but it seems a long way off.
"I am still willing to admit I am wrong. That I'm not holding the GPS properly. That navigating with real-time satellite data is its own skill and I have not spent enough time with it. I have changed how I get around before, and I'm sure I will do so again.
Map-reading evangelists, are you willing to admit that you just might not be that good at driving a car? Maybe you once were. Maybe you never were."
How much longer until we get to just... let the results speak for themselves and stop relitigating an open question with no clear answer?
We're well past ad nauseam now. Let's talk about anything else.
> That doing this is its own skill and I have not spent enough time with it.
Yeah, this.
I sucked (still suck?) at it too. I spent countless hours correcting them, throwing away hours of "work" they made, and even had them nuke the workspace a couple of times (thankfully, they were siloed). I still feel like I'm wasting too much time way too often, and I'm constantly trying new things.
But I always thought I could learn and improve with this tool and its associated ecosystem, just as with the other programming tools, languages, and frameworks I've learned over the years.
The tech industry seems to attract people who feel personally attacked when someone else makes different choices than they do.
"Why are you using Go? Rust is best! You should be using that!" "Don't use AWS CDK, use Terraform! Don't you know anything?"
5 anti-AI posts on the home page of Hacker News…yeah, plenty of insecure evangelism amongst the skeptics, too.
> But doing "prompt-driven development" or "vibe coding" with an Agentic LLM was an incredibly disapointing experience for me. It required an immense amount of baby sitting, for small code changes, made slowly, which were often wrong. All the while I sat there feeling dumber and dumber, as my tokens drained away.
Yeah, I find they are useful for large sweeping changes, introducing new features and such, mostly because they write a lot of the boilerplate, granted with some errors. But for small fiddly changes they suck; you will have a much easier time making those changes yourself.
"LLM evangelists - are you willing to admit that you just might not be that good at programming computers?"
The people who were the best at something won't necessarily be the best at a new paradigm. Unlearning some principles and learning new ones can be a painful exercise for some masters.
Military history shows that the masters of a new wave are not necessarily the masters of the previous one; we have seen the rise and fall of several civilizations, from the Romans to the Greeks, that were too sure of their old methods, old military equipment, and strategy.
I feel no strong need to convince others. I've been seeing major productivity boosts for myself and others since Sonnet 3.5. Maybe for certain kinds of projects and use cases it's less good, maybe they're not using it well; I dunno. I do think a lot of these people probably will be left behind if they don't adopt it within the next 3 years, but that's not really my problem.
To be fair to both sides, it really is hard to tell if we're in the world of
"you'll be left behind if you don't learn crypto" with crypto
or
"you'll be left behind if you don't learn how to drive" with cars
One of those statements is made in good faith, and the other is made out of insecurity. But we'll probably only really be able to tell looking backwards.
I see a lot more insecurity from people who refuse to use AI coding tools. My teammates and I use this stuff all the time, and it's not making a statement, it's just an easier path sometimes.
This is also bad evangelism, just on the opposite side.
Just because LLMs don't work for you outside of vibe-coding, doesn't mean it's the same for everyone.
> LLM evangelists - are you willing to admit that you just might not be that good at programming computers?
Productive usage of LLMs in large-scale projects becomes viable with excellent engineering (tests, patterns, documentation, clean code), so perhaps that question should also be asked of yourself.
"I find LLMs useful as a sort of digital clerk - searching the web for me, finding documentation, looking up algorithms. I even find them useful1 in a limited coding capacity; with a small context and clear guidelines."
I am curious why the author doesn't think this saves them time (i.e. makes them more productive).
I never had terribly high output as a programmer. I certainly think LLMs have helped increase the amount of code that I can write, net total, in a year. Not to superhuman levels or even super-me levels, just me++.
But, I think the total time spent producing code has gone down to a fraction and has allowed me more time to spend thinking about what my code is meant to solve.
I wonder about two things:
1. Maybe added productivity isn't going to be found in total code produced, because there is a limit on how much useful code can be produced, set by external factors.
2. Do some devs look at the output of an LLM and "get the ick" because they didn't write it, and because LLM code is often more verbose and "ugly", even though it may work? (This is a total supposition and not an accusation in any way. I also understand that poorly thought out, overly verbose code comes with problems over time.)
I tend to share the sentiment of the author.
I think that coding assistants tend to be quite good as long as what you ask for is close to the training data.
Anything novel and the quality falls off rapidly.
So, if you are like Antirez and ask for a linenoise improvement that has already been seen many times by the LLM at training time, the result will seem magical, but that is largely an illusion, IMO.
Evangelism of a new technique and tool for doing work is insecure? I agree it’s been oversold but it’s natural to be pretty excited about the tech.
I think it goes further than this. Some people - some developers, even - do not _like_ programming computers. In fact, many hate it. Those people welcome the LLM agent stuff because it delivers the end product without going through the necessary pain (from their pov) of programming.
I don't get who is saying this dreaded "you'll be left behind." The only place I see that is from straight-up slop accounts in the Twitter algo feed. Surely you're not letting those people make you feel bad.
> You see a lot of accomplished, prominent developers claiming they are more productive without it.
You also see a lot of accomplished, prominent developers claiming they are more productive with it, so I don't know what this is supposed to prove. The inverse argument is just as easy to make and just as spurious.
I remember a similarly aggressive evangelism about self-driving cars several years ago. I suppose it's not so pleasant, when you feel like you've seen a prophetic glimpse of a brilliant future, to deal with skeptics who don't understand your vision and refuse to give your predictions the credit they deserve.
Of course we need a few people to get wildly overexcited about new possibilities, so they will go make all the early mistakes which show the rest of us what the new thing can and cannot actually do; likewise, we need most of us to feel skeptical and stick to what already works, so we don't all run off a cliff together by mistake.
> It required an immense amount of babysitting, for small code changes, made slowly, which were often wrong.
Can’t speak for others but that’s not what I’d understand (or do) as vibecoding. If you’re babysitting it every inch of the way then yeah sure I can see how it might not be productive relative to doing it yourself.
If you’re constantly fighting the LLM because you have a very specific notion of what each line should look like it won’t be a good time.
Better to spec out some assumptions, some desired outcomes, the tech to be used, maybe the core data structure, ask the LLM what else it needs to connect the dots, add that, and then let it go.
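For what it's worth, here is a minimal sketch of what I mean by that kind of spec-first handoff. The file name, the task, and every field in it are made up purely for illustration; it's not any particular tool's required format.

```python
# Hypothetical illustration of the "spec first, then let it go" workflow.
# The file name, the task, and every field below are invented for this example.
from pathlib import Path

spec = """\
# Task: add rate limiting to the public API

## Assumptions
- Requests are identified by API key, not by IP.
- Limits can live in memory for now; persistence is out of scope.

## Desired outcome
- Return 429 with a Retry-After header once a key exceeds 100 requests/minute.
- Existing tests keep passing; new tests cover the limit boundary.

## Tech / core data structure
- Use the existing middleware layer; no new dependencies.
- Per-key sliding window of request timestamps.

## Open questions for the agent
- What else do you need to connect the dots? Ask before writing code.
"""

# Write the spec somewhere the agent can read it, then hand off the task.
Path("SPEC.md").write_text(spec)
print("Spec written to SPEC.md; point the agent at it and let it go.")
```

The point is just that the agent gets the constraints and the definition of done up front, instead of me steering every line.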
Whatever standard of code quality you want, if it hasn't reached it yet, it will get there very soon.
I like OP's representation, but I feel like a lot of people aren't saying 'LLMs are the bomb dot com _right now_' (though some are); rather, the trend is evident: these things will keep getting better, and the writing is on the wall.
Personally I think the rate of improvement will plateau: in my experience, software inevitably becomes less about tech and more about the interpersonal human soup of negotiating requirements, needs, contradictions, and feedback loops, a lot of which is signal not accessible to a text-in, text-out engine.
I just want some externally verifiable numbers. If AI is a 10x improvement, we should be seeing new operating systems. If it’s 5x we should see new game engines. If it’s 2x we should see massive amounts of new features for popular open source projects.
If it’s less than that, then it’s more like adding syntax highlighting or moving from Java to Ruby on Rails. Both of those were nice, but people weren’t breathlessly shouting about being left behind.
If the AI-maximalist gospel were true, we would see a company raise a $10M Series A (or seed, these days) and spend 60% on AI, 30% on humans, and 10% on operations. I can bet you my sole penny that's not happening, so we know someone is trying to sell us a polished turd as a diamond.
This is a fun piece to dissect because it's self-aware about being uncharitable, yet still commits the very sin it's criticizing.
The author's central complaint is that LLM evangelists dismiss skeptics with psychological speculation ("you're afraid of being irrelevant"). Their response? Psychological speculation ("you're projecting insecurity about your coding skills").
This is tu quoque dressed up as insight. Fighting unfounded psychoanalysis with unfounded psychoanalysis doesn't refute anything. It just levels the playing field of bad arguments.
The author gestures at this with "I am still willing to admit I am wrong" but the bulk of the piece is vibes-based counter-psychoanalysis, not engagement with evidence.
It's a well-written "no u" that mistakes self-awareness ("I know this isn't charitable") for self-correction.
Until a few days or weeks ago, LLMs for coding were more hype than real code production. That is gone now. They have clearly leveled up, and things will not be the same anymore. And of course this is not just about coding; this is just the beginning. A month ago it really seemed that the models were hitting a complexity wall and that the architecture would need to be improved. Not anymore.
Whatever your personal feeling, judgement, or conviction on this matter; do not dismiss the other side because of a couple wingnuts saying crazy stuff (you can find them on both extremes as well as the middle). Stay curious as to why people have their own conviction, and seek the truth!
What I really like about LLMs is that you can do pair programming without having to deal with humans.
> You see a lot of accomplished, prominent developers claiming they are more productive without it.
Demonstrably impossible if you're actually properly trying to use them in non-esoteric domains. I challenge anyone to very honestly showcase a non-esoteric domain in which Opus 4.5 does not make even the most experienced developer more productive.
I see how cool and powerful they're getting, but I agree there is a huge insecurity element in the evangelism. Everyone wants to be seen as the one who will get a seat running the LLMs when the music stops playing.
I see it only as a threat to those who have a deep hook into their role as a SWE.
If as a SWE you see the oncoming change and adapt to it, no issue.
If as a SWE you see the enablement of LLMs as an existential threat, then you will find many issues, fail to adapt, and run into all kinds of problems as a result.
Your experience that "AI coding is bad" will match your belief that "AI coding is bad".
ITT: A bunch of people who think they're god's gift to Earth.
See the same thing in the bitcoin space. If you ask them to explain the value to you, you're a moronic, behind-the-times, luddite boomer who just doesn't understand. Not to mention poor!
I'll remain skeptical and let the technology speak for itself, if it ever does.
I'm dubbing this "podcast driven development" because so many of them aren't building things to build things, they just want to _have built something_ so they can go on podcasts and talk about how great it is.
For what it's worth, I think most of them are genuine when they say they're seeing 10X gains; they just went from, like, a 0.01X engineer (centi-SWE) to a 0.1X engineer (deci-SWE).
I somewhat agree with this poster. However, I think the unfortunate reality of programming for money is that a mediocre programmer that pumps out millions of lines of slop that seems to drive the business forward and manages to hide disastrous bugs until after the contract / promotion cycle is over will get further ahead than the more competent programmer that delivers better, less buggy, less spaghetti code.
"LLM evangelists - are you willing to admit that you just might not be that good at programming computers? Maybe you once were. Maybe you never were."
lol. Is this supposed to be some sort of "gotcha"? Yes? Maybe I am a really shitty programmer who always just wanted to hack things together. What it has allowed me to do is prevent burnout to some extent, outsource the "boring" parts, and get back to building things I like.
Also getting tired of these extreme takes, but whatever, it's #1 so mission accomplished. LLMs are neither this nor that. Just another tool in the toolbox, one that has been frustrating in some contexts and a godsend in others, and part of the process is figuring out where it excels and where it doesn't.
> It's projection. Their evangelism is born of insecurity.
It's fear, but of a different kind. Those who are most aggressive and pushy about it are those who invested too much [someone else's] money in it and are scared that angry investors will come for their hides when reality doesn't match their expectations.
Damn, even reading that title shows how dumb I am!!
I don't mind weighing in as someone who could fairly be categorized as both an LLM evangelist and "not an experienced dev".
It's a lot like why I've been bullish on Tesla's approach to FSD even as someone who owned an AP1 vehicle that objectively was NOT "self-driving" in any sense of the word: it's less about where the technology is right now, or even how fast it's currently improving, and more about the fact that the technology is now in place to accelerate its own rate of improvement, paired with the reality that we're observing exactly that. Like FSD V12 to V14, the last several years in AI can only be characterized as an unprecedented rate of improvement, very much like scientific advancement throughout human society. It took us millions of years to evolve into humans. Hundreds of thousands to develop language. Tens of thousands to develop writing. Thousands to develop the printing press. Hundreds to develop typewriters. Decades to develop computers. Years to go from the 8086 to the modern workstations of today. The time horizon of tasks AI agents can reliably perform is now doubling every 4 months, per METR.
Do frontier models know more than human experts in all domains right now? Absolutely not. But they already know far more than any individual human expert outside that human's domain(s) of expertise.
I've been passionate about technology for nearly two decades, working in the technology industry for close to a decade. I'm a security guy, not a dev. I have over half a dozen CVEs and countless private vuln disclosures. I can and do write code myself - I've been writing scripts for various network tasks for a decade before ChatGPT ever came into existence. That said, it absolutely is a better dev than me. But specialized harnesses paired with frontier models are also better security engineers than I am, dollar for dollar versus my cost. They're better pentesters than me, for the relative costs. These statements were not true at all without accounting for cost two years ago. Two years from now, I am fully expecting them to just be outright better at security engineering, pentesting, SCA than I am, without accounting for cost, yet I also expect they will cost less then than they do now.
A year ago, OpenAI's o1 was still almost brand new, and test-time compute was this revolutionary new idea. Everyone thought you needed tens of billions to train a model as good as o1; it was still a week before DeepSeek released R1.
Now, o1's price/performance seems like a distant bad dream. I had always joked that one quarter in tech saw as much change as like 1 year in "the real world". For AI, it feels more like we're seeing more change every month than we do every year in "the real world", and I'd bet on that accelerating, too.
I don't think experienced devs still preferring to architect and write code themselves are coping at all. I still have to fix bugs in AI-generated code myself. But I do think it's short-sighted not to look at the trajectory and see the writing on the wall over the next 5 years.
Stanford's $18/hr pentester that outperforms 9/10 humans should have every pentester figuring out what they're going to be doing when it doubles in performance and halves in cost again over the next year, just like human Uber drivers should be reading Motortrend's (historically a vocal critic of Tesla and FSD) 2026 Best Driver Assistance System and figuring out what they're going to do next. Experienced devs should be looking at how quickly we came from text-davinci-003 to Opus 4.5 and considering what their economic utility will look like in 2030.
mitchellh talked recently about how he vibe-coded the one-off visualization code for a blog post of his, and he seems like a fairly good programmer.
Can we just agree that both the pro- and anti-LLM factions mostly contribute noise, and go back to discussing actual achievements?
It's trivial to share coding sessions, be they horrific or great. Without those, you're hot air on the internet, independent of whatever specific opinions on LLMs you voice.
This doesn't feel completely right.
Simon Willison (known for Django) has been doing a lot of LLM evangelism on his blog these days. Antirez (Redis) wrote a blog post recently with the same vibe.
I doubt they are not good programmers. They are probably better than most of us, and I doubt they feel insecure because of the LLMs. Either I'm wrong, or there's something more to this.
edit: to clarify, I'm not saying Simon and Antirez are part of the hostile LLM evangelists the article criticizes. Although the article does generalize to all LLM evangelists at least in some parts and Simon did react to this here. For these reasons, I haven't ruled him out as a target of this article, at least partly.