I disagree with the "confidence trick" framing completely. My belief in this tech isn't based on marketing hype or someone telling me it's good – it's based on the cold reality of what I'm shipping daily. The productivity gains I'm seeing right now are unprecedented. Even a year ago this wouldn't have been possible; it really feels like an inflection point.
I'm seeing legitimate 10x gains because I'm not writing code anymore – I'm thinking about code and reading code. The AI facilitates both. For context: I'm maintaining a well-structured enterprise codebase (100k+ lines of Django). The reality is that my input is still critically valuable. My insights guide the LLM; my code review is the guardrail. The AI doesn't replace the engineer, it amplifies the engineer's intent.
Using Claude Code Opus 4.5 right now and it's insane. I love it. It's like being a writer after Gutenberg invented the printing press rather than the monk copying books by hand before it.
> It's like being a writer after Gutenberg invented the printing press rather than the monk copying books by hand before it.
That's not how book printing works, and I'd argue the monk could far more easily create new text and devise new interpretations – and they did, in the margins of books. It takes a long time to prepare a single print run, but scarcely longer to print 100 copies, which is where the value of the printing press comes from. It's not the ease of changing or producing large amounts of text, it's the ease of reproducing it – and since copy/paste exists, it's a very poor analogue in my opinion.
I'd also argue the 10x is subject to observer bias, since the subject and the observer are the same person. My experience at this point is that boilerplate is fine with LLMs – if that's all you write, good for you – but otherwise it will hardly speed anything up, as the code is the easy part.
This. By now I don’t understand how anyone can still argue in the abstract while it’s trivial to simply give it a try and collect cold, hard facts.
It’s like arguing that the piano in the room is out of tune and not bothering to walk over to the piano and hit its keys.
It's fine for a Django app that doesn't innovate and just follows the same patterns as the hundred already-solved problems it solves.
The line becomes a lot blurrier when you work on non-trivial problems.
A Django app is not particularly hard software; it's hardly software at all, more a conduit from database to screen and back, which has been basic software since the days of terminals. I'm not judging your job – if you get paid well for doing that, all power to you. I had a well-paying Laravel job at some point.
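To make that concrete, here's a hypothetical minimal slice of such an app – a model, a list view, a create view (names invented for illustration, not anyone's actual code). Nothing in it hasn't been written thousands of times, which is exactly why an LLM reproduces it effortlessly:

```python
# A hypothetical "database to screen and back" slice of a Django app.
# Model and view names are invented for illustration.
from django.db import models
from django.views.generic import CreateView, ListView


class Invoice(models.Model):
    customer = models.CharField(max_length=200)
    amount = models.DecimalField(max_digits=10, decimal_places=2)
    paid = models.BooleanField(default=False)


class InvoiceListView(ListView):
    # Database -> screen: render all invoices with a template.
    model = Invoice


class InvoiceCreateView(CreateView):
    # Screen -> database: persist a submitted form.
    model = Invoice
    fields = ["customer", "amount", "paid"]
    success_url = "/invoices/"
```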
What I'm raising, though, is that AI is not that useful for applications that aren't solving what has been solved 100 times before. Maybe some day it will reason well enough to anticipate and solve problems that don't exist yet, but it will always be inference over problems already solved.
Glad to hear you're enjoying it. Personally, I enjoy solving problems more than the end result.
> I'm seeing legitimate 10x gains...
Self-reports on this have been remarkably unreliable.
> My belief in this tech isn't based on marketing hype or someone telling me it's good – it's based on cold reality of what I'm shipping daily
Then why are half of the big tech companies using Microsoft Teams and sending emails with .docx files attached?
Of course marketing matters.
And of course the hard facts also matter, and I don't think anybody is saying that AI agents are purely marketing hype. But regardless, it is still interesting to take a step back and observe what marketing pressures we are subject to.
Are you actually reading the code? I've noticed most of the gains go away once you read the code the machine outputs. And sometimes I have to fix it by hand, and then the agent goes "Oh, you changed that file, let me fix it".
Are you also getting dumber? https://tech.co/news/another-study-ai-making-us-dumb
> I'm maintaining a well-structured enterprise codebase (100k+ lines Django)
How do you avoid this turning into spaghetti? Do you understand/read all the output?
You are speculating. You don’t know. You are not testing this technology – you are trusting it.
How do I know? Because I am testing it, and I see a lot of problems that you are not mentioning.
I don’t know if you’ve been conned or you are doing the conning. It’s at least one of those.
Even assuming all of what you said is true, none of it disproves the arguments in the article. You're talking about the technology; the article is about the marketing of the technology.
The LLM marketing exploits fear and sympathy. It pressures people into urgency. Those things can be shown, and have been shown. Whether or not the actual LLM-based tools genuinely help you has nothing to do with that.