This has convinced many non-programmers that they can program, but the results are consistently disastrous, because it still requires genuine expertise to spot the hallucinations.
I've been programming for 30+ years and am now a people manager. Claude Code has enabled me to code again, and I'm several times more productive than I ever was as an IC in the 2000s and 2010s. I suspect this person hasn't really tried the most recent generation; it is quite impressive and works very well if you do know what you are doing.
Isn't that what the author means?
"it still requires genuine expertise to spot the hallucinations"
"works very well if you do know what you are doing"
I have a journalist friend with 0 coding experience who has used ChatGPT to help them build tools to scrape data for their work. They run the code, report the errors, repeat, until something usable results. An agent would do an even better job. Current LLMs are pretty good at spotting their own hallucinations if they're given the ability to execute code.
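Concretely, that run-report-repeat loop is easy to automate. A minimal sketch, assuming a hypothetical ask_model_for_fix() standing in for the ChatGPT/agent call (everything else is plain Python):

    import subprocess
    import sys
    import tempfile

    def ask_model_for_fix(source: str, error_output: str) -> str:
        # Hypothetical: send the current script plus the traceback to the model
        # and get a revised script back. Plug in whatever LLM call you use.
        raise NotImplementedError

    def run_until_usable(source: str, max_rounds: int = 5) -> str:
        """Execute the generated script, feed any errors back to the model, repeat."""
        for _ in range(max_rounds):
            # Write the current candidate script to a temp file and run it.
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(source)
                path = f.name
            result = subprocess.run([sys.executable, path], capture_output=True, text=True)
            if result.returncode == 0:
                return source  # "something usable results"
            # Otherwise, hand the error output back and ask for a revision.
            source = ask_model_for_fix(source, result.stderr)
        raise RuntimeError("still failing after max_rounds attempts")

The point isn't the code itself; it's that feeding the model the actual error output each round is what lets it catch its own mistakes.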
The author seems to have a bias. The truth is that we _do not know_ what is going to happen. It's still too early to judge the economic impact of current technology - companies need time to understand how to use it. And research is still making progress. Scaling of the current paradigms (e.g. reasoning RL) could make the technology more useful and reliable. The enormous amount of investment could yield further breakthroughs. Or... not! Given the uncertainty, one should be both appropriately invested and diversified.
Last week I gave Antigravity a try, with the latest models and all. It generated subpar code that did the job, very quickly for sure, but no one would ever have accepted that code in a PR. It took me 10x more time to clean it up than it took Gemini to shit it out.
The only thing I learned is that 90% of devs are code monkeys with very low expectations, which basically amount to "if it compiles and seems to work, it's good enough for me".
For toy and low-effort coding it works fantastically. I can smash out changes and PRs incredibly quickly, and they're mostly correct. However, certain problem domains and tough problems cause it to spin its wheels worse than a junior programmer, especially if the back-and-forth troubleshooting runs longer than one context compaction. Then it can forget what it has already tried and goes back to square one (it may know that it tried something, but it won't know the exact details).
"...and works very well if you do know what you are doing"
That's the issue. AI coding agents are only as good as the dev behind the prompt. It works for you because you have an actual background in software engineering, of which coding is just one part. AI coding agents can't save the inexperienced from themselves; they just help amateurs shoot themselves in the foot faster while convincing them they're marksmen.
It seems to work well if you DON'T really know what you are doing, because you can't spot the issues.
If you know what you are doing, it works kind of mid. You see how anything more than a prototype will create lots of issues in the long run.
Dunning-Kruger effect in action.
If you’ve been programming for 30+ years, you definitely don’t fall under the category of “non-programmers”.
You have decades upon decades of experience in how to approach software development and solve problems. You know the right questions to ask.
The actual non-programmers I see on Reddit are having discussions about topics such as “I don’t believe that technical debt is a real thing” and “how can I go back in time if Claude Code destroyed my code”.