I have to be honest: I’ve written a lot of pro-AI / dark-software articles, and I think I’m due an update, because it worked great, till it didn’t.
I could write a lot about what I’ve tried and learnt, but so far this article is a very based take and matches my experience.
I definitely suffered under the unnecessary complexity, and at moments wished I’d never used AI. Even with Opus 4.6 I could feel how it got confused and couldn’t really understand the business objectives. It became way faster to jump into the code, clean it up, and fix it myself. I’m not sure yet where the line is, or where it will end up.
Bold claim, that writing code was never the bottleneck. It may not be the only bottleneck, but we are conveniently moving the goalposts now that there is a more convenient mechanism and our profession is under threat.
I recently started using AI for personal projects, and I find it works really well for 'spike' type tasks, where what you're trying to do is grow your knowledge about a particular domain. It's less good at discovering the correct way of doing things once you've decided on a path forward, but still more useful than combing through API docs and manpages yourself.
It might not actually deliver working things all that much faster than I could, but I don't feel mentally drained by the process either. I used to spend a lot of time reading architecture docs in order to understand available solutions, now I can usually get a sense for what I need to know just from asking ChatGPT how certain things might be done using X tool.
In the last few days, I've stood up syncthing, tailscale with a headscale control plane, and started making working indicators and strategies in Pine Script, TradingView's scripting language. Things I had no energy for, or that would have been week-long projects, now take hours or a day or so. AI's strengths synergize really well with how humans want to think.
I just paste an error message in, and ChatGPT figures out what I'm trying to do from context, then gives me not just a possible resolution, but also why the error is happening. The latter is just as useful as the former. It's wrong a lot, but it's easy to suss out.
There is a saying that you need to write an essay three times: the first draft is puked out, the second is decent, and the third is good.
It’s quite similar with code, and with code less is more, at least for tries 1 and 2.
Speak for yourself, I have never thrown away code at this rate in my entire career. I couldn't keep up this pace without AI codegen.
I think there's some Goldilocks speed limit for using these tools relative to your skillset. When you're building, you forget that you're also learning, which is why I actually favour some AI code editors that aren't as powerful: they get me to stop and think.
A well considered article, despite the author categorizing it as a rant. I appreciate the appendix quotations, as well as the acknowledgement that they are appeals to authority.
Whilst the author clearly has a belief that falls down on one side of the debate, I hope folks can engage with the "Should we abandon everything we know" question, which I think is the crux of things. Evidence that AI-driven-development is a valuable paradigm shift is thin on the ground, and we've done paradigm shifts before which did not really work out, despite massive support for them at the time. (Object-Oriented-Everything, Scrum, etc.)
Hey, author here. Never thought I'd see my pokey little blog on HN and all that.
Happy to discuss further.
> Humans and LLMs both share a fundamental limitation. Humans have a working memory, and LLMs have a context limit.
But there’s a more important difference: I can’t spin up 20 decent human programmers from my terminal.
The argument that "code was never the bottleneck" is genuinely appealing, but it hasn’t matched my experience at all. I’m getting through dramatically more work now. This is true for my colleagues too.
My non-technical niece recently built a pretty solid niche app with AI tools. That would have been inconceivable a few years ago.
The collaboration aspect is what many AI enthusiasts miss. As humans, our success is dependent on our ability to collaborate with others. You may believe that AI could replace many individual software engineers, but if it does so at the expense of harming collaboration, it’s a massive loss. AI tools are simply not good at collaborating. When you add many humans to a project, the result becomes greater than the sum of its parts. When you add many AI tools to a project, it quickly becomes a muddled mess.
honestly the thing that trips me up is when codegen makes me feel productive but I haven't actually validated anything. like I'll have claude write a whole data pipeline in 20 minutes and then spend 2 hours debugging edge cases it didn't think about because it doesn't know our data
the speed is real but it mostly just moves where I spend my time. less typing, more reading and testing. which is... fine? but it's not the 10x thing people keep claiming
In practical terms, "productivity" is any metric that people with power can manipulate (cheating the numbers, changing narratives, etc.) to steer others' behavior toward their own interests.
ALL OF IT is meaningless. It's a pointless discussion.
I went to look at some of the authors other posts and found this:
https://www.antifound.com/posts/advent-of-code-2022/
So much of our industry has spent the last two decades honing itself into a temple built around the idea of "leet code", from the interview to things like Advent of Code.
Solving brain teasers and knowing your algorithms cold in an interview was always a terrible idea. And the sort of engineers it invited to the table, and the kinds of thinking it propagated, were bad for our industry as a whole.
LLMs make this sort of knowledge moot.
The complaints about LLMs tend to omit the domain being worked in, the means of integration (deep in your IDE vs. cut-and-paste into vim), and what, in a very literal sense, you're asking it to do. These are the critical factors that remain unaired in these sorts of laments.
It's just hubris. The question not being asked is: "Why are you getting better results than me? Am I doing something wrong?"
For me it's simple:
1. Assume you're to work on product/feature X.
2. If God were to descend and give you a very good, reality-tested spec:
3. Would you be done faster? Of course, because as every AI doomer says, writing code was never the bottleneck!!1!
4. So the only bottleneck is getting to the spec.
5. Guess what AI can help you with as well? You can iterate out multiple versions with little mental effort and no emotional sunk-cost investment.
ergo coding is a solved problem
rules of thumb for when to take blog posts about AI coding seriously:
- must be using the latest state of the art model from the big US labs
- must be on a three digit USD per month plan
- must be using the latest version of a full major harness like codex, opencode, pi
- agent must have access to linting, compilation tools and IDE feedback
- user must instruct the agent to use test-driven development, write tests for everything, and only consider something done when the tests pass
- user must give the agent access to relevant documentation, e.g. by cloning relevant repositories
- user must use plan mode and iterate until happy before handing off to agent
- (list is growing every month)
---
if the author of a blog post about AI coding doesn't respect all of these, reading their blog posts is a waste of time, because they don't follow best practices
You can write cope like this all you want, but it doesn't change the fact that I can ship a feature in a few days that previously would have taken me a few weeks.
The most useful reframe I've found: codegen changes the cost structure of writing code, not the cost structure of knowing what to write.
Before, if you had a vague spec you'd write a small prototype to clarify your thinking. Now you can have a complete implementation in minutes — but you still have an unclear spec. You've just moved the uncertainty forward in the process, where it's more expensive to catch.
The teams I've seen use LLMs well treat the output as a rough draft that requires real review, not a finished product. The teams that get into trouble treat generation speed as the goal. Both groups produce the same lines of code. Very different results.