I do not understand why this view is so unpopular today. I feel like everyone now thinks that basically all of SW engineering is outdated, that we are supposed to forget all the lessons learned and just let agents go through this. My opinion is to not care who did the job, but to apply the same standard to human and AI output. I don't buy "we should not look at the code". If we should not look at it, what should we check instead to keep the same control over the final product? Because having no control over the final product is just stupid right now.
Reads at least partially like LLM writing, for example:
> When code production gets cheap, the cost doesn't disappear. It migrates.
> It was true then. It is unavoidably true now.
Personally I've found one of the biggest gains with coding agents is in helping me read code. Actually - that's a lie. I don't read the code. Mostly (unless my spidey-sense goes off) I ask the LLM to read the code and tell me what it does.
And then I make a decision based on that.
I guess I'm wondering if the article is missing half the picture. Yes, AI is wrong some of the time (and that % varies based on a host of variables). But it can read code, not just write it. And that matters, because it changes the trade-offs this article is weighing up.
Related (maybe the same thing): whenever an agent is planning, there are often architecture and product choices that it asks humans to make. None of this intent is captured in the code or comments. We started a decisions.md file and updated CLAUDE.md and AGENTS.md so that the agent creates an entry in decisions.md every time it has to ask a human what to do (a sketch of the instruction is below). It captures the intent, so at least we have a doc that describes why certain choices were made.
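If it helps, the instruction we added is roughly this (paraphrasing from memory; the wording and the entry format are just our own convention, not anything standard):

    When a plan requires a human to choose between approaches
    (architecture, libraries, product behavior, trade-offs), append an
    entry to decisions.md before continuing, shaped like:

    ## YYYY-MM-DD: short title of the decision
    - Question: what was ambiguous and why it mattered
    - Options: the alternatives presented, with trade-offs
    - Decision: what the human chose
    - Rationale: why, ideally in the human's own words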
Curious what other teams are doing to keep encouraging people to think critically about their code? I've been finding it harder to keep people motivated and engaged with all the changes coming in. And I can't blame them; it's been overwhelming. Is everyone else just using more AI?
I worried this blog post was going to pivot into a marketing pitch for some product, but no, it just describes the issue where the AI tool that generates your code probably won't document the reasons for the choices it makes. That documentation problem existed in the pre-AI era too, except that then the reasons might at least live in the heads of your co-workers and could possibly be teased out.
I know nothing about AI code generation (or about AI in general), but I wonder if you could include in your prompt a request that the AI describe the reasons for its choices and actually include those reasons as comments in the code.
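Naively I'd imagine something like this in the prompt (completely untested, just a guess at the shape):

    For every non-obvious choice you make (algorithm, library, data
    structure, error handling), leave a short comment starting with
    "WHY:" that explains the reason and any alternative you rejected.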
When I generate code with AI, I read through each change as the agent makes it (babysitting). If I don't understand something, I ask for an explanation right away. At least by the end I have a grasp of what each change does and the reasoning behind it. Then I can make a PR and highlight the same info for my reviewer and for longevity. Our codebase style is not to litter comments everywhere; we go back to the code review for details and discussion. Obviously, this only works if the changes are small.
What is wrong with using LLMs to analyze and explain code? Am I missing something? Compared to writing code, this is an even easier task to accomplish with AI.
There is something to this, but on the concluding paragraph: I think these tools are already extremely good at helping us understand code, in addition to helping us generate it.
Coding agents, to me, mean shifting my brain from memory-bound to compute-bound.
> The code they [LLMs] produce is often fine. It works. It passes tests. It might ship as-is
The blog posts they [LLMs] write are often fine. They work. They pass tests. They might ship as-is.

I think a huge gap in the market today is documentation that is both easy for humans to navigate and understand, and readily ingestible by agents.
Some context on when that previous experience (Heartland outsourcing to India) happened would be helpful. The 90s? The 00s? The 10s?
> The cost of producing code has collapsed. AI tools can generate functional, adequate, perfectly average code at a speed and cost that would have been unimaginable even five years ago. And like the outsourcing wave of the early 2000s, the economics are real and rational. Nobody is wrong for using these tools. The code they produce is often fine. It works. It passes tests. It might ship as-is.
After using AI for months (Claude, Gemini, ChatGPT), I find it extremely rare for their code to work 'as is' on the first shot; it almost always requires several iterations and cleanup of edge cases.

When it does work 'first shot', it's usually because it's transferring existing working code to a new project that is only slightly different.
> The code they [LLMs] produce is often fine. It works. It passes tests. It might ship as-is.
I don't disagree, but I've been thinking about this a bit: a lot of _human_-written code was/is less than fine. And a lot of human devs didn't understand the context when they wrote it.
I'm not advocating that we fire devs, or evangelizing that LLMs are awesome. But I do wish there were a slightly more honest take on the pre-LLM world: it's not just about cost reduction, it's about solving some long-term structural deficiencies of the industry.