Hacker News

acjohnson55 today at 3:42 PM

> But you miss a lot of experiences compared to when you’re actually in the code trenches reading, designing, and tinkering with code on your own.

Completely agree. Working with this tooling is a fundamentally different practice.

I'm not trying to suggest that agentic coding is superior in every way. In my own experience, though, the current gains exceed the drawbacks by a large margin for many applications, and significantly higher gains are within close reach (e.g. weeks).

I spent years in management, and it's not dissimilar to that transition. In my first role as a manager, I found it very difficult to divest myself of the need to have fine-grained knowledge of and control over the team's code. That doesn't scale. I had to learn to set people up for success and manage from a place of uncertainty. I had to learn to think like a risk manager instead of an artisan.

I'll also say that when it comes to solution design, I've found it very helpful to ask the agent for options when a solution looks suboptimal. Oftentimes I can still find great refactoring opportunities, have the agent draw up plans for those improvements, and delegate them to parallel sessions, where the focus can be safely executing a feature-neutral refactor.

Separately from that, I would note that the business doesn't always need us to be making conceptual shifts. Great business value can be delivered with suboptimal architecture.

It is difficult to swallow, but I think that those of us whose market value is based on our ability to develop systems by manipulating code and getting feedback from the running product will find that businesses believe that machines can do this work more than good enough and at vastly higher scale.

For the foreseeable future, there will be places where hands-on coding is superior, but I see that becoming more the exception than the norm, especially in product engineering.


Replies

whaleidk today at 5:33 PM

Your perspective is quite thoughtful, thank you. I do agree that if you are just fixing a bug or updating function internals, +20/-20 is certainly good enough, and I wouldn't oppose AI used there.

I am going to have to agree to disagree overall, though, because the second there is something the AI can't do, the maintenance time for a human to learn the context and solve the problem skyrockets (in practice, for me) in a way I find unacceptable. And I may be wrong, but I don't see LLMs improving enough to close that gap soon, because that would require a fundamental shift away from what LLMs are under the hood.
