I stopped writing code a year ago. Claude Code is a multiplier when you know how to use it.
Treat it like an intern: give it feedback, have it build skills, review every session, make it write unit tests. Red, green, refactor. Spend time up front reviewing the plan. Clearly communicate your intent and the outcomes you want. If you say "do x," it has to guess what you want. If you say "I want this behaviour and this behaviour, 100% branch unit tested, adhering to contributing guidelines and best practices, etc.," it will take a few minutes longer, but the quality increases significantly.
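The red-green-refactor loop mentioned above can be sketched in a few lines. This is a toy example; `slugify` is a hypothetical function, not anything from the comment, and the point is only the ordering: the assertions (red) are written before the implementation exists, then just enough code is written to pass (green), then cleaned up with the tests as a safety net (refactor).

```python
import re

def slugify(text: str) -> str:
    """Lowercase, trim, and collapse runs of non-alphanumerics into '-'."""
    text = re.sub(r"[^a-z0-9]+", "-", text.strip().lower())
    return text.strip("-")

# The "red" step: these assertions existed before the function body did,
# and they are exactly the kind of behaviour spec you hand the agent.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  --Already__Slugged--  ") == "already-slugged"
print("all green")
```

Prompting with the failing tests included ("make these assertions pass, 100% branch coverage") gives the agent a checkable target instead of a guess.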
I uninstalled VS Code and built my own dashboard that organizes my work instead. I get instant notifications, and each PR kicks off a highly opinionated automated review using the Claude Code team features.
If you aren't doing this level of work by now, you will be automated soon. Software engineering is a mostly solved problem at this point; you need to embed your best practices in your agent and keep an eye on it and refine it over time.
> Software engineering is a mostly solved problem at this point
I guess that's why Claude Code has 0 open issues on GitHub. Since software engineering is solved, their autonomous agents can easily fix their own software much better and faster than human devs. They can just add "make no mistakes" to their prompt and the model can solve any problem!
Oh wait, they have 5,000+ open issues on GitHub[1]. I have yet to be convinced that this is a solved problem.
> you need to embed your best practices in your agent and keep an eye on it and refine it over time.
Sincere question, how do beginners to the field (interns, juniors) do this when they don't have any best practices yet?
Very cool. What have you built with this method? Do you mind sharing details about the kinds of projects?
> If you aren't doing this level of work by now, you will be automated soon.
It's harder and harder to detect sarcasm these days but in case you're being serious, I've tested a similar setup and I noticed Claude produces perfectly plausible code that has very subtle bugs that get harder and harder to notice. In the end, the initial speedup was gone and I decided to rewrite everything by hand. I'm working on a product where we need to understand the code base very well.
Sounds like tech debt as a service. If the code review is automated, how can you be sure the code isn't full of security or maintainability issues?
Do you have any kind of proof you can show us? This reads like every other AI hype post but I have still never seen anyone demonstrate anything but proof of concept apps and imaginary workloads.
Why exactly do you think people not doing that kind of work will be automated but your kind of work won't be automated?
If AI really is all that, then whatever "special" thing you are doing will be automated as well.
> Software engineering is a mostly solved problem at this point
You from 2 months ago:
> "LLMs are great coders, but subpar developers." https://news.ycombinator.com/item?id=46434304
Interesting. That's a lot of progress in 2 months!
>>>> Software engineering is a mostly solved problem at this point...
I'll believe it when AI can tell me when a project will be done. I've asked my developer friends about this and I get a blank stare, like I'm stupid for asking.
I have read comments about this on X, here, and other places, yet I have never seen proof that it is an actual productivity boost.
I use Claude Opus (4.5, 4.6) all the time and constantly catch it making subtle mistakes.
Are you really being more productive (let's say 3x more), or do you just feel that way because you are constantly prompting Claude?
Maybe I’m wrong, but I don’t buy it.