
icedrop · today at 5:57 AM

Maintain a good agents.md with notes on the code grammar/structure/architecture conventions your org uses, then for each problem, prompt it step-by-step, as if narrating a junior engineer's inner monologue.
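For reference, a minimal agents.md along these lines might look like the sketch below. Every convention in it is an invented placeholder to show the shape of the file, not anything from the commenter's actual setup:

```markdown
# agents.md

## Architecture
- Monorepo: `services/` holds backend services, `packages/` holds shared libraries.
- Each service exposes its public API from a single entry module; never import
  another service's internals directly.

## Code conventions
- Strict typing everywhere; no escape-hatch types outside test helpers.
- Domain code returns result/error values; throw only at process boundaries.

## Testing
- Unit tests live next to the code they cover; run the whole suite before
  proposing a diff.
- Every behavior change needs a test that fails without the change.
```

The point is less the specific rules than giving the agent a stable, skimmable reference so you don't re-explain conventions in every prompt.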

e.g. as I am dropped into a new codebase:

1. Ask Claude to find the section of code that controls X

2. Take a look manually

3. Ask it to explain the chain of events

4. Ask it to implement change Y, to make X behave the way we want

5. Ask it about any implementation details you don't understand, or want clarification on -- it usually self-edits well.

6. You can ask it to add comments, tests, etc., at this point, and it should run tests to confirm everything works as expected.

7. Manually step through tests, then code, to sanity check (it can easily have errors in both).

8. Review its diff to satisfaction.

9. Ask it to review its own diff as if it were a senior engineer.
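Concretely, a few of the steps above might read like this. The scenario (webhook retry logic) and all the wording are made-up examples of the pattern, not exact prompts from this workflow:

```text
Step 1: "Where in this codebase is the retry logic for outbound webhooks configured?"
Step 2: (open the files it points at and skim them yourself)
Step 3: "Walk me through the chain of events from a webhook being enqueued to the
         final retry giving up."
Step 4: "Change the backoff from a fixed 30s to exponential with jitter, capped at
         10 minutes. Keep the existing config shape."
Step 9: "Review your own diff as if you were a senior engineer doing a PR review;
         list anything you'd push back on before approving."
```

Keeping each prompt scoped to one step makes it easy to catch a wrong turn before it compounds into a bad diff.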

This is the method I've been using during week 1 of onboarding onto a new codebase. If the codebase is massive and the READMEs are weak, AI copilot tools can cut overall PR time by 2-3x.

I imagine the overall benefit dips as developer familiarity increases. From my observation, it's especially great for automating code-finding and logic tracing, which often involve a bunch of context-switching and open windows; human developers often struggle with this more than LLMs. It's also great for creating scaffolding/project structure. It's weak overall at debugging complex issues and at less-documented public API logic, and it often has junior-level failures.


Replies

extr · today at 7:07 AM

Great walkthrough, I might send your comment to my coworkers. I use AI to write pretty much 100% of my code and my process looks similar. For writing code, you really want to step through each edit one by one and course-correct it as you go. A lot of times it's obvious when it's taking a suboptimal approach and it's much easier to correct before the wrong thing is written. Plus it's easier to control this way than trying to overengineer rules files to get it to do exactly what you want. The "I'm running 10 autonomous agents at once" stuff is a complete joke unless you are a solo dev just trying to crap something working out.

I use Sonnet 4.5 exclusively for this right now. Codex is great if you have some kind of high-context tricky logic to think through. If Sonnet 4.5 gets stuck I like to have it write a prompt for Codex. But Codex is not a good daily driver.

blks · today at 8:43 AM

As usual with people describing their AI workflows, I'm amazed at how complicated and hand-holding the whole process is. It sounds like you're spending the time you would otherwise spend on the task itself struggling with AI tools instead.