
j45 · today at 12:34 AM

Fair points. Share how you're learning; there seems to be more than one way to the same result.


Replies

icedrop · today at 5:57 AM

Maintain a good agents.md with notes on the code grammar, structure, and architecture conventions your org uses. Then, for each problem, prompt it step by step, as if narrating a junior engineer's monologue.
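A minimal agents.md along these lines might look like the sketch below. Every path, rule, and command in it is an illustrative assumption, not a convention from any particular org:

```markdown
# agents.md — conventions for AI coding agents (hypothetical example)

## Structure
- Services live in `src/services/`, one module per domain concept.
- Shared helpers go in `src/lib/`; services must not import from each other.

## Style
- TypeScript strict mode; no `any` without a comment explaining why.
- Prefer small pure functions; keep side effects at the edges.

## Workflow
- Run `npm test` before proposing a diff.
- Keep each diff scoped to one behavior change; list follow-ups separately.
```

The point is less the specific rules than that the agent can read them up front, so you don't re-explain house conventions in every prompt.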

e.g. as I am dropped into a new codebase:

1. Ask Claude to find the section of code that controls X

2. Take a look manually

3. Ask it to explain the chain of events

4. Ask it to implement change Y, so that X exhibits the behavior we want

5. Ask it about any implementation details you don't understand or want clarified; it usually self-edits well.

6. You can ask it to add comments, tests, etc., at this point, and it should run tests to confirm everything works as expected.

7. Manually step through tests, then code, to sanity check (it can easily have errors in both).

8. Review its diff until you're satisfied.

9. Ask it to review its own diff as if it were a senior engineer.
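Concretely, the steps above might translate into a prompt sequence like the following. X, Y, and every file or feature named here are placeholders I've invented for illustration:

```markdown
1. "Find the code that controls X (say, session timeout handling)."
2. (Read the files it points to yourself.)
3. "Walk me through the chain of events from request arrival to X firing."
4. "Implement change Y: make X configurable per-tenant. Smallest diff possible."
5. "Why did you add that guard clause in step 4? Is it needed?"
6. "Add comments and tests for the new path, then run the test suite."
7. (Step through the tests and the code manually.)
8. (Review the diff yourself.)
9. "Review your own diff as a skeptical senior engineer would. List concerns."
```

Keeping each prompt scoped to one step, rather than asking for the whole change at once, is what makes the intermediate checks (2, 7, 8) possible.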

This is the method I've been using during my first week onboarding into a new codebase. If the codebase is massive and the READMEs are weak, AI copilot tools can cut overall PR time by 2-3x.

I imagine the payoff shrinks as developer familiarity increases. From my observation, it's especially great at automating code-finding and logic tracing, which usually involve a lot of context-switching and open windows; human developers often struggle with this more than LLMs. It's also great for creating scaffolding and project structure. It's weak overall at debugging complex issues and at less-documented public API logic, and it often fails at a junior level.
