Hacker News

starling · today at 8:09 PM · 2 replies

How did you handle the context window for 20k lines? I assume you aren't feeding the whole codebase in every time given the API costs. I've struggled to keep agents coherent on larger projects without blowing the budget, so I'm curious if you used a specific scoping strategy here.


Replies

simonw · today at 8:25 PM

GPT-5.2 has a 400,000 token context window; Claude Opus 4.5 is just 200,000 tokens. To my surprise, this doesn't seem to limit their ability to work with much larger codebases - the coding agent harnesses have gotten really good at grepping for just the code they need in context, similar to how a human engineer can make changes to a million lines of code without having to hold it all in their head at once.
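A minimal sketch of that grep-then-read pattern: search the repo for a symbol, pull only the surrounding lines into context, and stop once a rough token budget is hit. Everything here (the function name, the context-line and budget values, the word-count token estimate) is illustrative, not any particular harness's actual API.

```python
from pathlib import Path

def search_repo(root, symbol, context_lines=3, token_budget=2000):
    """Return (path, line_no, snippet) tuples around each match,
    capped by a crude token budget - a stand-in for how an agent
    harness scopes a large codebase down to what fits in context."""
    snippets, used = [], 0
    for path in sorted(Path(root).rglob("*.py")):
        lines = path.read_text().splitlines()
        for i, line in enumerate(lines):
            if symbol in line:
                lo, hi = max(0, i - context_lines), i + context_lines + 1
                chunk = "\n".join(lines[lo:hi])
                cost = len(chunk.split())  # word count as a rough token proxy
                if used + cost > token_budget:
                    return snippets  # budget exhausted: stop, don't overflow
                snippets.append((str(path), i + 1, chunk))
                used += cost
    return snippets
```

The agent then reads only those snippets (plus any files it decides to open fully), which is why a 200k-400k window can cover a 20k-line project without ever ingesting the whole thing.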

nurettin · today at 8:36 PM

You don't load the entire project into the context. You let the agent work on a few 600-800-line files, one feature at a time.
