What model are you working with where you still get good results at 25k?
To your q, I put a lot of effort into making my prompts as small as possible (to get the best quality output): I go as far as removing imports from source files, writing interfaces and types to use in context instead of the fat implementation code, and writing task-specific project/feature documentation. (I automate some of this with a library I use to generate prompts from code and other files - think templating language with extra flags.) Even so, for some tasks my prompt reaches 10k tokens, and at that point I find the output quality isn't good enough.
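For the curious, here's a minimal sketch of that stripping-down step in Python using the stdlib `ast` module - the commenter's actual library isn't named, and `slim_source` is a hypothetical helper: it drops imports and function bodies, keeping only signatures and class structure, so a file costs far fewer prompt tokens.

```python
import ast


def slim_source(code: str) -> str:
    """Reduce a Python source file to its 'interface': no imports,
    no function bodies, just signatures and class structure."""
    tree = ast.parse(code)
    lines = []
    for node in tree.body:
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            continue  # drop imports entirely
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            ret = f" -> {ast.unparse(node.returns)}" if node.returns else ""
            lines.append(f"def {node.name}({ast.unparse(node.args)}){ret}: ...")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}:")
            for item in node.body:
                if isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    ret = f" -> {ast.unparse(item.returns)}" if item.returns else ""
                    lines.append(
                        f"    def {item.name}({ast.unparse(item.args)}){ret}: ..."
                    )
        else:
            # keep top-level constants, type aliases, etc. as-is
            lines.append(ast.unparse(node))
    return "\n".join(lines)


sample = '''
import os
from typing import List

def add(a: int, b: int) -> int:
    return a + b

class Greeter:
    def greet(self, name: str) -> str:
        return f"hi {name}"
'''

print(slim_source(sample))
```

Feeding the model the slimmed version keeps the types and call signatures it needs to reason about the code while cutting the token cost of the implementation details.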
I'm working with Anthropic models, and my combined system prompt is already 22k. It's a big project with lots of skill and agent definitions. It seems to work just fine until the context reaches 60k-70k tokens.