Depending on how large your codebase is, hopefully not. At that point, use something like the IX plugin to ingest the codebase and track context externally, rather than relying on the LLM itself.
- naiveTokens = 19.4M — what ix estimates it would have cost to answer your queries without graph intelligence (i.e., dumping full files/directories into context)
- actualTokens = 4.7M — what ix's targeted, graph-aware responses actually used
- tokensSaved = 14.7M — the difference between the two
This is crazy.
`tokensSaved = naiveTokens - actualTokens`
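To sanity-check the numbers above, here is a minimal sketch of that savings math. The function name `token_savings` and the percentage calculation are illustrative additions, not part of ix itself:

```python
# Sketch of the savings formula: tokensSaved = naiveTokens - actualTokens.
# Names here are illustrative; they mirror ix's reported fields but this
# is not ix's actual implementation.

def token_savings(naive_tokens: float, actual_tokens: float) -> tuple[float, float]:
    """Return (tokens_saved, percent_saved) for the given token counts."""
    saved = naive_tokens - actual_tokens
    percent = saved / naive_tokens * 100
    return saved, percent

saved, pct = token_savings(19.4e6, 4.7e6)
print(f"tokensSaved = {saved / 1e6:.1f}M ({pct:.0f}% reduction)")
```

Plugging in the figures above gives 14.7M tokens saved, roughly a 76% reduction versus dumping full files into context.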