One moment you're talking about context savings, the next you're measuring in kilobytes. Can you confirm the token savings data?
And when you say it only returns summaries, does that mean there are LLM calls happening in the sandbox?
Hey! Thanks for your comment! There are test examples in the README; could you please try them and see what savings you get? Your feedback is valuable.
For your second question: no LLM calls. Context Mode uses purely algorithmic processing: FTS5 indexing with BM25 ranking and Porter stemming. Raw output gets chunked and indexed in a SQLite database inside the sandbox, and only the snippets matching your intent are returned to the model's context. It's deterministic text processing with no model inference involved.
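To make the idea concrete, here's a minimal sketch of that kind of pipeline using Python's built-in sqlite3 module. This is not Context Mode's actual schema or chunking strategy; the table name, the naive fixed-size chunking, and the sample text are all illustrative. It assumes your SQLite build was compiled with FTS5 enabled (true for most stock Python distributions).

```python
import sqlite3

# In-memory SQLite DB with an FTS5 virtual table.
# The 'porter' tokenizer applies Porter stemming at index time,
# so a query for "failing" also matches "failed".
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE VIRTUAL TABLE chunks USING fts5(content, tokenize='porter')"
)

# Simulated raw tool output to be indexed.
raw_output = (
    "Test session started. "
    "test_login passed in 0.02s. "
    "test_checkout failed: AssertionError on line 42. "
    "3 tests collected, 2 passed, 1 failed."
)

# Naive fixed-size chunking, purely for illustration; a real
# implementation would split on token or line boundaries.
chunk_size = 60
chunks = [
    raw_output[i:i + chunk_size]
    for i in range(0, len(raw_output), chunk_size)
]
conn.executemany(
    "INSERT INTO chunks(content) VALUES (?)",
    [(c,) for c in chunks],
)

# Query by intent. bm25() is FTS5's built-in ranking function;
# lower scores mean better matches.
rows = conn.execute(
    "SELECT content FROM chunks WHERE chunks MATCH ? ORDER BY bm25(chunks)",
    ("failing",),
).fetchall()
for (snippet,) in rows:
    print(snippet)
```

Only the matching chunks come back, not the whole output, which is where the context savings come from: everything above is plain SQL and string handling, with no model in the loop.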