Hacker News

handfuloflight · today at 6:52 AM

One moment you're speaking about context, but talking in kilobytes. Can you confirm the token savings data?

And when you say it only returns summaries, does this mean there are LLM calls happening in the sandbox?


Replies

mksglu · today at 7:08 AM

For your second question: No LLM calls. Context Mode uses algorithmic processing — FTS5 indexing with BM25 ranking and Porter stemming. Raw output gets chunked and indexed in a SQLite database inside the sandbox, and only the relevant snippets matching your intent are returned to context. It's purely deterministic text processing, no model inference involved.
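The pipeline described above can be sketched in a few lines of Python with the stdlib sqlite3 module (assuming your SQLite build includes FTS5, as most modern builds do). The table name, column name, and sample data here are illustrative, not the project's actual schema:

```python
import sqlite3

def index_chunks(chunks):
    # Index text chunks in an in-memory FTS5 table with Porter stemming.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE VIRTUAL TABLE chunks USING fts5(body, tokenize='porter')")
    conn.executemany("INSERT INTO chunks(body) VALUES (?)", [(c,) for c in chunks])
    return conn

def search(conn, query, limit=3):
    # bm25() is FTS5's built-in ranking function; lower scores rank better,
    # so ascending ORDER BY puts the most relevant snippets first.
    rows = conn.execute(
        "SELECT body FROM chunks WHERE chunks MATCH ? ORDER BY bm25(chunks) LIMIT ?",
        (query, limit),
    ).fetchall()
    return [r[0] for r in rows]

raw_output = [
    "error: connection refused at port 5432",
    "info: cache warmed in 120ms",
    "postgres connections retried three times before failing",
]
conn = index_chunks(raw_output)
# Porter stemming maps "connection" and "connections" to the same stem,
# so both matching chunks are returned, ranked by BM25.
print(search(conn, "connection"))
```

No model is involved at any point: indexing, stemming, and ranking are all deterministic SQLite operations.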

mksglu · today at 6:55 AM

Hey! Thank you for your comment! There are test examples in the README. Could you please try them? Your feedback is valuable.