Hacker News

Show HN: Semble – Code search for agents that uses 98% fewer tokens than grep

52 points by Bibabomas | today at 3:37 PM | 19 comments

Hey HN! We (Stephan and Thomas) recently open-sourced Semble. We kept running into the same problem while using Claude Code on large codebases: when the agent can't find something directly, it falls back to grep, reads full files, or launches subagents. This uses a lot of tokens and often still misses the relevant code. Existing tools for this were either too slow to index on demand, required API keys, or had poor retrieval quality.

Semble is our solution for this. It combines static Model2Vec embeddings (using our latest static model: potion-code-16M) with BM25, fused via RRF and reranked with code-aware signals. Everything runs on CPU, since no transformer inference is involved. On our benchmark of ~1250 query/document pairs across 63 repos and 19 languages, it uses 98% fewer tokens than grep+read and reaches 99% of the retrieval quality of a 137M-parameter code-trained transformer, while being ~200x faster.
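The fusion step described above (BM25 and embedding rankings combined with reciprocal rank fusion) can be sketched roughly like this. Note this is an illustrative sketch, not Semble's actual implementation; the function name, the k=60 constant, and the example file names are assumptions.

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: merge several ranked lists of doc IDs.

    Each document's fused score is sum(1 / (k + rank)) over every list
    it appears in; k=60 is the constant from the original RRF paper.
    Documents ranked highly by multiple retrievers rise to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ranked results for one query:
bm25_hits = ["parser.py", "lexer.py", "utils.py"]
embed_hits = ["lexer.py", "parser.py", "config.py"]

fused = rrf_fuse([bm25_hits, embed_hits])
# parser.py and lexer.py, which appear near the top of both lists,
# score higher than utils.py and config.py, which appear in only one.
```

A reranker with code-aware signals would then reorder this fused list before results are returned to the agent.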

Main features:

- Token-efficient: 98% fewer tokens than grep+read

- Fast: ~250ms to index a typical repo on our benchmark, ~1.5ms per query on CPU (very large repos may take longer)

- Accurate: 0.854 NDCG@10, 99% of the best transformer setup we tested

- MCP server: drop-in for Claude Code, Cursor, Codex, OpenCode

- Zero config: no API keys, no GPU, no external services

Install in Claude Code with: claude mcp add semble -s user -- uvx --from "semble[mcp]" semble
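For MCP clients that take a JSON config file rather than the `claude mcp add` command, the equivalent server entry would look something like this (assuming the standard `mcpServers` config shape; check the README for the exact per-client setup):

```json
{
  "mcpServers": {
    "semble": {
      "command": "uvx",
      "args": ["--from", "semble[mcp]", "semble"]
    }
  }
}
```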

Or check our README for other installation instructions, benchmarks, and methodology:

Semble: https://github.com/MinishLab/semble

Benchmarks: https://github.com/MinishLab/semble/tree/main/benchmarks

Model: https://huggingface.co/minishlab/potion-code-16M

Let us know if you have any feedback or questions!


Comments

jerezzprime | today at 7:48 PM

I'd be interested in seeing actual agent benchmarks (e.g. CC or Copilot CLI with grep removed and this tool used instead).

For example, I have explored RTK and various LSP implementations and find that the models are so heavily RL'd on grep that they do not trust results in other forms and will continually retry or reread, so all the token savings are lost.

smcleod | today at 9:16 PM

How does it compare to context-mode or serina, which are both well established now?

singpolyma3 | today at 9:02 PM

Semantic code search seems like a useful tool for humans too, not just agents.

vikeri | today at 9:10 PM

Very curious to give it a spin, but why write a CLI in Python? Wouldn't it be faster and more portable in Go or Rust?

esafranchik | today at 5:25 PM

Is the benchmark measuring one-shot retrieval accuracy, or coding-agent response accuracy?

mrpf1ster | today at 6:38 PM

Does this work well for non-code documents too? Say, API docs or AI memory files?

ludicrousdispla | today at 6:06 PM

grep doesn't need tokens, so what is 98% fewer than zero?
