Multi-tier exact-match cache for AI agents backed by Valkey or Redis. LLM responses, tool results, and session state behind one connection. Framework adapters for LangChain, LangGraph, and Vercel AI SDK. OpenTelemetry and Prometheus built in. No modules required - works on vanilla Valkey 7+ and Redis 6.2+.
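To make "multi-tier exact-match" concrete, here's a minimal conceptual sketch of the idea (this is not the library's actual API — the class, tier names, and TTLs below are illustrative assumptions; the real package keys entries in Valkey/Redis rather than in memory):

```typescript
// Conceptual sketch of multi-tier exact-match caching.
// Each tier gets its own namespace and TTL; keys are matched verbatim.
type Entry = { value: string; expiresAt: number };

class ExactMatchCache {
  private store = new Map<string, Entry>();
  constructor(private ttlMs: number) {}

  get(key: string): string | undefined {
    const e = this.store.get(key);
    if (!e) return undefined;
    if (Date.now() > e.expiresAt) {
      this.store.delete(key); // lazy expiry on read
      return undefined;
    }
    return e.value;
  }

  set(key: string, value: string): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}

// One cache per tier, all conceptually behind one connection.
// TTLs are made-up example values.
const tiers = {
  llm: new ExactMatchCache(60_000),      // cached LLM responses
  tool: new ExactMatchCache(30_000),     // cached tool results
  session: new ExactMatchCache(300_000), // session state
};

// Exact match means the key is the verbatim prompt:
// any byte-level difference is a cache miss.
tiers.llm.set("What is 2+2?", "4");
tiers.llm.get("What is 2+2?"); // hit
tiers.llm.get("what is 2+2?"); // miss (case differs)
```

The point of the sketch is the trade-off: exact-match lookups are just a GET with a TTL, so they need no embedding model or vector index, at the cost of missing on paraphrased prompts.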
Shipped v0.1.0 yesterday, v0.2.0 today with cluster mode. Streaming support coming next.
Existing options lock you into either one tier (LangChain caches LLM responses only, LangGraph persists state only) or one framework. This covers both.
npm: https://www.npmjs.com/package/@betterdb/agent-cache

Docs: https://docs.betterdb.com/packages/agent-cache.html

Examples: https://valkeyforai.com/cookbooks/betterdb/

GitHub: https://github.com/BetterDB-inc/monitor/tree/master/packages...
Happy to answer questions.
Can you explain what this does?