Hey HN!
Wanted to show our open source agent harness called Gambit.
If you’re not familiar, agent harnesses are sort of like an operating system for an agent... they handle tool calling, planning, and context window management, so they require less developer orchestration.
Normally you might see an agent orchestration framework pipeline like:
compute -> compute -> compute -> LLM -> compute -> compute -> LLM
With an agent harness, we invert this, so it’s more like:
LLM -> LLM -> LLM -> compute -> LLM -> LLM -> compute -> LLM
Essentially, you describe each agent in either a self-contained Markdown file or a TypeScript program. Your root agent can bring in other agents as needed, and we give you a typesafe way to define the interfaces between those agents. We call these decks.
Agents can call agents, and each agent can be designed with whatever model params make sense for your task.
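To make that concrete, here’s a simplified sketch of the idea in TypeScript (not our exact API, just the shape of a deck):

    // Simplified sketch, not the exact Gambit API: a deck is a typed
    // description of an agent, with its own model params and sub-agents.
    interface Deck<I, O> {
      name: string;
      prompt: string;                       // or a path to a markdown deck
      modelParams: { model: string; temperature?: number };
      subAgents?: Deck<unknown, unknown>[]; // other agents this agent can call
      run?: (input: I) => Promise<O>;       // optional TypeScript implementation
    }

    const summarizer: Deck<{ transcript: string }, { summary: string }> = {
      name: "summarizer",
      prompt: "Summarize the transcript into key decisions and action items.",
      modelParams: { model: "gpt-4o-mini", temperature: 0.2 },
    };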
Additionally, each step of the chain gets automatic evals, which we call graders. A grader is just another deck type… but it’s designed to evaluate and score conversations (or individual conversation turns).
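In sketch form (simplified again), a grader takes a conversation or a single turn and returns a score plus a rationale:

    // Simplified sketch: a grader is just another deck whose output
    // is a score and a rationale for a conversation or a single turn.
    interface Turn { role: "user" | "assistant"; content: string }

    interface GraderDeck {
      name: string;
      rubric: string; // what "good" looks like, in plain language
      modelParams: { model: string; temperature?: number };
      grade: (turns: Turn[]) => Promise<{ score: number; rationale: string }>;
    }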
We also have test agents you can define on a deck-by-deck basis. They’re designed to mimic scenarios your agent would face and generate synthetic data for either humans or graders to grade.
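Roughly (again, a simplified sketch rather than the real API), a test agent is pointed at a deck, given a scenario, and asked to produce synthetic conversations:

    // Simplified sketch: a per-deck test agent plays out a scenario against
    // the agent under test and produces a transcript for humans or graders.
    interface TestAgent {
      deckUnderTest: string; // name of the deck being exercised
      scenario: string;      // e.g. "angry customer asking for a refund"
      maxTurns: number;      // how long to let the synthetic conversation run
      generate: () => Promise<Array<{ role: "user" | "assistant"; content: string }>>;
    }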
Prior to Gambit, we built an LLM-based video editor and weren’t happy with the results, which is what brought us down this path of improving inference-time LLM quality.
We know it’s missing some obvious parts, but we wanted to get this out there to see how it could help people or start conversations. We’re really happy with how it’s working with some of our early design partners, and we think it’s a way to implement a lot of interesting applications:
- Truly open source agents and assistants, where logic, code, and prompts can be easily shared with the community.
- Rubric-based grading to guarantee you (for instance) don’t accidentally leak PII
- Spin up a usable bot in minutes and have Codex or Claude Code use our command-line runner / graders to build a first version that’s pretty good with very little human intervention.
We’ll be around if y’all have any questions or thoughts. Thanks for checking us out!
Walkthrough video: https://youtu.be/J_hQ2L_yy60
Is this an alternative to Mastra (https://mastra.ai/docs)?
How would it compare?
This is an interesting direction for agent frameworks. What stood out to me is the shift from simple tool orchestration to agents that can reason, call other agents, and self-manage workflows. That’s something we’ve been thinking about a lot while building SalesPlay — especially around how autonomous sales agents need clear evaluation, guardrails, and accountability to actually be useful in real GTM teams. The built-in grading/evaluation angle here feels like a practical step toward making agents less brittle and more production-ready. Curious to see how this evolves in real-world use cases.
We ran into similar reliability issues while building GTWY. What surprised us was that most failures weren’t about model quality, but about agents being allowed to run too long without clear boundaries.
What helped was treating agents less like “always-on brains” and more like short-lived executors. Each step had an explicit goal, explicit inputs, and a defined end. Once the step finished, the agent stopped and context was rebuilt deliberately.
Harnesses like this feel important because they shift the problem from “make the model smarter” to “make the system more predictable.” In our experience, reliability came more from reducing degrees of freedom than from adding intelligence.
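Roughly what that looks like in code (a generic TypeScript sketch of the pattern, not our actual implementation):

    // Generic sketch of the "short-lived executor" pattern: each step has an
    // explicit goal, explicit inputs, a small turn budget, and a defined end.
    interface StepResult { done: boolean; output: string }

    async function runStep(
      goal: string,
      inputs: Record<string, string>,
      callModel: (prompt: string) => Promise<string>,
      maxTurns = 3,
    ): Promise<StepResult> {
      // Context is rebuilt deliberately for this step, not carried over forever.
      let context = `Goal: ${goal}\nInputs: ${JSON.stringify(inputs)}`;
      for (let turn = 0; turn < maxTurns; turn++) {
        const output = await callModel(context);
        if (output.includes("DONE")) return { done: true, output };
        context += `\nPrevious attempt: ${output}`;
      }
      // Hard stop: the step ends whether or not the model thinks it's finished.
      return { done: false, output: "step budget exhausted" };
    }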