I built Axe because I got tired of every AI tool trying to be a chatbot.
Most frameworks want a long-lived session with a massive context window doing everything at once. That's expensive, slow, and fragile. Good software is small, focused, and composable... AI agents should be too.
Axe treats LLM agents like Unix programs. Each agent is a TOML config with one focused job: code reviewer, log analyzer, commit message writer. Run them from the CLI, pipe data in, get results out; chain them together with pipes, or trigger them from cron, git hooks, or CI.
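To make that concrete, here's a sketch of what one of those agent configs might look like. The field names here are illustrative guesses, not Axe's actual schema:

```toml
# reviewer.toml -- hypothetical field names, for illustration only
name   = "reviewer"
model  = "anthropic/claude-sonnet"   # any provider in models.dev format
prompt = """
You are a code reviewer. Read the diff on stdin and flag
bugs, risky changes, and style issues. Be terse.
"""
```

Then `git diff | axe run reviewer` would feed the diff to that agent as stdin.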
What Axe is:
- 12MB binary, two dependencies. No framework, no Python, no Docker (unless you want it)
- Stdin piping: `git diff | axe run reviewer` just works
- Sub-agent delegation: agents call other agents via tool use, depth-limited
- Persistent memory: agents can optionally remember across runs without you managing state
- MCP support: connect any MCP server to your agents
- Built-in tools: web_search and url_fetch out of the box
- Multi-provider: Anthropic, OpenAI, Ollama, or anything in models.dev format
- Path-sandboxed file ops: agents stay locked to a working directory
Written in Go. No daemon, no GUI.
What would you automate first?
> Each agent is a TOML config with one focused job: code reviewer, log analyzer, commit message writer. Run them from the CLI, pipe data in, get results out.
I'm a bit skeptical of this approach, at least for building general purpose coding agents. If the agents were humans, it would be absolutely insane to assign such fine-grained responsibilities to multiple people and ask them to collaborate.
Now what we need is a chat interface to develop these config files.
Looks pretty interesting!
Tiny note: there's a typo in your repo description.
It's exciting to see so much experimentation when it comes to form factors for agent orchestration!
The first question that comes to mind is: how do you think about cost control? Putting a ton of tokens in a giant context window is expensive, but unintentionally fanning out to 10 agents with slightly smaller context windows is even more expensive. The answer might be "well, don't do that," and that certainly maps to the UNIX analogy, where you're given powerful and possibly destructive tools, and it's up to you to construct the workflow carefully. But I'm curious how you would approach budgeting when using Axe.