Hi everyone,
I run a generative AI infra company (a unified API for 600+ models). Our team started deploying AI agents for our marketing and lead-gen ops: content, engagement, and analytics across multiple X accounts.
OpenClaw worked fine for single agents. But at ~14 agents across 6 accounts, the problem shifted from "how do I build agents" to "how do I manage them."
Deployment, monitoring, team isolation, figuring out which agent broke what at 3am. Classic orchestration problem.
So I built klaw, modeled on Kubernetes:

- Clusters: isolated environments per org/project
- Namespaces: team-level isolation (marketing, sales, support)
- Channels: connect agents to Slack, X, Discord
- Skills: reusable agent capabilities via a marketplace
The CLI works like kubectl:

  klaw create cluster mycompany
  klaw create namespace marketing
  klaw deploy agent.yaml
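To give a feel for agent.yaml, here's a simplified, illustrative sketch; the field names are placeholders for the concepts above (namespace, channels, skills), not the exact schema:

  # Illustrative only: field names are placeholders, not klaw's exact schema
  apiVersion: klaw/v1          # hypothetical API version
  kind: Agent
  metadata:
    name: content-writer
    namespace: marketing       # team-level isolation
  spec:
    model: gpt-4o              # placeholder; any model behind the unified API
    channels:                  # where the agent talks: X, Slack, Discord
      - x
      - slack
    skills:                    # reusable capabilities from the marketplace
      - content-calendar       # hypothetical skill name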
I also rewrote the whole thing from Node.js to Go; agents went from 800MB+ to under 10MB each.
Quick usage example: I run a "content cluster" where each X account is its own namespace. An agent misbehaving on one account can't affect the others. Adding a new account is klaw create namespace [account] plus deploying the same config: about 30 seconds.
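In practice that's two commands (the account name below is just a placeholder):

  klaw create namespace acme-brand   # placeholder account name
  klaw deploy agent.yaml             # same config reused for the new namespace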
The key differentiator vs frameworks like CrewAI or LangGraph: those define how agents collaborate on tasks. klaw operates one layer above — managing fleets of agents across teams with isolation and operational tooling. You could run CrewAI agents inside klaw namespaces.
Happy to answer questions.
For us, we actually moved away from k8s to dedicated VMs on Proxmox for our agents. We initially had a containerized environment manager running in k8s, but found that VMs give you things containers struggle with: full desktop environments with X11 for GUI automation, persistent state across sessions, and dedicated resources per agent. Each agent gets its own Debian VM with a complete OS, which makes it much easier to run tools like xdotool and browser automation that don't play well in containers.
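As a concrete example of the kind of GUI automation I mean: something like the snippet below assumes a real X11 session with a window manager and a running browser, which each VM gives you out of the box and a headless container usually doesn't (window name and URL are just placeholders).

  # Assumes a live X11 desktop inside the VM; values are placeholders
  export DISPLAY=:0
  xdotool search --name "Firefox" windowactivate --sync   # focus the browser window
  xdotool key ctrl+l                                      # jump to the address bar
  xdotool type "https://example.com"
  xdotool key Return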
Ha! This is great. I've been waiting for someone to make this.
Giving an LLM a computer makes it way more powerful; giving it a Kubernetes cluster should extend that power much further, and it fits naturally with the way LLMs work.
I think this abstraction can scale for a good long while. Past this, what do you give the agent? Control of a whole data center, I guess.
I'm not sure it will replace OpenClaw altogether, since Kubernetes is kind of niche and scary to a lot of people. But I bet this will become quite popular with the most sophisticated builders, and who knows, maybe far beyond that cohort too.
Congrats on the launch!
I don’t quite get what makes it Kubernetes for AI agents. Is the idea to pool hardware together to distribute AI agent tasking? Is the idea to sandbox agents in a safe runtime with configuration management? Is it something else entirely? Both? I couldn’t figure it out from the README alone.
On first read I thought this was an operator for k8s, but it is just comparing itself to k8s as an orchestration system.
You should consider looking at oh-my-opencode (similar to Gas Town) for inspiration on how best to orchestrate agents from your central controller brain.
This looks great though, will definitely give it a try.
In case anyone is interested because "Kubernetes for agents" sounds innovative: https://medium.com/p/welcome-to-gas-town-4f25ee16dd04?source...
Also, Kubernetes and Gas Town are open source, but this is not.
Edit: the Medium link doesn't jump down to the highlighted phrase. It's "'It will be like kubernetes, but for agents,' I said."
This looks like what I want. A few questions: is it possible to have a “mayor”-type role that can start other agents, but at the same time can't access those agents' secrets or infiltrate their prompt data? The key piece I don't see is that the agent needs a tool for klaw itself, and then I have to be able to configure that appropriately.
Is there a unified human approval flow, or any kind of UI bundled with this? Maybe I missed this part.