Completely agree. As soon as I had OpenClaw working, I realized that actually giving it access to anything was a nonstarter after all the stories about agents going off the rails due to context limitations [1]. I've been building a self-hosted, open-source tool that tries to address this by using an LLM to police the agent's activity. Having the inmates run the asylum (an LLM policing another LLM) seemed like an odd idea, but I've been surprised by how effective it's been. You can check it out here if you're curious: https://github.com/clawvisor/clawvisor (clawvisor.com)
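The core pattern is simple to sketch: a gate sits between the agent and its tools, and a second model renders a verdict on each proposed action before anything executes. This is just an illustrative sketch with hypothetical names (`make_supervisor`, `simple_judge`), not clawvisor's actual API; in practice the judge would be a real LLM call with a policy prompt:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    allowed: bool
    reason: str

def make_supervisor(judge: Callable[[str], Verdict]):
    """Wrap an agent's tool executor so every action is reviewed first."""
    def supervised_execute(action: str, execute: Callable[[str], str]) -> str:
        verdict = judge(action)
        if not verdict.allowed:
            # Refuse to run the action; surface the reason to the agent loop.
            return f"BLOCKED: {verdict.reason}"
        return execute(action)
    return supervised_execute

# Stand-in judge: in the real system this would prompt a second LLM with
# the proposed action plus a policy, and parse its answer into a Verdict.
def simple_judge(action: str) -> Verdict:
    if "rm -rf" in action or "DROP TABLE" in action:
        return Verdict(False, "destructive command")
    return Verdict(True, "ok")

supervise = make_supervisor(simple_judge)
print(supervise("ls /tmp", lambda a: f"ran: {a}"))   # ran: ls /tmp
print(supervise("rm -rf /", lambda a: f"ran: {a}"))  # BLOCKED: destructive command
```

The nice property of keeping the judge separate is that it only ever sees one proposed action at a time, so it isn't subject to the same context-window degradation as the agent it's watching.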
[1] https://www.tomshardware.com/tech-industry/artificial-intell...