This is begging to be turned into a YouTube-style "Real World", where you pit 12 humans against 12 AIs and they're only allowed to interact through CLIs.
Then you slowly reveal they're all humans.
This is exactly why I built Safebots: to prevent exactly these problems with agents. This article shows how it can address every agent security issue that came up in the study:
https://community.safebots.ai/t/researchers-gave-ai-agents-e...
All this to say: OpenClaw is hella insecure and unreliable?
I mean, all of us in the space already know this, but I suppose it's important to keep showcasing the problems of systems of agents.
The TL;DR is that current agents are as problematic as many of us already know they are:
> unauthorized compliance with non-owners, disclosure of sensitive information, execution of destructive system-level actions, denial-of-service conditions, uncontrolled resource consumption, identity spoofing vulnerabilities, cross-agent propagation of unsafe practices, and partial system takeover