The Unix-philosophy framing resonates — focused, composable, single-purpose agents are genuinely safer architecturally than monolithic long-lived sessions with massive context windows.
That said, composability introduces its own attack surface. When agents chain together via pipes or tool calls, each handoff is a trust boundary. A compromised upstream output becomes a prompt injection vector for the next agent in the chain.
This is one of the patterns we stress-test at audn.ai (https://audn.ai) — we do adversarial testing of AI agents and MCP tool chains. The depth-limited sub-agent delegation you mention is exactly the kind of structure where multi-step drift and argument injection can cause real damage. A malicious intermediate output can poison a downstream agent's context in ways that are really hard to audit after the fact.
The small binary / minimal deps approach is great for reducing supply chain risk. Have you thought about trust boundaries between agents when piping? Would be curious whether there's a signing or validation layer planned between agent handoffs.
wow, like 10 posts within 5 minutes, how great! love me some AI slop on HN @dang