
horsawlarway · yesterday at 2:52 PM

I'm struggling to see how this resolves the problem the author has. I still think there's value in this approach, but it feels like it's in the same vein as the built-in controls that already exist in Claude Code.

The problem with this approach (unless I'm misunderstanding - entirely possible!) is that it still blocks the agent on the first need for approval.

What I think most folks actually want (or at least what I want) is to allow the agent to explore a space, including exploring possible dead ends that require permissions/access, without stopping until the task is finished.

So if the agent is trying to "fix a server" it might suggest installing or removing a package. That suggestion blocks future progress.

Until a human comes in and says "yes - do it" or "no - try X instead" it will sit there doing nothing.

If instead it can just proceed, observe that the package doesn't resolve the issue, and continue exploring other solutions immediately, you save a whole lot of time.


Replies

corv · yesterday at 3:13 PM

You're right that blocking on every operation would defeat the purpose! Shannot can auto-approve safe operations for exactly this reason (e.g. read-only, immutable ones).

So the agent can explore freely: check logs, list files, inspect service status. It only blocks when it wants to change something (install a package, write a config, restart a service).
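
To make that concrete, here's a minimal sketch of the kind of policy I mean (the names are my own for illustration, not Shannot's actual API): commands matching a read-only allowlist run immediately, and everything else defaults to review.

```python
# Illustrative only -- these names are hypothetical, not Shannot's real API.
READ_ONLY_PREFIXES = (
    "cat", "ls", "grep", "df",
    "journalctl", "systemctl status",
)

def classify(command: str) -> str:
    """Return 'auto' for read-only commands, 'review' for anything else."""
    if command.startswith(READ_ONLY_PREFIXES):
        return "auto"   # cannot change system state, no human needed
    return "review"     # default-deny: unknown commands wait for approval

print(classify("journalctl -u nginx -n 50"))  # auto
print(classify("apt install -y nginx"))       # review
```

Default-deny is the important part: an operation the policy doesn't recognize gets treated as a mutation rather than waved through.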

Also worth noting: Shannot operates on entire scripts, not individual commands. The agent writes a complete program, the sandbox captures everything it wants to do during a dry run, then you review the whole batch at once. Claude Code's built-in controls interrupt at each command, whereas Shannot interrupts once per script, with a full picture of intent.
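
Very roughly, the flow is something like this (again a hypothetical sketch built on the same prefix allowlist; the real tool intercepts mutations inside a sandboxed dry run rather than simply deferring them as done here):

```python
# Hypothetical sketch of the "review once per script" flow -- not
# Shannot's actual implementation.
import subprocess

READ_ONLY_PREFIXES = ("cat", "ls", "grep", "df", "journalctl", "systemctl status")

def run_with_review(script_lines: list[str]) -> None:
    pending = []                                  # mutations held for review
    for line in script_lines:
        if line.startswith(READ_ONLY_PREFIXES):
            subprocess.run(line, shell=True)      # read-only: runs immediately
        else:
            pending.append(line)                  # mutating: captured, not run

    # One interruption for the whole script, with the full picture of intent.
    print("This script wants to make the following changes:")
    for line in pending:
        print("  " + line)
    if input("Apply all? [y/N] ").strip().lower() == "y":
        for line in pending:
            subprocess.run(line, shell=True)

run_with_review([
    "journalctl -u nginx -n 20 --no-pager",   # auto-approved
    "apt install -y some-package",            # held for the batch review
    "systemctl restart nginx",                # held for the batch review
])
```

The difference from per-command prompting is that the human sees all the intended mutations together, in context, instead of one dialog at a time.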

That said, you're pointing at a real limitation: if the fix genuinely requires a write to test a hypothesis, you're back to blocking. The agent can't speculatively install a package, observe it didn't help, and roll back autonomously.

For that use case, the OP's VM approach is probably better. Shannot is more suited to cases where you want changes applied to the real system but reviewed first.

Definitely food for thought though. A combined approach might be the right answer: a VM or scratch space where the agent can freely test hypotheses, then a human-in-the-loop step to apply those conclusions to the production system.
