Hacker News

Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents

64 points by souvik1997 today at 2:34 PM | 36 comments

WASM sandbox for running LLM-generated code safely.

Agents get a bash-like shell and can only call tools you provide, with constraints you define. No Docker, no subprocess, no SaaS — just `pip install amla-sandbox`.
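The model described above can be illustrated with a minimal sketch. This is my own conceptual code, not the real amla-sandbox API: `Sandbox`, `register_tool`, and `ToolDenied` are hypothetical names chosen to show the idea of host-registered tools with host-defined constraints.

```python
# Conceptual sketch (NOT the real amla-sandbox API): the agent can only
# invoke tools the host registered, and every call is checked against a
# host-defined constraint before it runs.
class ToolDenied(Exception):
    pass

class Sandbox:
    def __init__(self):
        self._tools = {}  # name -> (func, validator)

    def register_tool(self, name, func, validator=lambda **kw: True):
        """Expose a tool to the agent, gated by a constraint predicate."""
        self._tools[name] = (func, validator)

    def call(self, name, **kwargs):
        """Dispatch an agent tool call; unknown or rejected calls fail."""
        if name not in self._tools:
            raise ToolDenied(f"unknown tool: {name}")
        func, validator = self._tools[name]
        if not validator(**kwargs):
            raise ToolDenied(f"constraint rejected call to {name}")
        return func(**kwargs)

sandbox = Sandbox()
# Only allow reads under /data; any other path is rejected.
sandbox.register_tool(
    "read_file",
    lambda path: f"<contents of {path}>",
    validator=lambda path: path.startswith("/data/"),
)

print(sandbox.call("read_file", path="/data/report.txt"))
```

A call like `sandbox.call("read_file", path="/etc/passwd")` would raise `ToolDenied`, which is the point: even prompt-injected code can only reach what the host explicitly granted.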


Comments

rellfy today at 5:55 PM

I really like the capability enforcement model, it's a great concept. One thing this discussion is missing though is the ecosystem layer. Sandboxing solves execution safety, but there's a parallel problem: how do agents discover and compose tools portably across frameworks? Right now every framework has its own tool format and registry (or none at all). WASM's component model actually solves this — you get typed interfaces (WIT), language interop, and composability for free. I've been building a registry and runtime (also based on wasmtime!) for this: components written in any language, published to a shared registry, runnable locally or in the cloud. Sandboxes like amla-sandbox could be a consumer of these components. https://asterai.io/why

simonw today at 5:36 PM

This project looks very cool - I've been trying to build something similar in a few different ways (https://github.com/simonw/denobox is my most recent attempt) but this is way ahead of where I've got, especially given its support for shell scripting.

I'm sad about this bit though:

> Python code is MIT. The WASM binary is proprietary—you can use it with this package but can't extract or redistribute it separately.

syrusakbary today at 3:10 PM

This is great!

While I think their current choice of runtime will hit some limitations (e.g. not really full Python support, partial JS support), I strongly believe using Wasm for sandboxing is the way forward for containers.

At Wasmer we are working hard to make this model work. I'm incredibly happy to see more people joining the quest!

quantummagic today at 3:05 PM

Sure, but every tool that you provide access to is a potential escape hatch from the sandbox. It's safer to run everything inside the sandbox, including the called tools.

vimota today at 4:26 PM

Sharing our version of this built on just-bash, AgentFS, and Pyodide: https://github.com/coplane/localsandbox

One nice thing about using AgentFS as the VFS is that it's backed by SQLite, so it's very portable, making it easy to fork and resume agent workflows across machines and time.

I really like Amla Sandbox's addition of injecting tool calls into the sandbox, which lets agent-generated code interact with the harness-provided tools. Very interesting!

sd2k today at 4:07 PM

Cool to see more projects in this space! I think Wasm is a great way to do secure sandboxing here. How does Amla handle commands like grep/jq/curl etc which make AI agents so effective at bash but require recompilation to WASI (which is kinda impractical for so many projects)?

I've been working on a couple of things which take a very similar approach, with what seem to be some different tradeoffs:

- eryx [1], which uses a WASI build of CPython to provide a true Python sandbox (similar to componentize-py, but supports some form of 'dynamic linking' with either pure Python packages or WASI-compiled native wheels)

- conch [2], which embeds the `brush` Rust reimplementation of Bash to provide a similar bash sandbox. This is where I've been struggling to figure out the best way to do subcommands; right now they just have to be rewritten and compiled in, but I'd like to find a way to dynamically link them in, similar to the Python package approach...

One other note, WASI's VFS support has been great, I just wish there was more progress on `wasi-tls`, it's tricky to get network access working otherwise...

[1] https://github.com/eryx-org/eryx [2] https://github.com/sd2k/conch

sibellavia today at 4:37 PM

I had the same idea, forcing the agent to execute code inside a WASM instance, and I've developed a few proof of concepts over the past few weeks. The latest solution I adopted was to provide a WASM instance as a sandbox and use MCP to supply the tool calls to the agent. However, it hasn't seemed flexible enough for all use cases to me. On top of that, there's also the issue of supporting the various possible runtimes.

asyncadventure today at 4:01 PM

Really appreciate the pragmatic approach here. The 11MB vs 173MB difference with agentvm highlights an important tradeoff: sometimes you don't need full Linux compatibility if you can constrain the problem space well enough. The tool-calling validation layer seems like the sweet spot between safety and practical deployment.

messh today at 5:36 PM

Docker and VMs are not the only options though... you can use bubblewrap and equivalents for macOS.

benatkin today at 5:14 PM

The readme exaggerates the threat of agents shelling out and glosses over a serious drawback of its own. On the shelling-out side, it says "One prompt injection and you're done." Well, you can run a lot of these agents in a container, and I do. So maybe you're not "done". It's also rare enough that this warning exaggerates: Claude Code has a yolo mode, and outside of that it has a pretty good permission system. On glossing over the drawback: "The WASM binary is proprietary—you can use it with this package but can't extract or redistribute it separately." And who is Amla Labs? FWIW the first commit is in 2026 and the license is in 2025.

turnsout today at 5:00 PM

This is really awesome. I want to give my agent access to basic coding tools to do text manipulation, add up numbers, etc., but I want to keep a tight lid on it. This seems like a great way to add that functionality!

behnamoh today at 5:03 PM

> What you don't get: ...GPU access...

So no local models are supported.

muktharbuilds today at 5:23 PM

That's a great one, I am definitely using this.

westurner today at 2:48 PM

From the README:

> Security model

> The sandbox runs inside WebAssembly with WASI for a minimal syscall interface. WASM provides memory isolation by design—linear memory is bounds-checked, and there's no way to escape to the host address space. The wasmtime runtime we use is built with defense-in-depth and has been formally verified for memory safety.

> On top of WASM isolation, every tool call goes through capability validation: [...]

> The design draws from capability-based security as implemented in systems like seL4—access is explicitly granted, not implicitly available. Agents don't get ambient authority just because they're running in your process.
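The "explicit grant, no ambient authority" model quoted above can be sketched in a few lines. This is my own illustration of the capability-security idea, not amla-sandbox code; `Capability` and `Agent` are hypothetical names.

```python
# Minimal sketch of capability-based access (my own illustration, not
# amla-sandbox code): an agent holds only the capabilities it was granted,
# and anything not explicitly granted is denied by default.
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    resource: str   # e.g. "fs:/data"
    action: str     # e.g. "read"

class Agent:
    def __init__(self, grants):
        # The grant set is fixed at construction: no ambient authority.
        self._grants = frozenset(grants)

    def has(self, resource, action):
        return Capability(resource, action) in self._grants

agent = Agent({Capability("fs:/data", "read")})
assert agent.has("fs:/data", "read")                 # explicitly granted
assert not agent.has("fs:/data", "write")            # never granted -> denied
assert not agent.has("net:example.com", "connect")   # no ambient network access
```

The contrast with ambient authority is the default: a process normally inherits everything its user can do, whereas here the agent starts with nothing and each permission is an explicit, inspectable grant.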
