Hacker News

time0ut · today at 5:48 PM · 2 replies

Very interesting. I just started researching this topic yesterday to build something for an adjacent use case (sandboxing LLM-authored programs). My initial prototype uses a Wasm-based sandbox, but I want something more robust and flexible.

Some of my use cases are very latency sensitive. What sort of overhead are you seeing?


Replies

qalfy · today at 7:54 PM

Wasm sandboxes are fast for pure compute but get painful the moment LLM code needs filesystem access or subprocess spawning. And it will, constantly. Containers with seccomp filters give you near-native speed and way broader syscall support — overhead is basically startup time (~2s cold, sub-second warm). For anything IO-heavy it's not even close. We're doing throwaway containers at https://cyqle.in if anyone's curious.
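For anyone unfamiliar with seccomp filtering: a Docker-style seccomp profile is essentially a syscall allowlist. A minimal illustrative sketch (not a production-ready profile, and the exact syscall list here is an assumption) might look like:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": [
        "read", "write", "openat", "close",
        "execve", "clone", "wait4", "mmap",
        "brk", "exit_group"
      ],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Any syscall not in the allowlist fails with an errno instead of executing. You'd pass it to a container with `docker run --security-opt seccomp=profile.json ...`; a real profile needs a much longer allowlist for ordinary programs to run.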

afshinmeh · today at 6:09 PM

I ran a benchmark (Apple M5) and I'm seeing about 10 ms of overhead on average. There's a benchmark section in the repo as well: https://github.com/afshinm/zerobox?tab=readme-ov-file#perfor...
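For anyone wanting to reproduce this kind of number on their own setup, per-invocation sandbox overhead is typically measured by timing an end-to-end no-op run. A generic sketch (not zerobox's actual benchmark; `/bin/true` stands in for the sandboxed command):

```python
import subprocess
import time

def measure_overhead(cmd, runs=5):
    """Average wall-clock time for an end-to-end no-op invocation.

    Whatever `cmd` spawns (a container, a Wasm runtime, a plain
    process) contributes its startup cost to the measured time.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return sum(samples) / len(samples)

if __name__ == "__main__":
    # Swap in your sandbox's invocation to measure its overhead.
    avg = measure_overhead(["true"])
    print(f"average overhead: {avg * 1000:.1f} ms")
```

Discard the first (cold) run if you only care about warm-path latency, since image pulls and cache warming can dominate it.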

Also, I'm literally wrapping Claude with zerobox now! No latency issues at all.