I wonder why everyone seems to go with Vagrant VMs rather than simple Docker containers.
See "A field guide to sandboxes for AI"¹ for the threat models.
> I want to be direct: containers are not a sufficient security boundary for hostile code. They can be hardened, and that matters. But they still share the host kernel. The failure modes I see most often are misconfiguration and kernel/runtime bugs — plus a third one that shows up in AI systems: policy leakage.
In theory, VMs have a smaller attack surface: programs inside the VM can't interact directly with the host kernel.
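For context, "hardened" at the container level usually means something like the flags below. This is a sketch of standard Docker options, not taken from the cited guide, and the image name is illustrative:

```sh
# Common docker-run hardening flags. Even with all of these applied,
# the container still shares the host kernel, which is the quote's point.
docker run --rm \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --read-only \
  --pids-limit=256 \
  --network none \
  untrusted-agent-image
```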
I find Docker containers more complex: you need a Dockerfile instead of a regular install script, the images tend to be very minimal and lack typical Linux debugging tools for the agent, and they lose their state when the container is removed or recreated.
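A sketch of the friction I mean (container, volume, and package names are just illustrative):

```sh
# Minimal images ship without the usual debugging tools, so you end
# up installing them by hand in the running container:
docker exec agent sh -c 'apt-get update && apt-get install -y strace lsof procps'

# ...and anything installed or written that way is gone once the
# container is removed or recreated, unless you bake it into the
# Dockerfile or persist it on a volume:
docker run -d --name agent -v agent-home:/home/agent my-agent-image
```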
Instead I'm using LXC containers in a VM; they're system containers with a full init system, so they look and feel like a VM.
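For anyone who hasn't used them, the workflow is roughly this (a sketch using LXD's `lxc` CLI; the container name is illustrative):

```sh
# Launch a full Ubuntu system container: it boots an init system,
# keeps its filesystem across stop/start, and you provision it with
# a regular install script rather than a Dockerfile.
lxc launch ubuntu:22.04 agent-sandbox
lxc exec agent-sandbox -- bash    # feels like ssh-ing into a VM
lxc stop agent-sandbox
lxc start agent-sandbox           # state is still there
```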
Thank you, good question! My original implementation was actually a bunch of manifests on my own microk8s cluster. I found that this meant a lot of ad-hoc adjustments for every little tweak. (Ironic, given the whole "pets vs cattle" thing.) So I started testing the changes in a VM.
Then I was talking to a security engineer at my company, who pointed out that a VM would make him feel better about the whole thing anyway. And it occurred to me: if I packaged it as a VM, then I'd get both isolation and determinism. It would be easier to install and easier to debug.
So that's why I decided to go with a Vagrant-based installation. The obvious downside is that it's now harder to integrate with external systems or to use the full power of whatever environment you deploy it in.
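Concretely, the install flow collapses to roughly this (a sketch; assumes a Vagrantfile checked into the repo):

```sh
vagrant up         # one command creates, boots, and provisions the VM
vagrant ssh        # debug inside the sandbox like a normal machine
vagrant destroy    # throw the whole environment away cleanly
```

The Vagrantfile plays the role a Dockerfile would, except the provisioning step can be a regular install script.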