I'm after this too.
I want to have a "container" (used in the conceptual sense here - I'm aware of the differences between container and other solutions) that I can let an AI agent run commands in but is safely sandboxed from the rest of my computer.
For me this is primarily file access. I don't want it inadvertently deleting the wrong things or reading my SSH keys.
But the way the agent uses it is important too. It generally issues the commands it wants to run as strings, e.g.:

`ls`

`sed -i 's/old_string/new_string/g' filename.py`
I need a way to run these in the "container". I can `ssh command`, but I'm open to other options too.
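For what it's worth, here's a minimal Python sketch of that approach using `docker exec` (the container name `agent-sandbox` and the 60-second timeout are my assumptions, not anything standard; substitute `ssh` or whatever backend you prefer):

```python
import subprocess

def build_exec_argv(command: str, container: str = "agent-sandbox") -> list[str]:
    # Wrap the agent's raw command string in `sh -c` *inside* the
    # container, so pipes/redirects still work but only touch the
    # sandbox filesystem, never the host's.
    return ["docker", "exec", container, "sh", "-c", command]

def run_in_container(command: str) -> subprocess.CompletedProcess:
    # capture_output/text so the agent gets stdout/stderr back as strings
    return subprocess.run(build_exec_argv(command),
                          capture_output=True, text=True, timeout=60)
```

The point of `sh -c` inside the container is that the agent's command string is interpreted by the sandbox's shell, not the host's.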
If you provide your own functions/tools to the AI agent, wouldn't that let you do exactly that?
i.e. "Here AI, call this function -> local_exec(command_name, {param1, param2, [etc]})" to execute commands.
And you'd wire up your local_exec() function to run the command in the container however you choose. (chroot, namespace, ssh to something remote, etc)