> I don’t think there is a solution.
Sandboxing. The LLM shouldn't be able to run actions that affect anything outside of your project directory, and ideally nothing should auto-commit outside of that directory. Then you can yolo as much as you want.
The danger is that the people most likely to try to use it are also the people most likely to misunderstand or anthropomorphize it, and to lack the requisite technical background.
I.e. this is just not safe, period.
"I stuck it outside the sandbox because it told me how, and it murdered my dog!"
Seems like a somewhat inevitable result of trying to misapply this particular control to it...
I've been using bubblewrap for sandboxing my command-line executables, though I admit I haven't recently researched whether there's a newer way people are handling this. It seems Firejail is popular for GUI apps? How would you recommend sandboxing, say, Zed or Cursor?
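For reference, here's roughly what my bubblewrap setup looks like: read-only system directories, write access only to the current project, and no network. This is a sketch, not a drop-in config -- the bind paths vary by distro, and the `TOOL` variable is just a placeholder for whatever executable you want to confine. The script prints the assembled command instead of executing it, so you can inspect it first.

```shell
#!/bin/sh
# Build a bubblewrap invocation that confines a tool to the current
# project directory. Adjust the --ro-bind paths for your distro.
set -- bwrap \
  --ro-bind /usr /usr \
  --ro-bind /lib /lib \
  --dev /dev \
  --proc /proc \
  --tmpfs /tmp \
  --bind "$PWD" "$PWD" \
  --chdir "$PWD" \
  --unshare-all \
  --die-with-parent \
  "${TOOL:-sh}"

# Print the command so it can be reviewed before running it for real.
printf '%s ' "$@"; echo
# To actually run it, replace the printf with: exec "$@"
```

`--unshare-all` drops network access along with the other namespaces; if the tool genuinely needs the network you'd swap it for the individual `--unshare-*` flags and keep DNS working with an extra read-only bind.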
If they're that unsafe... why use them? It's insane to me that we are all just packaging up these token generators and selling them as highly advanced products when they are demonstrably not suited to the tasks. Tech has entered its quackery phase.