> But you could totally have a tool that lets you use Claude to interrogate and organize local documents but inside a firewalled sandbox that is only able to connect to the official API.
Any such document or folder structure, if its name or contents were under control of a third party, could still inject external instructions into sandboxed Claude - for example, to force renaming/reordering files in a way that will propagate the injection to the instance outside of the sandbox, which will be looking at the folder structure later.
You cannot secure against this completely, because the very same "vulnerability" is also a feature fundamental to the task - there's no way to distinguish between a file starting a chained prompt injection to e.g. maliciously exfiltrate sensitive information from documents by surfacing them + instructions in file names, vs. a file suggesting correct organization of data in the folder, which involves renaming files based on information they contain.
You can't have the useful feature without the potential vulnerability. Such is with most things where LLMs are most useful. We need to recognize and then design around the problem, because there's no way to fully secure it other than just giving up on the feature entirely.