How do you prevent these models from reading secrets in your repos locally?
It’s one thing for env vars to be user-pasted, but typically you’re also giving the bots access to your file system so they can read and understand your projects, right? Does this also block access to env files by detecting them and applying granular permissions?
By putting secrets in your environment instead of in your files, and by running AI tools in a dedicated environment that has its own limited, revocable secrets.
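As a minimal sketch of the first half of that: the secret lives only in the shell environment, so there is nothing in the repo’s files for an agent to read. The variable name and placeholder value here are hypothetical; a real setup would export a scoped key from a secret manager.

```shell
# Sketch: secrets live in the environment, not in files the agent can read.
# MYAPP_API_KEY is a hypothetical name; the value would come from a secret
# manager or a sandbox-specific, revocable credential in practice.
export MYAPP_API_KEY="sk-demo-not-a-real-key"

# The tool consumes the key from the environment at call time and fails
# loudly if it is missing -- no .env file ever lands on disk.
use_key() {
  : "${MYAPP_API_KEY:?MYAPP_API_KEY must be set}"
  printf 'key length: %s\n' "${#MYAPP_API_KEY}"
}
use_key
```

Because the key is only in the process environment, revoking it (or starting the agent without it) requires no changes to the working tree.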
I configure permission settings within projects.
https://code.claude.com/docs/en/settings#permission-settings
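For anyone curious what that looks like in practice, here is a sketch of a project-level `.claude/settings.json` using the deny rules from the docs linked above (the exact paths are examples, not a complete policy):

```json
{
  "permissions": {
    "deny": [
      "Read(./.env)",
      "Read(./.env.*)",
      "Read(./secrets/**)"
    ]
  }
}
```

Deny rules take precedence over allow rules, so the agent can still browse the rest of the tree while those paths stay off limits.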