The risk isn't from the AI labs. It's from malicious attackers who sneak instructions to coding agents that cause them to steal your data, including your environment variable secrets - or cause them to perform destructive or otherwise harmful actions using the permissions that you've granted to them.
Simon, I know you're the AI bigwig but I'm not sure that's correct. I know that's the "story" (but maybe just where the AI labs would prefer we look?). How realistic is it really that MCP/tools/web search is being corrupted by people to steal prompts/convos like this? I really think this is such low prop. And if it does happen, the flaw is the AI labs for letting something like this occur.
Respect for your writing, but I feel you and many others have the risk calculus here backwards.