The problem is boundary enforcement fatigue. People become lazy; creating tight permission scopes is tedious work. People will end up using an LLM to manage the scopes given to another LLM, and so on.
100% this. Human psychology is always overlooked in these discussions; people focus on the "perfect technical solution" without considering how humans will actually end up using it. The Linux permissions scheme is a classic example: many guides advise users to keep everything as locked down as possible and expand permissions as and when required. After the 100th time of fucking around with chmod, users often give up and just make everything 777. If there were a user-friendly (but imperfect) method (like Windows' UAC), people would actually use it, and be far safer in the long run.
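A minimal sketch of the escalation pattern being described (`deploy.sh` is a made-up filename, and the sequence of requests is hypothetical):

```shell
# Hypothetical permission-fatigue progression on a shared script.
touch deploy.sh
chmod 700 deploy.sh   # rwx for owner only -- the "locked down" starting point
chmod 750 deploy.sh   # a teammate needs to run it: add group read/execute
chmod 754 deploy.sh   # a service needs to read it: add world read
chmod 777 deploy.sh   # the fatigue endpoint: everyone gets everything
```

Each step is individually reasonable; the cumulative effect is that the narrow-scope discipline erodes until the last command wipes it out entirely.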
> creating tight permission scopes is tedious work
I have a feeling this kind of boundary configuration is the bread and butter of the current AI software landscape.
Once we figure out how to make this tedious work easier, a lot of new use cases will get unlocked.