I wrote a short position paper arguing that current agentic AI safety failures are the confused deputy problem on repeat. We are handing agents ambient authority and trying to contain it with soft constraints like prompts and userland wrappers. My take: you need hard, reduce-only authority enforced at a real boundary (kernel/control-plane class), not something bypassable from userland. Curious how others are modeling this. What constraints do you think are truly non-negotiable?
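To make "reduce-only" concrete, here's a minimal sketch (the names `Capability`, `mint`, and `restrict` are hypothetical, not from the paper): the type's only derivation operation is subsetting, so authority can shrink but never grow. This only demonstrates the shape of the invariant; as argued above, it would have to be enforced behind a kernel or control-plane boundary, since inside the agent's own process nothing stops it from just minting a new struct.

```rust
use std::collections::BTreeSet;

/// Hypothetical reduce-only capability: the only way to derive a new
/// capability is to narrow an existing one. No widening API exists,
/// so the holder (the agent) cannot escalate its own authority.
#[derive(Debug, Clone)]
struct Capability {
    scopes: BTreeSet<String>, // e.g. "fs:read", "net:out"
}

impl Capability {
    /// Root capability, minted once at the trusted boundary,
    /// never by the agent itself.
    fn mint(scopes: &[&str]) -> Self {
        Capability {
            scopes: scopes.iter().map(|s| s.to_string()).collect(),
        }
    }

    /// The sole derivation: keep a subset of existing scopes.
    /// Consumes `self`, so the broader capability can be dropped.
    fn restrict(self, keep: &[&str]) -> Self {
        let keep: BTreeSet<String> =
            keep.iter().map(|s| s.to_string()).collect();
        Capability {
            scopes: self.scopes.intersection(&keep).cloned().collect(),
        }
    }

    fn allows(&self, scope: &str) -> bool {
        self.scopes.contains(scope)
    }
}

fn main() {
    let root = Capability::mint(&["fs:read", "fs:write", "net:out"]);
    // Hand the agent a narrowed capability; it can narrow further,
    // but "net:out" is gone for good on this derivation chain.
    let agent_cap = root.restrict(&["fs:read"]);
    assert!(agent_cap.allows("fs:read"));
    assert!(!agent_cap.allows("net:out"));
}
```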
Was this written with an LLM? If so, please add a note about it at the start of the README.
People want convenience more than they want security. No one wants permission grants that expire in minutes or hours. Every time the agent is stopped by a permission check, the average user's experience gets a little worse.