I loved that "Design Patterns for Securing LLM Agents against Prompt Injections" paper: https://simonwillison.net/2025/Jun/13/prompt-injection-desig...
I wrote up notes on one of the Google papers that blog post references, here: https://simonwillison.net/2025/Jun/15/ai-agent-security/