RBAC doesn't help. Prompt injection is when someone who is authorized causes the LLM to access external data that's needed for their query, and that external data contains something intended to provoke a response from the LLM.
Even if you prevent the LLM from accessing external data - e.g. no web requests - it doesn't stop an authorized user, who may not understand the risks, from pasting or uploading some external data to the LLM.
There's currently no known solution to this. All that can be done is mitigation, and that's inevitably riddled with holes which are easily exploited.
RBAC doesn't help. Prompt injection is when someone who is authorized causes the LLM to access external data that's needed for their query, and that external data contains something intended to provoke a response from the LLM.
Even if you prevent the LLM from accessing external data - e.g. no web requests - it doesn't stop an authorized user, who may not understand the risks, from pasting or uploading some external data to the LLM.
There's currently no known solution to this. All that can be done is mitigation, and that's inevitably riddled with holes which are easily exploited.
See https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/