> We have all of the tools to prevent these agentic security vulnerabilities,
We do? What is the tool to prevent prompt injection?
The best I've heard is rewriting prompts as summaries before forwarding them to the underlying ai, but has it's own obvious shortcomings, and it's still possible. If harder. To get injection to work
more AI - 60% of the time an additional layer of AI works every time
Sanitise input and LLM output.
The best I've heard is rewriting prompts as summaries before forwarding them to the underlying ai, but has it's own obvious shortcomings, and it's still possible. If harder. To get injection to work