logoalt Hacker News

groby_byesterday at 9:34 PM3 repliesview on HN

> We have all of the tools to prevent these agentic security vulnerabilities,

We do? What is the tool to prevent prompt injection?


Replies

alienbabyyesterday at 10:55 PM

The best I've heard is rewriting prompts as summaries before forwarding them to the underlying ai, but has it's own obvious shortcomings, and it's still possible. If harder. To get injection to work

lacunaryyesterday at 9:40 PM

more AI - 60% of the time an additional layer of AI works every time

losthobbiesyesterday at 9:52 PM

Sanitise input and LLM output.

show 1 reply