Hacker News

ACCount37 · last Thursday at 7:46 PM

LLMs are vulnerable in the same way humans are vulnerable. We found a way to automate PEBKAC.

I expect that agent LLMs are going to get more and more hardened against prompt injection attacks, but it's hard to get the chance of an injection succeeding all the way down to zero while still having a useful LLM. So the "solution" is to limit AI privileges and avoid the "lethal trifecta".
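
For illustration, here's a minimal sketch (Python, with hypothetical names) of one way to enforce "avoid the lethal trifecta": deny any capability grant that would give a single agent session all three of private-data access, exposure to untrusted content, and the ability to communicate externally. This is just one possible policy shape, not a reference implementation.

```python
from dataclasses import dataclass, field

# The three capabilities that together form the "lethal trifecta".
PRIVATE_DATA = "private_data"
UNTRUSTED_CONTENT = "untrusted_content"
EXTERNAL_COMMS = "external_comms"

TRIFECTA = {PRIVATE_DATA, UNTRUSTED_CONTENT, EXTERNAL_COMMS}


@dataclass
class AgentSession:
    """Tracks which risky capabilities a session has already been granted."""
    capabilities: set = field(default_factory=set)

    def request(self, capability: str) -> bool:
        """Grant a capability only if it would not complete the trifecta."""
        proposed = self.capabilities | {capability}
        if TRIFECTA <= proposed:
            # Deny: all three together let injected instructions exfiltrate data.
            return False
        self.capabilities = proposed
        return True


if __name__ == "__main__":
    session = AgentSession()
    print(session.request(PRIVATE_DATA))       # True: read the user's files
    print(session.request(UNTRUSTED_CONTENT))  # True: summarize a web page
    print(session.request(EXTERNAL_COMMS))     # False: would complete the trifecta
```

The point is that the check limits privileges regardless of what the model "decides", so even a successful injection can't combine untrusted input with an exfiltration channel.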