logoalt Hacker News

space_fountainyesterday at 11:18 PM1 replyview on HN

I'm not sure that a prompt injection secure LLM is even possible anymore than a human that isn't susceptible to social engineering can exist. The issues right now are that LLMs are much more trusting than humans, and that one strategy works on a whole host of instances of the model


Replies

chrisjjyesterday at 11:32 PM

Indeed. When up against a real intelligent attacker, LLM faux intelligence fares far worse than dumb.