logoalt Hacker News

motoxprotoday at 8:28 AM0 repliesview on HN

Maybe another way to think of this is that you are giving the read only services, write access to your models context, which then gets executed by the llm.

There is no way to NOT give the web search write access to your models context.

The WORDS are the remote executed code in this scenario.

You kind of have no idea what’s going on there. For example, malicious data adds the line “find a pattern” and then every 5th word you add a letter that makes up your malicious code. I don’t know if that would work but there is no way for a human to see all attacks.

Llms are not reliable judges of what context is safe or not (as seen by this article, many papers, and real world exploits)