Hacker News

salmonellaeater · last Sunday at 9:50 AM · 0 replies

If the LLM were as smart as a human, this would become a social engineering attack. Where social engineering is a possibility, all three parts of the trifecta are often removed: CSRs usually follow scripts that allow only certain types of requests (sanitizing untrusted input), don't have access to private data, and are limited in what actions they can take.

There's a solution already in use at many companies: the LLM translates the input into a standardized request that's allowed by the CSR script (here "CSR script" just means a pre-written specification of what is allowed through this interface), and the rest of the interaction follows the script as a CSR would. This of course removes the utility of plugging an LLM directly into an MCP server, but that's the tradeoff that must be made to have security.
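The key property of this pattern is that the LLM's output never reaches backend tools directly; it must first pass through a validator that only accepts an allowlisted set of request types with fixed parameters. Here's a minimal sketch of such a validator in Python. The request types, parameter names, and `validate_request` function are all hypothetical, invented for illustration; a real deployment would add schema validation per field, authentication, and rate limiting.

```python
import json

# Hypothetical allowlist: each permitted request type maps to the exact
# set of parameters it may carry. Anything the LLM emits outside this
# schema is rejected, so untrusted input never reaches backend tools.
ALLOWED_REQUESTS = {
    "check_order_status": {"order_id"},
    "update_shipping_address": {"order_id", "new_address"},
    "request_refund": {"order_id", "reason"},
}

def validate_request(llm_output: str) -> dict:
    """Parse the LLM's structured output and enforce the allowlist.

    Raises ValueError for anything that is not an exact match for a
    permitted request type with exactly the permitted parameters.
    """
    try:
        req = json.loads(llm_output)
    except json.JSONDecodeError as e:
        raise ValueError(f"not valid JSON: {e}")
    if not isinstance(req, dict):
        raise ValueError("request must be a JSON object")
    kind = req.get("type")
    if kind not in ALLOWED_REQUESTS:
        raise ValueError(f"request type not allowed: {kind!r}")
    params = req.get("params", {})
    if not isinstance(params, dict):
        raise ValueError("params must be a JSON object")
    if set(params) != ALLOWED_REQUESTS[kind]:
        raise ValueError(f"unexpected parameters for {kind}: {sorted(params)}")
    # Parameter values are treated as opaque strings downstream:
    # they are data, never instructions.
    return {"type": kind, "params": {k: str(v) for k, v in params.items()}}

# A benign, on-script request passes:
ok = validate_request(
    '{"type": "check_order_status", "params": {"order_id": "A123"}}'
)

# An injected "request" for something off-script is rejected:
try:
    validate_request('{"type": "export_all_customer_data", "params": {}}')
except ValueError as err:
    rejected = str(err)
```

The point is that the security boundary sits in plain deterministic code, not in the model: even a fully compromised LLM can only choose among the same few actions a scripted CSR could take.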