LLMs read the web through a second vector as well - their training data. Simply separating security concerns in MCP is insufficient to block these attacks.
The odds of carrying out a prompt injection attack or gaining meaningful control through the training data seem low enough that we're firmly in Russell's teapot territory - extraordinary evidence is required that it is even possible, unless you suspect your LLM provider itself, in which case you have far bigger problems and no exploit of the training data is necessary.