I actually don't think MCP is to blame here. At its root MCP is a standard abstraction layer over the tool calling mechanism of modern LLMs, which solves the problem of not having to implant each tool in different ways in order to integrate with different models. That's good, and it should exist.
The problem is the very idea of giving an LLM that can be "tricked" by malicious input the ability to take actions that can cause harm if subverted by an attacker.
That's why I've been talking about prompt injection for the past three years. It's a huge barrier to securely implementing so many of the things we want to do with LLMs.
My problem with MCP is that it makes it trivial for end users to combine tools in insecure ways, because MCP affords mix-and-matching different tools.
Older approaches like ChatGPT Plugins had exactly the same problem, but mostly failed to capture the zeitgeist in the way that MCP has.
Isn't that a bit like saying object-linking and embedding or visual basic macros weren't to blame in the terrible state of security in Microsoft desktop software in prior decades?
They were solving a similar integration problem. But, in exactly the same way, almost all naive and obvious use of them would lead to similar security nightmares. Users are always taking "data" from low trust zones and pushing them into tools not prepared to handle malignant inputs. It is nearly human nature that it will be misused.
I think this whole pattern of undisciplined system building needs some "attractive nuisance" treatment at a legal and fiscal liability level... the bad karma needs to flow further back from the foolish users to the foolish tool makers and distributors!