logoalt Hacker News

rvzlast Saturday at 6:14 PM1 replyview on HN

There is a single reason why this is happening and it is due to a flawed standard called “MCP”.

It has thrown away almost all the best security practices in software engineering and even does away with security 101 first principles to never trust user input by default.

It is the equivalent of reverting back to 1970 level of security and effectively repeating the exact mistakes but far worse.

Can’t wait for stories of exposed servers and databases with MCP servers waiting to be breached via prompt injection and data exfiltration.


Replies

simonwlast Saturday at 6:21 PM

I actually don't think MCP is to blame here. At its root MCP is a standard abstraction layer over the tool calling mechanism of modern LLMs, which solves the problem of not having to implant each tool in different ways in order to integrate with different models. That's good, and it should exist.

The problem is the very idea of giving an LLM that can be "tricked" by malicious input the ability to take actions that can cause harm if subverted by an attacker.

That's why I've been talking about prompt injection for the past three years. It's a huge barrier to securely implementing so many of the things we want to do with LLMs.

My problem with MCP is that it makes it trivial for end users to combine tools in insecure ways, because MCP affords mix-and-matching different tools.

Older approaches like ChatGPT Plugins had exactly the same problem, but mostly failed to capture the zeitgeist in the way that MCP has.

show 1 reply