Hacker News

Pakvothe · yesterday at 10:27 PM · 2 replies

This is interesting for MCP server deployment. Right now most MCP servers run as local stdio processes. Containerizing them would address the security and isolation concerns that come up every time someone installs a third-party MCP server.

Would love to see this support stdio-to-HTTP bridging so local MCP servers can be exposed as remote ones without rewriting them.


Replies

jsunderland323 · yesterday at 10:40 PM

There are a couple of ways you can go about MCP within Coast (it also depends on what the MCP does). You can install the MCP service host-side (something like Playwright), in which case everything should just work out of the box for you.

Alternatively, you can set up Coast to install MCP services in the containers. There are some cases, such as specific logging or db MCPs, where this might make sense.

>Would love to see this support stdio-to-HTTP bridging so local MCP servers can be exposed as remote ones without rewriting them.

Are you saying that if you exposed the MCP service in the Coast and hosted it remotely, you could expose the MCP service back out remotely? That's actually a sort of interesting idea. Right now, the agents basically need to exec the MCP calls if they are running host-side and need to call an inner MCP. I hadn't considered the case of proxying the stdout to HTTP. I'll think about how best to implement that!
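For what it's worth, a minimal sketch of that stdio-to-HTTP proxy could look something like this (hypothetical class and method names; assumes the child process speaks newline-delimited JSON-RPC over stdio, one response line per request, which is the simple case for stdio MCP servers — real servers also send notifications and need proper session handling):

```python
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer


class StdioBridge:
    """Wraps a stdio child process and relays one JSON message per call."""

    def __init__(self, argv):
        # Spawn the stdio MCP server as a child process with piped streams.
        self.proc = subprocess.Popen(
            argv,
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            text=True,
        )

    def call(self, payload: dict) -> dict:
        # Write one newline-delimited JSON message to the child's stdin...
        self.proc.stdin.write(json.dumps(payload) + "\n")
        self.proc.stdin.flush()
        # ...and read back a single-line JSON response from its stdout.
        return json.loads(self.proc.stdout.readline())


class BridgeHandler(BaseHTTPRequestHandler):
    bridge = None  # set to a StdioBridge instance before serving

    def do_POST(self):
        # Forward the HTTP request body to the child and return its reply.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        reply = json.dumps(self.bridge.call(json.loads(body))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)


if __name__ == "__main__":
    # Example: bridge some stdio MCP server command to localhost:8080.
    BridgeHandler.bridge = StdioBridge(["my-mcp-server"])  # hypothetical command
    HTTPServer(("127.0.0.1", 8080), BridgeHandler).serve_forever()
```

The single-threaded read-one-line-per-request loop is the naive part; a real bridge would have to match JSON-RPC response ids to requests and stream notifications separately.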

cyanydeez · yesterday at 10:34 PM

Isn't the primary security concern with third-party MCP servers the injected context itself, not whatever sandbox the MCP server runs in? It doesn't really matter if the MCP can't do anything to its host; the problem is that it can manipulate the context to whatever ends it sees fit, which is then intractable in whatever LLM is calling it.

I'm really struggling to understand what people's threat models are with LLMs.