Along with burning tokens, the way MCP servers are run and managed is wasteful of resources. Running a whole Docker container just so some model can call a single API? Want to call a small CLI utility? People say to spin up another Docker container for that.
Feels like a monolith would be better
I don’t think running these commands in a Docker container is the standard way of doing this; I’ve seen “npx” et al. used far more often.
Furthermore, the “docker” part wouldn’t even be the most resource-wasteful piece once you factor in the general computational cost of LLMs.
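For what it’s worth, the usual stdio setup is just a child process, no container anywhere. A rough sketch with the official TypeScript SDK (@modelcontextprotocol/sdk; import paths from memory, so they may differ by version):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server as a plain child process via npx -- no Docker involved.
// server-filesystem is one of the reference servers.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Tool discovery happens over stdio, same as with any other transport.
console.log(await client.listTools());
```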
The selling point of MCP servers is that they are composable and plug into any AI agent. A monolith doesn’t achieve that, unless I’m misunderstanding things.
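The server side only has to speak the protocol; any MCP-capable host can then connect to it unchanged. A minimal sketch, again with the TypeScript SDK (the tool name and logic are made up):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "weather", version: "1.0.0" });

// One tool; any MCP client (Claude Desktop, an IDE agent, a custom host)
// can discover and call it without code changes on either side.
server.tool(
  "get_forecast",
  "Get a short weather forecast for a city",
  { city: z.string() },
  async ({ city }) => ({
    content: [{ type: "text", text: `Forecast for ${city}: sunny (stub)` }],
  })
);

await server.connect(new StdioServerTransport());
```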
What I find annoying is how unpredictable it is when exactly an LLM will actually invoke an MCP tool function. Different LLM providers’ models behave differently, and even within a single provider, different models behave differently.
E.g. it’s surprisingly difficult to get an AI agent to actually use a language server to retrieve relevant information about source code, and it’s even harder to write prompts for all the language server functions that work reliably across all models.
And I guess that’s because of the fuzzy nature of it all.
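Concretely, all you can really tune is the tool’s name/description/schema: the host hands that text to the provider’s function-calling API and the model decides on its own whether to bite. A rough sketch of what the model actually sees, using Anthropic’s messages API (the find_references tool is hypothetical):

```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from env

// An MCP tool re-expressed in Anthropic's function-calling format.
// This name/description/schema text is ALL the model sees; whether it
// ever calls the tool depends entirely on that model's tool-use training.
const response = await anthropic.messages.create({
  model: "claude-3-5-sonnet-latest",
  max_tokens: 1024,
  tools: [
    {
      name: "find_references",
      description:
        "Find all references to a source-code symbol using the language server. " +
        "Use this before answering questions about where a symbol is used.",
      input_schema: {
        type: "object",
        properties: { symbol: { type: "string" } },
        required: ["symbol"],
      },
    },
  ],
  messages: [{ role: "user", content: "Where is `parseConfig` used?" }],
});

// The model may or may not emit a tool_use block here -- that's exactly
// the unpredictability being complained about above.
console.log(response.content);
```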
I’m waiting to see how this all matures; I have the highest expectations of Anthropic here. OpenAI seems to be doing its own thing (although ChatGPT will supposedly get MCP support soon). Google’s models appear to be the most eager to actually invoke MCP functions, but they invoke them far too often, which wastes a lot of context on token noise.
Remote MCPs should resolve some of this
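With a remote server the host just connects to a URL instead of spawning anything locally. Roughly, with the TypeScript SDK’s Streamable HTTP transport (a sketch; the URL is hypothetical and the import path is from memory):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Nothing runs on the local machine: no container, no npx, no child process.
const transport = new StreamableHTTPClientTransport(
  new URL("https://mcp.example.com/mcp")
);

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);
console.log(await client.listTools());
```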
A "whole Docker container" is not very heavyweight. Other than having their own filesystem view and separate shared libraries, container processes are nearly as light as non-container processes. It's not like running a VM.