You just described how to write a tool the LLM can use. Not MCP!! MCP is basically a tool that runs in a server, so it can be written in any programming language. Which is also its problem: now each MCP tool requires its own server, with all the complications that come with it, including runtime overhead, security model fragmentation, incompatibility…
> You just described how to write a tool the LLM can use. Not MCP!! MCP is basically a tool that runs in a server, so it can be written in any programming language.
It's weird that you're saying it's not MCP when this is precisely what I've done to write several MCP servers.
You write a function, wrap it with a decorator, add one more line in __main__, and voilà: it's an MCP server.
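As a rough sketch of what that decorator is doing under the hood: it registers the function, its docstring, and its signature so the server can advertise the tool to clients. The official Python SDK's FastMCP wrapper follows this same pattern; the sketch below uses a hypothetical `tool` decorator and `TOOLS` registry (not the real SDK) so it stands alone:

```python
import inspect

TOOLS = {}  # hypothetical registry; real MCP SDKs keep one internally

def tool(func):
    """Register a plain function as a callable tool, capturing its
    name, docstring, and parameter names so a server could list it."""
    TOOLS[func.__name__] = {
        "handler": func,
        "description": func.__doc__ or "",
        "params": list(inspect.signature(func).parameters),
    }
    return func

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

def call_tool(name, **kwargs):
    """What a server's dispatch loop would do with an incoming request."""
    return TOOLS[name]["handler"](**kwargs)

print(call_tool("add", a=2, b=3))  # 5
```

The decorator changes nothing about the function itself; the "server" part is just a dispatch loop over the registry, which is why adding one line in __main__ is enough to expose it.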
> now each MCP tool requires its own server with all the complications that come with it, including runtime overhead, security model fragmentation, incompatibility…
You can lump multiple tools into one server. Personally, I think it makes sense to organize them by functionality, though.
> including runtime overhead, security model fragmentation, incompatibility…
What incompatibility?
Runtime overhead is minimal.
Security - as I said, if you write your own tools, you control it just as you would with the old tool use. Beyond that, yes, you're dependent on the wrapper library's vulnerabilities, as well as the MCP client's. We've introduced one new layer (the wrapper library), but seriously, that's like saying "Oh, you introduced Flask into our flow, that's a security concern!" Eventually the libraries will be vetted and we'll know which are secure and which aren't.