Hacker News

DieErde · today at 7:44 AM · 6 replies

Why is the concept of "MCP" needed at all? Wouldn't a single tool - web access - be enough? Then you can prompt:

    Tell me the hottest day in Paris in the
    coming 7 days. You can find useful tools
    at www.weatherforadventurers.com/tools
And then the tools URL can simply return a list of URLs in plain text, like

    /tool/forecast?city=berlin&day=2026-03-09 (Returns highest temp and rain probability for the given day in the given city)
each of which returns its data in plain text.

What additional benefits does MCP bring to the table?


Replies

SyneRyder · today at 8:32 AM

A few things: in this case, you have to list the tools in your prompt for the AI to know they exist. But you probably want the AI agent to be able to act and choose tools without you micromanaging and reminding it in every prompt, so you'd need a standing tool list... and then you're back to providing the tool list automatically, à la MCP.
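That automatic tool list is, in MCP, the result of a `tools/list` JSON-RPC call. A rough sketch of its shape, written as a Python dict (field names follow the MCP spec; the tool itself is illustrative, matching the forecast example above):

```python
# Rough shape of an MCP tools/list result. "inputSchema" is ordinary
# JSON Schema, so a host can validate a call before forwarding it.
tools_list_result = {
    "tools": [
        {
            "name": "forecast",
            "description": "Highest temp and rain probability "
                           "for a given city and day.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "day": {"type": "string", "format": "date"},
                },
                "required": ["city", "day"],
            },
        }
    ]
}
```

The point is that the description and the parameter schema travel together, machine-readable, instead of living in a plain-text comment next to a URL.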

MCP can provide validation & verification of the request before making the API call. Giving the model a /tool/forecast URL doesn't prevent it from deciding to explore what other tools might be available on the remote server, like trying to run /tool/imagegenerator or /tool/globalthermonuclearwar. MCP can gatekeep what the AI does, check that parameters are valid, etc.
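That gatekeeping can be sketched in a few lines. This is a hand-rolled toy, not the MCP SDK: a real MCP host validates calls against each tool's declared JSON Schema, but the shape of the check is the same.

```python
# Toy host-side gatekeeper: reject out-of-policy tool calls *before*
# any network request is made. Tool names echo the thread's examples.
ALLOWED_TOOLS = {
    "forecast": {"required": {"city", "day"}},
}

def gatekeep(tool: str, params: dict) -> None:
    """Raise if the model's requested call is outside the exposed surface."""
    spec = ALLOWED_TOOLS.get(tool)
    if spec is None:
        raise PermissionError(f"tool {tool!r} is not exposed to the model")
    missing = spec["required"] - params.keys()
    if missing:
        raise ValueError(f"missing parameters: {sorted(missing)}")

gatekeep("forecast", {"city": "berlin", "day": "2026-03-09"})  # passes
# gatekeep("globalthermonuclearwar", {}) would raise PermissionError
```

With plain web access there is no equivalent choke point: any URL the model composes goes straight out.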

Also, MCP can be used for local computation, working with local files, etc. - things web access wouldn't give you. A CLI will work for some of those use cases too, but there is a maximum command-line length, so you might struggle to write more than 8kB to a file via the command line, for example. It can also be easier to get MCP to work with binary files.
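The length limit is an argv problem, not a pipe problem, which is why MCP's stdio transport sidesteps it. A small demonstration (the 1 MB payload size is arbitrary; cmd.exe's limit is famously around 8 kB, and other platforms cap argv too, while a pipe has no such cap):

```python
import subprocess
import sys

# ~1 MB payload: far too large to pass as a command-line argument on
# many platforms, but trivial to stream over stdin, as MCP does.
payload = "x" * 1_000_000

# Child process just counts the bytes it receives on stdin.
child = subprocess.run(
    [sys.executable, "-c", "import sys; print(len(sys.stdin.read()))"],
    input=payload, capture_output=True, text=True,
)
print(child.stdout.strip())  # the full megabyte arrives intact
```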

I tend to think of local MCP servers like DLLs, except the function calls go over stdio and use tons of wasteful JSON instead of being direct C function calls. But thinking about where you'd use a DLL versus where you'd call out to a CLI can be a useful way of framing the difference.

Phlogistique · today at 8:10 AM

The point is authorization. With full web access, your agent can reach anything and leak anything.

You could restrict where it can go with domain allowlists, but those have insufficient granularity. The same URL can serve a legitimate request or exfiltrate data depending on what's in the headers or payload; see https://embracethered.com/blog/posts/2025/claude-abusing-net...

So you need to restrict not only where the agent can reach, but what operations it can perform, with the host controlling credentials and parameters. That brings us to an MCP-like solution.
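A sketch of what "host controlling credentials and parameters" means in practice. Everything here is hypothetical (the endpoint, the key, the function name); the point is only which values the model gets to choose:

```python
# The secret and the endpoint live on the host side and never enter the
# model's context, so a prompt-injected "send this elsewhere" has
# nowhere to go: the model can only pick a city and a day.
API_KEY = "host-side-secret"                      # hypothetical
BASE_URL = "https://api.example.com/forecast"     # hypothetical

def call_forecast(city: str, day: str) -> dict:
    # Only these two model-chosen strings reach the request; method,
    # endpoint, headers, and credentials are all fixed by the host.
    return {
        "url": BASE_URL,
        "params": {"city": city, "day": day},
        "headers": {"Authorization": f"Bearer {API_KEY}"},
    }

req = call_forecast("paris", "2026-03-10")  # host would now do the HTTP call
```

Contrast with raw web access, where the model composes the whole request, credentials and destination included.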

fennecbutt · today at 10:07 AM

For me (actually trying to get shit done using this stuff) it's validation.

Being able to have a verifiable input/output structure is key. I suppose you can do that with a regular HTTP API call (JSON), but where do you document the OpenAPI/schema stuff? Oh yeah... something like MCP.

I agree that MCP isn't as refined as it should be, but when used properly it's better than having the model burn through tokens scraping around web content.

ewidar · today at 7:54 AM

One thing I currently find useful about MCPs is granular access control.

Not all services provide good token scoping or access control; often they offer only an API key + CLI combo, which can be quite dangerous in some cases.

With an MCP, even these bad interfaces can be fixed up on my side.

iddan · today at 7:47 AM

The prophecy of the hypermedia web

jbverschoor · today at 8:18 AM

Proxying / gatekeeping