Hacker News

umairnadeem123 · yesterday at 8:23 PM · 17 replies

> I tried to avoid writing this for a long time, but I'm convinced MCP provides no real-world benefit

IMO this is 100% correct and I'm glad someone finally said it. I run AI agents that control my entire dev workflow through shell commands and they are shockingly good at it. The agent figures out CLI flags it has never seen before just from --help output. Meanwhile, every MCP server I've used has been a flaky process that needs babysitting.

The composability argument is the one that should end this debate, tbh. You can pipe CLI output through jq, grep it, redirect it to files - try doing that with MCP. You can't. You're stuck with whatever the MCP server decided to return, and if it's too verbose you're burning tokens for nothing.
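The composition point can be made concrete with nothing but standard tools (the log contents and filenames here are made up; jq works the same way on JSON output):

```shell
# Any CLI output composes with the rest of the shell: filter it,
# save it, count it -- all before a single token reaches the model.
printf 'ERROR db timeout\nINFO ok\nERROR cache miss\n' > build.log
grep '^ERROR' build.log | tee errors.log   # keep only what matters
wc -l < errors.log                         # only 2 lines survive
```

None of this is possible with an MCP tool result, which arrives as one opaque blob in the context window.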

> companies scrambled to ship MCP servers as proof they were "AI first"

FWIW this is the real story. MCP adoption is a marketing signal, not a technical one. 242% growth in MCP servers means nothing if most of them are worse than the CLIs that already existed.


Replies

rprend · today at 3:15 AM

MCP blew up in 2024, before terminal agents (Claude Code) blew up in early 2025. The story isn't "MCP was a fake marketing thing pushed on us". It's a story of how quickly the meta evolves. These frameworks are discovered!

notepad0x90 · today at 1:02 AM

Strongly disagree, despite that meaning I'm swimming upstream here.

Unlike CLI flags, with MCP I can tune the tool's description more easily (for my own MCPs at least) than I can a CLI flag's help text. You can only put so much in a --help output. The error handling and debuggability are also nicer.

Heck, I would even favor writing an MCP tool to wrap CLI commands. It's easier for me to ensure dangerous flags or parameters aren't used, and that concrete restrictions and checks are in place. If you control the CLI tools it isn't as bad, but if you don't, and it isn't a well-known CLI tool, the agent might need vague errors explained to it a bit.
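A minimal sketch of that wrapper idea in plain shell (the deny list is illustrative; a real MCP tool would apply the same check server-side before shelling out):

```shell
# safe_rm: the agent calls this instead of rm directly.
# Any flag on the deny list is refused before rm ever runs.
safe_rm() {
  for arg in "$@"; do
    case "$arg" in
      -r|-R|-rf|-fr|--recursive|--no-preserve-root)
        echo "refused: dangerous flag: $arg" >&2
        return 1 ;;
    esac
  done
  rm -- "$@"
}
```

The same pattern extends to any CLI: the wrapper, not the agent's judgment, is what enforces the restriction.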

MCP is more like "REST" or "gRPC": at the simplest level, just think of it as a wrapper.

You mentioned redirecting to files - if the output is too large, you'll still burn tokens that way. But with MCP, if the output is too large you can count the tokens and enforce a limit, or better yet, paginate: the agent gets some results, sees how many there are in total, and either re-runs the tool with parameters that yield fewer results or consumes them page by page.
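The pagination idea sketched as a plain function (an MCP tool would return the same shape - a page plus a total count - as structured JSON; the file and sizes here are made up):

```shell
# paginate: return one page of results plus the total count, so the
# caller can decide whether to fetch the next page or narrow the query.
paginate() {
  # usage: paginate page_number page_size file
  page=$1; size=$2; file=$3
  total=$(wc -l < "$file")
  start=$(( (page - 1) * size + 1 ))
  echo "total: $total"
  sed -n "${start},$((start + size - 1))p" "$file"
}
```

The agent sees "total: 5000" on page one and can choose to refine its parameters instead of pulling every page through the context window.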

kaydub · yesterday at 9:41 PM

I avoid most MCPs. They tend to take more context than getting the LLM to script and ingest outputs. Trying to use the Jira MCP was a mess; it was way better to have the LLM hit the API, figure out our custom schemas, then write a couple of scripts to do exactly what I need. Now those scripts are reusable, with way less context used.

I don't know - to me it seems like the LLM CLI tools are the current pinnacle. All the LLM companies are throwing a ton of shit at the wall to see what else they can get to stick.

3371 · today at 9:44 AM

I'll just disagree with an example: Codex on Windows.

It's known to be very inefficient when using only PowerShell to interact with files, unless it's run in WSL. It tends to make mistakes and has to retry with different commands.

Another example is Serena. I've known about it since the first day I tried out MCP but didn't appreciate it then; trying it again recently in IDEs showed impressive results. The symbolic tools are very efficient and help the agents a lot.

jptoor · today at 4:18 AM

Interestingly, I think I just came to the opposite conclusion after building CLIs + MCPs for code.deepline.com.

Where MCPs fit in: the short answer is enterprise auth for non-technical users.

CLIs (or APIs + Skills) are easier and faster to set up, and the UX is better for most use cases, but MCP gives you a generalized API with an easier auth UX in some cases (though the MCP OAuth flow is often flaky too).

So it feels like an imperfect solution, but once you start doing a ton of enterprise auth setups, MCP starts to make more sense.

binsquare · yesterday at 8:35 PM

Fully agree.

MCP servers were also created at a time when AI and LLMs were less developed and capable in many ways.

It always seemed weird that we'd want to post-train on MCP servers when I'm sure we have a lot of data on using CLI and shell commands to improve tool calling.

femiagbabiaka · yesterday at 9:37 PM

How do you segregate the CLI interface the LLM sees from the one a human sees? For example, if you'd like the LLM to only have access to read but not write data. One obvious fix is to put this at the authz layer, but it can be more ergonomic to use MCP in this case.
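One cheap pattern short of a real authz layer is to hand the agent a wrapper that whitelists read-only subcommands; the git subcommand list here is just an example:

```shell
# git_ro: the only git entry point the agent is given.
# Read subcommands pass through; anything that writes is refused.
git_ro() {
  case "$1" in
    log|show|diff|status|blame|grep)
      git "$@" ;;
    *)
      echo "read-only wrapper: 'git $1' not allowed" >&2
      return 1 ;;
  esac
}
```

This only works if the agent can't reach the underlying binary directly, which is exactly where an authz layer (or an MCP server holding the credentials) is the more robust answer.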

closeparen · today at 5:22 AM

We have a lot of tools (starting with the internal wiki) which are normally only exposed to engineers through web interfaces; MCPs make them available to terminal agents to use autonomously. This can get really interesting with e.g. giving Claude access to query logs and metrics to debug a production issue.

It is obnoxious that MCP results always go directly into the context window. I'd prefer to dump a large payload into Claude's filesystem and let him figure it out from there. But some of the places MCPs can be used don't even have filesystems.

bonoboTP · today at 1:22 AM

Even if the help isn't great, good coding agents can try out the CLI for a few minutes and write up a skill, or read the sources or online docs and write one up. That takes the place of --help if needed. I've found I can save quite a lot of time: I don't have to type up how to use a tool, because if there's information about it on the web, in man pages, in help output, or in available source, the agent can figure it out. I've had Claude download the sources of ffmpeg and Blender to get a deeper understanding of things that aren't well documented. Recent LLMs are great at quickly finding where a feature is implemented, working out how it behaves from the code, testing the hypothesis, writing it up so it's not lost, and moving on with the work with much more grounding and less guessing.

fny · yesterday at 10:40 PM

I hate MCP servers

That said, the core argument for MCP servers is providing an LLM a guard-railed API around some enterprise service. A Gmail integration is a great example. Without MCP, you need a VM as scratch space, some way to refresh OAuth, and some way to prevent your LLM from doing insane things like deleting half of your emails. An MCP server built by a trusted provider solves all of these problems.

But that's not what happened.

Developers and Anthropic got coked up about the whole thing and extended the concept to nuts and bolts. I always found the example servers useless and hilarious.[0] Unbelievably, they're still maintained.

[0]: https://github.com/modelcontextprotocol/servers/tree/main/sr...

ejholmes · yesterday at 8:30 PM

Thanks for reading! And yes, if anyone takes anything away from this, it should be the point about composition of tools. The other arguments in the post are debatable, but not that one.

rick1290 · today at 1:07 AM

When is MCP the right choice, though? For example, letting internal users ask questions on top of datasets. Is it better to just offer the OpenAPI specs and let Claude run wild, or to provide an MCP with instructions?

babyshake · today at 5:05 AM

What about MCP Apps? That seems like a legit use case, but I'm open to learning why maybe it isn't.

sandeepkd · today at 3:23 AM

While I do agree that MCP was probably a bit too far from what's required, there is some benefit for sure. Providing information in a consistent format across all services makes it easier to work with. It lowers the brittleness of figuring things out, making products built on LLMs more stable and predictable. Most importantly, it becomes the latest version of the documentation for a service. That could go a long way in M2M communication - pretty much a standardization of the application layer.

Oh wait, things like OpenAPI already exist and were pretty much built to solve the same problem.

juped · yesterday at 9:52 PM

Tools eat up so much context space, too. By contrast the shell tool is trained into Claude.

tiahura · today at 2:38 AM

And your most common command sequences can be dropped into a script that takes options. Add a tools.md documenting the tools and providing a few examples. How many workflows need more than maybe two dozen robust scripts?
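A sketch of what one of those scripts might look like; the name, options, and log format are made up:

```shell
# tail_errors: one robust script instead of a command sequence the
# agent has to rediscover each session. Document it in tools.md.
tail_errors() {
  # usage: tail_errors [-n count] logfile
  count=5
  OPTIND=1
  while getopts 'n:' opt; do
    case "$opt" in n) count="$OPTARG" ;; esac
  done
  shift $((OPTIND - 1))
  grep '^ERROR' "$1" | tail -n "$count"
}
```

Because it takes options, the agent can adapt it to the task at hand without re-deriving the pipeline every time.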

jrm4 · yesterday at 10:51 PM

This was my gut feeling from the beginning. If they can't do "fully observable" or "deterministic" (for what is probably a loose definition of that word), then what's the point?