logoalt Hacker News

MCP is eating the world

271 pointsby emschwartzlast Saturday at 4:30 PM171 commentsview on HN

Comments

faxmeyourcodeyesterday at 4:58 PM

Based on the comments here, a lot of folks are assuming the primary users of mcp are the end users connecting their claude/vscode/etc to whatever saas platform they're working on. While this _is_ a huge benefit and super cool to use, imo the main benefit is for things like giving complex tool access to centralized agents. Where the mcp servers allow you to build agents that have the tools to do a sort of "custom deep research."

We have deployed this internally at work where business users are giving it a list of 20 jira tickets and asking it to summarize or classify them based on some fuzzy contextual reasoning found in the description/comments. It will happly run 50+ tool calls poking around in Jira/confluence and respond in a few seconds what would have taken them hours to do manually. The fact that it uses mcp under the hood is completely irrelevant but it makes our job as builders much much easier.

show 7 replies
cosbgntoday at 10:18 AM

The biggest issue I have with MCP is that some companies like notion expect every single user to host and run their own MCP. It would be so much easier for everyone if they would simply host it themselves and give access on MCP.notion.so like stripe and others do.

0x500x79yesterday at 3:52 PM

I believe that MCP is a bit over-marketed.

MCP allows you to bring tools to agents you don't control. It's awesome, but it isn't the right match for every problem. If you believe the hype of X/LinkedIn you would think that MCP everywhere is going to be the solution.

Bringing tools to your local Claude client is awesome, but there are still challenges with MCP that need to be solved and like all technology, it isn't applicable universally.

Not to mention it's a recipe for burning tokens!

show 6 replies
jlowinyesterday at 8:26 PM

FastMCP author here -- (maybe surprisingly) I agree with many of the observations here.

FastMCP exists because I found the original spec and SDK confusing and complicated. It continues to exist because it turns out there's great utility in curating an agent-native API, especially when so many great dev tools have adopted this client interface.

But the spec is still so young and at such risk of being co-opted by hype (positive and negative). I would invite everyone to participate constructively in improving it.

Lastly: this article is plainly AI generated, as `from mcp import tool` is completely hallucinated. Just some food for thought for the "AI should be able to figure out my complex REST API" crowd that seems well represented here.

show 2 replies
pabzuyesterday at 4:33 PM

Another hyped/clickbait headline about a "new technology" that will "transform everything"

written by a company that sells this "new technology".

show 3 replies
ravenstineyesterday at 4:24 PM

> Heck, even MCP itself isn’t new—the spec was released by Anthropic in November, but it suddenly blew up in February, 3 months later.

Wow, the idea of what's "new" in software has always been really short, but damn, it's becoming questionable whether anything can be considered new these days.

show 2 replies
ManuelKiesslingtoday at 9:18 AM

A shameless plug, and I'm not going to pretend otherwise:

I'm trying to build a Fair-code platform called "MCP as a Service", which allows you to easily launch MCP servers (for example, Playwright MCP connected to a real browser), without the need to think about software or servers, "in the cloud".

This way, you can connect to the deployed MCP server from anywhere; for example, from within your n8n AI Agent workflow.

The Fair-code model means that there is a "hosted service" offering if you want a maximum of convenience, but you can fully self-host for maximum control.

It's still in the "humble beginnings" phase, but I would love to build this into an open alternative to existing, closed-source offerings.

See https://mcp-as-a-service.com for the details.

The hosted solution is available, free-of-charge, at https://app.mcp-as-a-service.com.

Animatsyesterday at 5:44 PM

> At Stainless, we’re betting it’s here to stay.

By a seller of MCP.

The trouble with MCP is that it requires a trip through an LLM for every transaction. It's not like the ends negotiate a simple protocol so that later queries are cheap. That's a huge cost increase as traffic increases.

show 2 replies
neuroelectronyesterday at 5:35 PM

MCP, in its current form, functions as a fun and profit remote exploit vector, primarily because it exposes AI systems to a wide range of attacks through under-specified security controls, heavy reliance on natural language context, and the ability for untrusted servers to manipulate model behavior. While MCP aims to standardize and simplify tool integration for AI, its half-implemented security features and architectural choices have made it a high-risk protocol, especially when deployed without additional safeguards.

A treasure trove of possibilities: OAuth tokens, almost impossible to build alarms for outside of transmission rates (what are you running your own LLM? How about a meta MCP for twice the API calls?) , assumed trusted input, the server can inject malicious instructions via tool descriptions, leading to prompt injection, data exfiltration, or even remote code execution, sometimes without any explicit tool use by the user.

show 2 replies
klabb3yesterday at 5:08 PM

I won’t speak to the technical merits of MCP but I will say this: it doesn’t matter for many use cases, in particular consumer tech.

The entire digital consumer economy is built around ownership of the screen real estate, due to a simple reason: ads. Whoever owns the sidebar sets the rules, period. Web2.0 was all about APIs (usually Rest/json) and in hindsight we see a clear distinction on where they’re used: in commercial B2B apps. Conversely, big B2C players shut down the little open they had - Facebook, Gmail removed their XMPP support, and these days Twitter etc even gatekeep content when you’re not signed in or using the app, and aggressively fortify against even basic scraping. When you’re using other clients, you are bypassing their sidebar, meaning their opportunity to deliver ads.

So no, your iOS apps and Twitter are not gonna ”open up” their APIs in any way, not through Rest and not through MCP, simply because it goes directly against their incentives. The exceptions are (1) temporary to ride a hype train, and (2) services you pay money for (but even that is wishful and hypothetical).

remramyesterday at 4:38 PM

I tried using MCP to run some custom functions from ollama & openwebui. The experience was not great.

Doing anything with LLM feels more like arguing than debugging, but this was really surreal: I can see the LLM calling the function with the parameters I requested, but then instead of giving me the returned value, the LLM always pretends it doesn't know the function and tries to guess what the result should be based on its name.

The protocol itself is really weird, almost based on standards but not quite. It was made by one vendor to fix one problem. It has the benefit of existing, but I don't know if it is worthy of any praise?

show 3 replies
dackyesterday at 4:24 PM

MCP still feels so early. It's getting better - we went from "set up `npx` on your system and edit this JSON file in an obscure directory" to "set the name and URL of your MCP server" in claude.ai. But you're right, even knowing how to find a URL for the MCP server is a tall order for most.

I wonder what the optimal form factor is. Like what if your AI could /suggest/ connecting with some service? Like your AI is browsing a site and can discover the "official" MCP server (like via llms.txt). It then shows a prompt to the user - "can I access your data via X provider?". you click "yes", then it does the OAuth redirect and can immediately access the necessary tools. Also being able to give specific permissions via the OAuth flow would be really nice.

luketheobscureyesterday at 4:39 PM

It's interesting how quickly my brain developed an AI detector for written language (this blog post screams ChatGPT).

I wonder if it will stay effective, or if LLMs figure out a way around it? Or maybe it's just that this is the new way that technical blog posts are written, sort of how nearly all press releases feel univocal.

ramozyesterday at 5:01 PM

9 times out of 10 my Claude Code is using a bash script before I hook up an MCP to it.

- less tokens required in context (CLAUDE.md vs CLAUDE.md + MCP bloat per request)

- native agent tooling, relying on Bash(my-script params)

- less black box (you & the coding agent can read the scripts)

MCPs are often wrapping restful apis. Turns out agents can use those just fine.

show 1 reply
ezekiel68yesterday at 6:04 PM

I'm inclined to agree with the conclusions of the article. A lot of people make good points here about manual tooling (and I personally prefer this myself) but: Worse Is Better.

The MCP way of accessing extra functionality and context will be more accessible to more people, with "recipes" they can easily set up once (or rarely) and thereafter continue to reap the benefits of enhanced LLM operation. There's already a huge arms race in the "orchestrator" space for tools to support MCP plus model routers plus Ollama local plus focused context RAG. I'm pretty sure we will look back at 2025 as a Great Leap Forward (yes, including the casualties implied in that reference) for LLM effectiveness.

It's going to be a whole new Eternal September[0] except for LLM usage this time. And a good number of the new "normies" this time are going to be Pointy-Haired Bosses.

[0] https://en.wikipedia.org/wiki/Eternal_September

swyxtoday at 10:01 AM

my attempt at writing this same article back then https://latent.space/p/why-mcp-won

andrehackertoday at 7:33 AM

For those who experienced the short but intense “mashup” mania in the early 2000’s where web applications were supposed to be easily coupled and chained ? Will MCP ecosystems fail for the same reasons (too many changes in APIs, monetization and security barriers ?)

gregorymyesterday at 10:40 PM

Today almost every public MCP server targets B2B workflows like GitHub, Jira, Linear, Slack, and similar developer or workplace tools.

In a recent talk, Andrej Karpathy argued that “LLMs are the new operating system.” If that analogy holds, the only “apps” we've built so far live in the enterprise quadrant of that OS.

I expect consumer facing MCP experiences to follow soon. I’m not yet sure what they’ll look like, perhaps games or other interactive content but the infrastructure is falling into place: OpenAI has said ChatGPT will support custom MCP connectors, ElevenLabs’ 11AI already ships with MCP integration, and Claude has added remote MCP support. As those channels reach mainstream users, everyday LLM users will start asking for richer experience and MCP is how developers will deliver them.

dansiemensyesterday at 4:12 PM

MCP is currently too difficult to setup and too complicated for consumers to understand. Once somebody big enough figures out distribution and integration a la the App Store, and the market starts to understand MCP integrations as extensions your AI client can orchestrate across (do one thing in App A, another in App B, etc all with a single prompt), it’ll be off to the races.

show 1 reply
time0utyesterday at 4:12 PM

In some ways MCP is like the next evolution of API over HTTP.

We've exposed our APIs as SOAP endpoints. These were not well suited to web based clients, leading to the development of "RESTful" JSON endpoints. As web clients became more complex, GraphQL arose as a solution to some of the problems encountered. Maybe MCP is the next step.

I don't mean that MCP will supersede these protocols. My company exposes our endpoints as SOAP, REST, and GraphQL still as they fit different use cases and customer bases. We are piloting an MCP version to see how that goes now.

It is fun to work on something new and play with agents to see how they consume our endpoints in this form. I think there is a lot of potential.

show 1 reply
Paradigma11today at 1:27 AM

So far I am thinking of following uses for MCP:

* External tool calls.

* Generating status documents and other memories.

* Calling a planner/Prolog... to create a plan.

* You can wrap another specialized agent into the MCP.....

* You can call the tool yourself (at least in VSCode) with #MCPtool for precise tool use or just to generate context.

kloudyesterday at 5:11 PM

The reason for MCP is that you get better results with optimized prompts rather than using existing API docstrings. So everybody is going to end up adopting it in some shape or form.

It is a general concept, MCP itself is nothing special, it is that just that Anthropic formalized the observation first.

Tool call = API call + instructions for LLM

So vendors who provide APIs are going to write prompts, add a thin wrapper and out goes MCP. Or you create your own instructions and wrap in MCP to optimize your own workflows.

show 1 reply
rTX5CMRXIfFGyesterday at 4:17 PM

I'm still getting the hang of this but Apple's Foundation Models framework [1] (the one recently announced in WWDC) follows this protocol, correct? Does this mean that MCP, as a protocol, can actually take on different forms depending on how platforms/OSes want to allow their users to integrate with LLMs?

[1] https://developer.apple.com/documentation/foundationmodels

show 1 reply
LAC-Techtoday at 10:04 AM

I have massive AI fatigue. They're neat I guess, but nothing I've used has had any lasting importance.

Can we move on to the next hype cycle rather than remixing this one?

prats226yesterday at 8:55 PM

One of the major miss right now seems to be in tool calling specs, you specify function names, description, inputs but not outputs. I believe with reasoning models planning things, it would be important to understand output format, descriptions as well?

prats226yesterday at 8:47 PM

The advice anthropic gives in building agents is to ditch the abstractions like agent frameworks etc and just code it yourself. I believe its also applicable to MCP to same degree?

furyofantaresyesterday at 4:37 PM

I have various scripts that I tell claude about in all my CLAUDE.md files and it is successful at using them.

Am I missing out on something by not using an MCP? I guess I wouldn't have to paste it into every CLAUDE.md but is there any other noticeable advantage?

show 1 reply
thmyesterday at 4:24 PM

That Google Trends curve does/will look a bit like Smart Contracts.

show 2 replies
baqtoday at 7:41 AM

Folks at Amazon Alexa are looking at all this and thinking 'what could have been'. Probably.

MCP as an Alexa skill but orders of magnitude easier to deploy? Yes, please. With an added bonus of the thing sometimes pseudo-understanding what I said.

jp0001today at 3:08 AM

So sick of this. LLMs are just a new paradigm on search.

Old Google: I don’t know the answer, but here are some links that might help you - with ads.

LLMs: I confidently know the answer, but I could be wrong and frequently am. No ads yet, but they are coming.

matt3210yesterday at 8:49 PM

With MCP I can call tools with my own software too without having to use an LLM

nzachyesterday at 5:55 PM

> Heck, even MCP itself isn’t new—the spec was released by Anthropic in November, but it suddenly blew up in February, 3 months later.

Maybe it was because OpenAI announced they would start to support MCP in their tools ? [0]

Perhaps I'm being too harsh with the author, but this article definitely gives me vibes of "AI slop".

[0] - https://techcrunch.com/2025/03/26/openai-adopts-rival-anthro...

show 1 reply
TZubiritoday at 7:16 AM

More appropriate title "MCP will eat the world"

Also, possible Conflict of Interest, author works at Meta, which develops Llama. And one of MCP's main features is model agnosticism, which benefits contenders to the king model.

neyayesterday at 6:00 PM

Obligatory note - if you're a backend developer, you do not need MCP. MCP is just tool/function calling. It has been around for a lot longer now. MCP is only needed if you need something to integrate with the frontend chat applications. If you need to automate something with LLMs, you do not need MCP. Right now, it's just the new cool buzzword to throw around.

show 2 replies
nisegamiyesterday at 3:52 PM

It isn't hard to see why. I had a really hard time wrapping my head around why MCP was necessary and useful but I tried using* one recently and it's remarkable how big the gap between just being able to reply and being able to interact is.

*after forking and modifying it for my use case

show 2 replies
kitsune_yesterday at 4:55 PM

It's not.

nilsliceyesterday at 6:00 PM

should kill off sdk generators too

m3kw9yesterday at 8:22 PM

Not when prompt injection and other fairly trivial security issues hasn’t been solved

DonHopkinsyesterday at 7:03 PM

>Instead of serving 200 standalone tools, we serve three meta‑tools and let the LLM discover endpoints at run‑time.

>list_api_endpoints lets the model search the catalog (“what can I do with counterparties?”)

>get_api_endpoint_schema pulls the JSON‑schema for any route it finds

>invoke_api_endpoint executes that route with user‑supplied params

>This approach allows the LLM to dynamically discover, learn about, and invoke endpoints as needed, without requiring the entire API schema to be loaded into its context window at once. The LLM will use these tools together to search for, look up, and call endpoints on demand.

Congratulation, you have reinvented Microsoft COM, IUnknown, OLE, IDispatch, and ActiveX for LLMS!

I'm not being sarcastic or criticizing, it's actually a good idea! Just not new.

https://news.ycombinator.com/item?id=12975257

https://news.ycombinator.com/item?id=20266627

https://news.ycombinator.com/item?id=29593432

https://news.ycombinator.com/item?id=19837817

I'm also not saying there aren't better approaches, like "NeLLM": taking the NeWS approach to LLMs, where MCP is more like "X-LLM": taking the X-Windows approach to LLMs.

Sending JSON data back and forth between a Node based orchestrator and an LLM is one thing, all well and find and traditional, but why not send and evaluate JavaScript code itself? Both Node (or even a secure Node isolate) and the LLM can generate and evaluate JavaScript quite well thank you, and it's a hell of a lot more powerful and concise and extensible that a fixed JSON protocol, for the exact same reason that NeWS is a hell of a lot more powerful and concise and extensible than the fixed X-Windows protocol.

https://news.ycombinator.com/item?id=43952748

>I agree they should learn from DLLs, gRPC, SOAP, IDL, dCOM, etc.

>But they should also learn from how NeWS was better than X-Windows because instead of a fixed protocol, it allowed you to send executable PostScript code that runs locally next to the graphics hardware and input devices, interprets efficient custom network protocols, responds to local input events instantly, implements a responsive user interface while minimizing network traffic.

>For the same reason the client-side Google Maps via AJAX of 20 years ago was better than the server-side Xerox PARC Map Viewer via http of 32 years ago.

>I felt compelled to write "The X-Windows Disaster" comparing X-Windows and NeWS, and I would hate if 37 years from now, when MCP is as old as X11, I had to write about "The MCP-Token-Windows Disaster", comparing it to a more efficient, elegant, underdog solution that got out worse-is-bettered. It doesn't have to be that way!

>The X-Windows Disaster:

https://donhopkins.medium.com/the-x-windows-disaster-128d398...

>It would be "The World's Second Fully Modular Software Disaster" if we were stuck with MCP for the next 37 years, like we still are to this day with X-Windows.

ge96yesterday at 4:01 PM

Finally the rapping spider makes it big

show 2 replies
revskillyesterday at 4:38 PM

So MCP is basically: - getTools - executeTool

?

SeasonalEnnuitoday at 8:53 AM

Mode Control Panel? I'm not sure about "eating" the world but certainly used worldwide by aircraft.

Anecdotally I've noticed a lot of acronyms from science/technology being reused in the context of LLMs, what a curious phenomenon.

show 1 reply