Simon Willison made many of the same points (without the technical deep dive) back in October 2025 [1], when Anthropic announced Skills.
A couple of choice quotes, which are echoed in this new article:
> I like to joke that one of the reasons it took off is that every company knew they needed an “AI strategy”, and building (or announcing) an MCP implementation was an easy way to tick that box.
> Almost everything I might achieve with an MCP can be handled by a CLI tool instead.
MCP is just a small, boring protocol that lets agents call tools in a standard way, nothing more. You can run a single MCP server next to your app, expose a few scripts or APIs, and you are done. There is no requirement for dozens of random servers or a giant plugin zoo.
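For a sense of scale, here is a minimal sketch of that "one server next to your app" idea, assuming the official @modelcontextprotocol/sdk TypeScript package and a stdio transport; the tool name and behavior are made up for illustration:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// One small server, one tool: a stand-in for whatever script or API
// you already have lying around.
const server = new McpServer({ name: "my-app-tools", version: "0.1.0" });

server.tool(
  "echo",                               // hypothetical tool name
  "Echo a message back to the agent",   // description the agent sees
  { message: z.string() },              // input schema
  async ({ message }) => ({
    content: [{ type: "text", text: `You said: ${message}` }],
  })
);

// Speak the protocol over stdin/stdout; the agent client does the rest.
await server.connect(new StdioServerTransport());
```

That really is the whole server: no plugin zoo required.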
Most of the “overhead” and “security nightmare” worries assume the worst possible setup with zero curation and bad ops. That would be messy with any integration method, not only with MCP. Teams that already handle HTTP APIs safely can apply the same basics here: auth, logging, and isolation.
The real value is that MCP stays out of your way. It does not replace your stack; it just gives tools a common shape so different clients and agents can use them. For many people that is exactly what is needed: a thin, optional layer, not another heavy platform.
Fair warning: if you load the site in dark mode, the diagrams are completely broken. They're PNGs with an alpha-transparent background and gray/black for the actual content, so when the site is in dark mode you can see nothing at all...
So make sure to switch to light mode in the top right if you want to read this article at all.
This article could really mostly be reduced to the last two paragraphs, but then it calls skills "over-engineered". Skills are basically just having the agent read the front matter, with instructions to read the rest if a given skill seems useful in a given context... I don't know how it could be more minimal.
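For what it's worth, a skill file really is about this minimal. A hypothetical SKILL.md (the name and description here are made up):

```markdown
---
name: pdf-extraction
description: Extract text and tables from PDF files. Use when the user asks about the contents of a PDF.
---

# PDF extraction

The detailed instructions down here are only read if the agent decides,
from the description above, that this skill is relevant to the task.
```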
We went from "review any services and their interactions without touching the local system and network" to "defend local and remote logic created on the fly to mangle the local file system, and why that's a good thing"...
That's not a productivity boost. That's a rapid increase in cognitive tax that you're offloading for later, and as you get backlogged in reviewing it, you lose more and more control over what it does...
I don't think MCP is a fad - I think it is the 2020s equivalent of:
- ActiveX
- asbestos
- leaded gasoline and paint
- radium medicines
Well, with the exception of the first 3 actually being quite useful.
I think this article misses the most important point of MCP: authentication. Granted, it wasn't in the initial spec, but it is now, and it really opens up interoperability without compromising on security.
Think about how to provide your SaaS service to users of ChatGPT or Claude.ai (not only coding tools like VSCode). At some point, the user will need to allow the SaaS service to interact with their agent, and will have to authenticate with the SaaS service so that the agent can act on their behalf. This is all baked into the MCP spec (through OAuth) [1], and scripting can't beat that.
That's why the Extensions/Applications marketplaces of consumer AI assistants like ChatGPT Apps [2] are a thin layer on top of MCP.
Another domain where MCP is required is generative UI. We need a standard that allows third-party apps to return more sophisticated content than just text. The MCP spec now includes the MCP Apps specification [3], which is exactly that: a specification for how third-party apps can return UI components in their responses. Scripting, on the other hand, will only let you return text.
[1]: https://modelcontextprotocol.io/specification/2025-03-26/bas...
[2]: https://help.openai.com/en/articles/11487775-apps-in-chatgpt
[3]: https://github.com/modelcontextprotocol/ext-apps
Agent skills are over-engineered? Seriously, it's just markdown with a description.
Tip: if you're in dark mode, flip to light mode so that you can see the graphics. There's a toggle in the top-right corner of the site.
There are many good points, but unfortunately the title ("fad") and the conclusion seem unwarranted; they become a distraction and diminish the value of the article.
The security issue has been discussed many times in the past year.
Agree on the "one process per server" thing -- it seems smart and convenient but gets worse as the number of MCP servers and coding agents goes up, especially when combined with the following point.
Lifetime is a real issue, and I am glad that someone is talking about it. You probably won't worry about the overhead for the git, GitHub, or Playwright MCP servers, where they are likely wrappers around some bash commands or REST APIs and everything is fast to launch. However, lifetime is still an issue when you have multiple coding agents or sessions.
It gets worse for certain applications, especially heavy desktop apps (imagine an MCP server for Lightroom Classic) -- due to their application model, in order to evaluate a command, you'll have to load half of the application. You'd think you would only want to launch this once. But likely not: each coding agent session will launch its own instance. Maybe this won't happen if the MCP server works extra hard to share processes and manage lifetimes on its own, but that totally depends on the third-party provider, and the behavior can vary wildly.
Would a user want to deal with all these issues? If they are not careful, they'll easily launch 15 processes consuming 1 GB of memory for two coding agents, one of which does not actually use any of those servers, while another simply sits idle because the user hasn't started vibe coding yet.
(If this doesn't seem like an issue to you, it's probably just because you haven't run into it firsthand yet.)
I think there has got to be a better way to do this.
Yeah, I mean, it would be better if REST were the way tools were exposed to LLMs.
I'm just glad it's there as a standardized approach. Right now I can connect an MCP clock to ChatGPT.com/iOS, Claude.ai/iOS/Claude Code/Claude.exe.
It's wild that it's been over three years and these apps don't have a way to check the time without booting up a REPL, relying on an outdated system-prompt comment about the current UTC time, or doing a web search for cached pages. Bill Gates would have added this to ChatGPT by Dec 2022.
You can add my clock [https://mcpclock.firasd.workers.dev/sse] to any of your AI apps right now
(the code is deployed directly from this GitHub repo to a Cloudflare Worker if you want to check what it does: https://github.com/firasd/mcpclock/blob/main/src/index.ts)
The article focuses on the local stdio MCP tools used by coding / computer automation agents like Claude Code and Cursor, but misses the fact that we will need protocols for AI agents to call networked services, including async interactions with long-running operations.
Funny that the "What is MCP?" section doesn't even explain what the acronym stands for... (I genuinely have no clue)
Yeah, but MCP provides a convenient layer of indirection where I can sandbox my app, allowing only files within a given directory tree (i.e., the project workspace) to be read from or written to by my tools. How do I accomplish this when allowing an agent to call my tools directly?
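A minimal sketch of the kind of check that indirection buys you, in TypeScript; workspaceRoot and the tool-handler shape are illustrative, not from any particular SDK:

```typescript
import { resolve, sep } from "node:path";

// Reject any path that resolves outside the workspace root.
// (A real implementation would also resolve symlinks, e.g. via fs.realpath.)
function insideWorkspace(workspaceRoot: string, requestedPath: string): boolean {
  const root = resolve(workspaceRoot);
  const target = resolve(root, requestedPath);
  // The separator check stops "/workspace-evil" from matching "/workspace".
  return target === root || target.startsWith(root + sep);
}

// Usage inside a hypothetical read_file tool handler:
if (!insideWorkspace("/home/me/project", "../../etc/passwd")) {
  throw new Error("Path escapes the project workspace");
}
```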
Someone’s a little late to the party…
MCP is very easy to use because it lets you just dump it into the model API call and the provider handles the tool calling. A bit easier than running the tool-call loop yourself.
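For example, here's roughly what that looks like with OpenAI's Responses API (a sketch; the server URL and label are placeholders, and the exact parameters may differ):

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Hand the provider an MCP server; it runs the tool-call loop for you.
const response = await client.responses.create({
  model: "gpt-4.1",
  tools: [
    {
      type: "mcp",
      server_label: "my_tools",               // placeholder label
      server_url: "https://example.com/mcp",  // placeholder MCP endpoint
      require_approval: "never",
    },
  ],
  input: "Use my_tools to look up today's deployments.",
});

console.log(response.output_text);
```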
What do MCPs do that the CLI cannot?
I.e., assuming your agent has access to the terminal, and therefore to CLIs, what additional value do MCPs provide?
We're at a point in the LLM curve where there's two huge, polarized groups of developers:
- the ones who don't see any value in AI for coding and dismiss it as a fad at every chance they get
- the ones who are in love with the new tools and are adopting as many as they can in their workflows
I know the arguments of the second bunch well. But I'm very curious about what the "AI is a fad" bunch thinks will happen. Are we going to suddenly realize that all these productivity gains people are claiming are lies, and go back to coding by typing characters into emacs and memorizing CS books? Will StackOverflow suddenly return as the most popular source of copy-paste code slop?
sometimes a bridge is also a fad
Understatement of last year. It was a horrific standard, completely broken security-wise from day 0.
The folks who wrote it have never written an RFC or an internet standard before.
Remember the VCs screaming about MCPs all day long last year? Well, I don't see them doing that at all anymore, and I called that a year ago. [0]
It's best to replace the MCPs with a respectable CLI / executable and design a skill for that tool; that way an agent would fetch something the same way you would.
You only have to spend 5 minutes browsing for MCP servers to see that there is an issue with AI slop. MCP is probably the first "standard" to be built out in the vibe-coding era, and it really shows.
As mentioned in the article, it's not clear to me what the advantage over OpenAPI is. Surely a Swagger file solves more or less the same issue.
That said, one minor nice thing about MCP servers is that they can operate locally over stdin/stdout, which feels a lot faster than HTTP/REST.
From an enterprise adoption standpoint, remote MCP addresses the connector problem and can be easily retrofitted into enterprise-wide gateway services. In contrast, building tools is significantly more expensive for enterprises with large, existing API surfaces.
Most of the concerns can be addressed by a gateway service.
MCP solves the wrong problem. The mechanics of calling tools, commands, apis, etc. isn't all that hard given some documentation. That's why agentic coding tools work so well.
For security, some sandboxing can address enough concerns that many developers feel comfortable using these tools. Also, you have things like version control and CI/CD mechanisms where you can do reviews and manually approve things. Worst case, you just don't merge a PR. Or you revert one.
For business usage, the tools are more complicated, stateful, and dangerous, and mistakes can be costly. Employees are given a lot of powerful tools and are expected to know what to do and not do. E.g., a company credit card can be abused, but employees know that would get them fired and possibly jailed. So they moderate what they buy. Likewise, they know not to send company secrets by email.
AI tools with the same privileges as employees would be problematic. It's way too easy to trick them into exfiltrating information, doing a lot of damage with expensive resources, etc. This cannot be fixed by a simple permission model. There needs to be something that can figure out what is and isn't appropriate under some defined policy, and audit agent behavior. Asking the user for permission every time something needs to happen is not a scalable solution; this needs to be automated. Also, users aren't particularly good at this if it isn't simple. It's way too easy for them to make mistakes answering questions about permissions.
I think that's where the attention will go for a lot of the AI investments. AIs are so useful for coding now that it becomes tempting to see if we can replicate that success by having agents do complex things in other contexts. If the cost savings are significant, it's even worth taking some risks, just like with coding tools. I run codex with --yolo. In a VM. But still, it could do some damage. It does some useful stuff for me, and the bad stuff is so far theoretical.
I run a small startup, and a shortcut to success here is taking a developer's perspective on business tools. For example, instead of using Google Docs or MS Word, use text-based file formats like Markdown or LaTeX and then pandoc to convert them. I've been updating our website this way. It's a static Hugo website, and I can do all sorts of complicated structure and content updates with codex. That limits my input to providing text and direction. If I were still using WordPress, I'd be stuck doing all this manually. Which is a great argument to ditch it in a hurry.
I don't necessarily like it writing text, though it can be good to have a first shot at a new page. But it's great at putting text in the right place, doing consistency checks, fixing broken layout, restructuring pages, etc. I just asked it to add a partner logo and source the appropriate SVG. In the past I would have done that manually: download some SVG, figure out where to put it, then fiddle with some files to get it working. Not a huge task, but something I no longer have to do manually. Website maintenance has lots of micro-tasks like this. I get to focus on the big picture. Having a static site generator and codex fast-forwards me a few years in terms of using AI to do complex website updates. Forget about doing any of this with the mainstream web-based content management systems any time soon.
MCP is also a pretty good way to circumvent normal API security, and many companies bought into it -- all of that just to hop on the AI hype train.
You can do your own research.
I for one rejoice at the return, in the form of MCP, of APIs that were deprecated because it was hard to insert ads into them.
*briefly nods and continues to mcp add playwright everywhere*
More than a fad, MCP is a reinvention of Smalltalk. Of course an automated agent doesn't want to communicate with other autonomous systems via text or binary protocols. There should be a unified way of sending high-level commands (i.e., message passing) to other systems. A global RPC mechanism, if you will.
MCP is simply a crappy implementation of this idea, because our programming environments do not expose global remote function-call mechanisms with well-defined protocols. The "everything is a file" idea is quite limiting these days.
Speaking of Smalltalk, I always imagined that you could integrate LLMs/actual artificial intelligence by giving them access to the internal data and telling them what you want to do, rather than calling a method. Instead of:
a := Point x: 0 y: 0
b := Point x: 5 y: 7
distance := a distanceTo: b
You would do:
a := Point x: 0 y: 0
b := Point x: 5 y: 7
distance := a llm: "You are a point object. Please calculate the distance to the argument." arg: b
Wouldn't that be neat? But alas, we're still writing software as if it's the 1970s.
This analysis dismisses MCP by focusing too narrowly on local file system interactions. The real value isn't just running scripts; it's interoperability.
MCP allows any client (Claude, Cursor, IDEs) to dynamically discover and interact with any resource (Postgres, Slack) without custom glue code. Comparing it to local scripts is like calling USB a fad because parallel ports worked for printers. The power is standardization: write once, support every AI client.
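As a sketch of that dynamic discovery in practice, assuming the @modelcontextprotocol/sdk TypeScript client and a stdio server started from a local server.js (both names illustrative):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({ command: "node", args: ["server.js"] });
const client = new Client({ name: "demo-client", version: "0.1.0" });
await client.connect(transport);

// No custom glue: the client just asks the server what it offers...
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// ...and calls any discovered tool by name.
const result = await client.callTool({ name: "echo", arguments: { message: "hi" } });
console.log(result);
```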
Edit:
To address the security concerns below: MCP is just the wire protocol like TCP or HTTP. We don't expect TCP to natively handle RBAC or prevent data exfil. That is the job of the application/server implementation.