It’s crazy how Anthropic keeps coming up with sticky “so simple it seems obvious” product innovations and OpenAI plays catch-up. MCP is barely a protocol. Skills are just md files. But they seem to have a knack for framing things in a way that just makes sense.
They are the LLM whisperers.
In the same way Nagel knew what it was like to be a bat, Anthropic has the highest fraction of people who approximately know what it's like to be a frontier AI model.
> https://www.anthropic.com/news/donating-the-model-context-pr...
This is a prime example of what you're saying: creating a "foundation" for a protocol created a year ago that's not even a protocol.
Has that Gavin Belson Tethics energy.
Their name is Anthropic. Their entire schtick is a weird humanization of AIs.
MCP/Tool use, Skills, and I'm sure others that I can't think of.
This might be because of some core direction that is more coherent than at other labs.
Skills are not just markdown files. They are markdown files combined with code and data, which only work universally when you have a general-purpose, cloud-based code execution environment.
Out of the box, Claude skills can call Python scripts that load modules from PyPI or even GitHub, potentially ones that include data like SQLite files or Parquet tables.
Not just in Claude Code. Anywhere, including the mobile app.
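To make that concrete, here's a hypothetical sketch of the kind of helper script a skill might bundle next to its markdown instructions; the `orders.sqlite` file and the `orders` table are made-up names:

```python
# sales_summary.py -- hypothetical helper script a skill might bundle.
# The skill's markdown tells Claude to run this instead of reading the
# source into context; only the printed output ends up costing tokens.
import sqlite3

def top_customers(db_path: str, limit: int = 5) -> list[tuple[str, float]]:
    """Return the highest-spending customers from a bundled SQLite file."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT customer, SUM(amount) AS total "
            "FROM orders GROUP BY customer ORDER BY total DESC LIMIT ?",
            (limit,),
        ).fetchall()
    return rows

if __name__ == "__main__":
    for customer, total in top_customers("orders.sqlite"):
        print(f"{customer}\t{total:.2f}")
```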
MCP is a terribly designed (and I assume vibe-designed) protocol. Give me the requirements that an LLM needs to be able to load tools dynamically from another server and invoke them like an RPC, and I could give you a much simpler, better solution.
The modern Streamable HTTP version is light-years better, but it took a year, was championed by outside engineers facing the real problem of integrating it, and was, I imagine, designed by a human.
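For anyone who hasn't looked under the hood: MCP rides on JSON-RPC 2.0, and over the Streamable HTTP transport a tool invocation is roughly a POST like this sketch (the endpoint and tool name are hypothetical, and a real client also does an initialize handshake first):

```python
# Rough sketch of an MCP tools/call request over Streamable HTTP.
# Endpoint and tool name are hypothetical; responses may come back as SSE.
import requests

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",            # a tool discovered via tools/list
        "arguments": {"city": "Berlin"},
    },
}

resp = requests.post(
    "http://localhost:8080/mcp",          # hypothetical server endpoint
    json=payload,
    headers={"Accept": "application/json, text/event-stream"},
)
print(resp.status_code, resp.text)
```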
OpenAI was there first, but the models weren't quite good enough yet, so their far superior approach unfortunately didn't take off.
Also, MCP is a serious security disaster. Too simple, I'd wager.
Also `CLAUDE.md` (which is `AGENTS.md` everywhere else now?)
I get the impression the innovation drivers at OpenAI have all moved on and the people who have moved in were the ones chasing the money; the rest is history.
I noticed something like this earlier: in the Android app you can have it rewrite a paragraph, and then and only then do you get the option to send that as a text message. It's just a button that pops up. Claude has an elegance to it.
Well, my MCP servers only really started working when I implemented the prompt endpoints, so I’m happy I’ll never have to use MCP again if this sticks.
Anthropic are using AI beyond the chat window. Without external information, context, and tools, the "magic" of AI evaporates after a few minutes.
A good example of:
Build things and then talk about them in a way that people remember and share with friends.
I guess some call it clever product marketing.
Oh yeah I forgot the biggest one. Claude fucking code. Lol
I hate to be that guy, but skills are not really an invention. It's a simple mechanism that already exists in many places.
The biggest unlock was tool calling, which was invented at OpenAI.
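For reference, OpenAI-style tool calling looks roughly like this with their Python SDK; the weather tool is a made-up example:

```python
# Sketch of OpenAI-style tool calling; get_weather is a made-up tool.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)
# The model answers with a structured tool call instead of prose:
print(resp.choices[0].message.tool_calls)
```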
It’s the only AI company that isn’t monetizing at all costs. I’m curious how deep their culture goes, as it’s remarkable they even have a discernible value system in today’s business world.
Anthropic has good marketing, but ironically their well-marketed mediocre ideas retard the development of better standards.
Skills are lazy loaded prompt engineering. They are simple, but powerful. Claude sees a one line index entry per skill. You can create hundreds. The full instructions only load when invoked.
Those instructions can reference external scripts that Claude executes without loading the source. You can package them with hooks and agents in plugins. You pay tokens for the output, not for the code that produced it.
Install five MCP servers and you've burned a large chunk of tokens before typing a prompt. With skills, you only pay for what you use.
You can call deterministic code (pipelines, APIs, domain logic) with a non-deterministic model, triggered by plain language, without the context bloat.
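A back-of-the-envelope sketch of that cost argument, with purely illustrative token numbers:

```python
# Illustrative comparison of eager vs. lazy context loading (numbers are made up).
# Eager (MCP-style): every tool's full schema sits in context from turn one.
# Lazy (skills-style): one index line per skill; full instructions load on use.

def eager_cost(n_tools: int, schema_tokens: int = 600) -> int:
    """All tool definitions loaded up front, whether used or not."""
    return n_tools * schema_tokens

def lazy_cost(n_skills: int, used: int, index_tokens: int = 15,
              instruction_tokens: int = 600) -> int:
    """One-line index per skill, plus full instructions for skills actually invoked."""
    return n_skills * index_tokens + used * instruction_tokens

# Five servers' worth of tools vs. a hundred skills with one actually used:
print(eager_cost(n_tools=40))            # 24000 tokens before the first prompt
print(lazy_cost(n_skills=100, used=1))   # 2100 tokens
```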