How different is it from the semantic Web (schemas, RDF, OWL…)? Instead of reinventing something, why not use a well-established technology that could also benefit other use cases?
Please don't implement WebMCP on your site. Support a11y / accessibility features instead. If browser or LLM providers care, they will build on existing specs meant to help humans better interact with the web.
This seems to be the actual docs: https://docs.google.com/document/d/1rtU1fRPS0bMqd9abMG_hc6K9...
The way Google now tries to define "web standards" while also promoting AI concerns me. It reminds me of AMP, aka the Google private web. Do we really want to give Google more and more control over websites?
Can someone explain what the hell is going on here?
Do websites want to prevent automated tooling, as indicated by everyone putting everything behind Cloudflare and CAPTCHAs since forever, or do websites want you to be able to automate things? Because I don't see how you can have both.
If I'm using Selenium it's a problem, but if I'm using Claude it's fine??
Hey, it's the semantic web, but with ~~XML~~, ~~AJAX~~, ~~Blockchain~~, Ai!
Well, it has precisely the problem of the semantic web: it asks the website to declare, in a machine-readable format, what the website does. Then again, LLMs are kind of the tool for interfacing with everybody's slightly different standard, and this doesn't need everybody to hop on the bandwagon, so perhaps this is the time where it is different.
Okay, this is interesting. I want my blog/wiki to be generally usable by LLMs and people browsing to them with user agents that are not a web browser, and I want to make it so that this works. I hope it's pretty lightweight. One of the other patterns I've seen (and have now adopted in applications I build) is to have a "Copy to AI" button on each page that generates a short-lived token, a descriptive prompt, and a couple of example `curl` commands that help the machine navigate.
I've got very slightly more detail here https://wiki.roshangeorge.dev/w/Blog/2026-03-02/Copy_To_Clau...
I really think I'd love to make all my websites and whatnot very machine-interpretable.
I suspect people will get pretty riled up in the comments. This is fine folks. More people will make their stuff machine-accessible and that's a good thing even if MCP won't last or if it's like VHS -- yes Betamax was better, but VHS pushed home video.
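The "Copy to AI" pattern above can be sketched as a small helper that bundles the short-lived token, a descriptive prompt, and a couple of example `curl` commands into one clipboard-ready blob. This is a hypothetical sketch: the endpoint paths, token format, and expiry are all made up for illustration.

```javascript
// Hypothetical "Copy to AI" payload builder. The token scheme, expiry,
// and API paths are illustrative, not from any spec or real site.
function buildCopyToAiPayload(pageUrl, token) {
  return [
    `You are looking at ${pageUrl}. Use the short-lived token below for API access.`,
    `Token (expires in ~15 minutes): ${token}`,
    `Example requests:`,
    `  curl -H "Authorization: Bearer ${token}" ${pageUrl}?format=json`,
    `  curl -H "Authorization: Bearer ${token}" ${new URL('/api/search?q=TERM', pageUrl)}`,
  ].join('\n');
}

// The button handler would mint a random token server-side, then put
// this text on the clipboard for the user to paste into their agent.
const payload = buildCopyToAiPayload('https://example.com/wiki/Page', 'tok_abc123');
console.log(payload);
```

The point is that the page itself hands the machine everything it needs to navigate, without requiring the agent to scrape or guess.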
Why aren't we using HATEOAS as a way to expose data and actions to agents?
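For anyone unfamiliar: in HATEOAS, the response itself advertises the actions available from the current state as links, so a client (or agent) can discover what it can do next without out-of-band docs. A minimal sketch, with illustrative field names and link relations:

```javascript
// A minimal HATEOAS-style resource: the representation advertises the
// actions reachable from this state. Relations and URLs are illustrative.
const order = {
  id: 42,
  status: 'pending',
  _links: {
    self:   { href: '/orders/42', method: 'GET' },
    cancel: { href: '/orders/42/cancel', method: 'POST' },
    pay:    { href: '/orders/42/payment', method: 'PUT' },
  },
};

// An agent discovers available actions by reading the links,
// rather than hardcoding URLs or scraping the UI.
const actions = Object.keys(order._links).filter((rel) => rel !== 'self');
console.log(actions); // ['cancel', 'pay']
```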
I'm glad I'm not the only one whose features are obsolete by the time they're ready to ship!
Genuine question: why can't this be done via an API that the agents call? There are already established ways to call APIs on behalf of a user. It seems to me that the agent is loading a web app just to be able to access its APIs. What am I missing?
Have to say, this feels like Web 2.0 all over again (in a good way) :)
When having APIs and machine consumable tools looked cool and all that stuff…
I can’t see why people are treating this as a bad thing. Isn’t it wonderful that AI/LLMs/agents/whatever-you-call-them have pushed websites and platforms to open up and allow programmatic access to their services (as a side effect)?
Why WebMCP when we could have WebCLI?
Apparently there's already a few projects with the latter name.
The signup form for the early preview mentioned Firebase twice. I'm guessing that's where the push to develop it is coming from: cross-integration with their hosting/AI tooling. The https://firebase.google.com/ website is also clearly targeted at AI.
Advancing capability in the models themselves should be expected to eat alive every helpful harness you create to improve its capabilities.
The majority of sites don't even expose accessibility functionality, and for WebMCP you have to expose and maintain internal APIs per page. This opens the site up to abuse, scraping, etc.
That's why I don't see this standard taking off.
Google put it out there to gauge uptake. It's really fun to talk about, but my hot take is it will be forgotten by the end of the year.
What I think the future will be instead is that each website has its own web agent, so you can get tasks done on the site conversationally without having to figure out how the site works. This is the thesis for Rover (rover.rtrvr.ai), our embeddable web agent: any site can add an agent that can type/click/fill by just adding a script tag.
These developments completely miss the point of LLMs. They were created to understand text written for humans, not to interact with specialized APIs. For specialized APIs, LLMs aren't needed.
Browser devs will do literally anything just to not work on WebGPU support.
Don't trust Google, will they send the data to their servers to "improve the service"?
I don't know what it is. I don't want to know. What I want is to immediately disable it and never hear about it again.
Disclaimer: I am all in favor of AI and use LLMs all the time. But spare me the slop.
>Users could more easily get the exact flights they want
Can we stop pretending this is an issue anyone has ever had?
Is this a reinvention of openapi formerly known as swagger?
Is this just devtools protocol wrapped by an MCP? I've been doing this with go-rod for two years...
I actually think webmcp is incredibly smart & good (giving users agency over what's happening on the page is a giant leap forward for users vs exposing APIs).
But this post frustrates the hell out of me. There's no code! An incredibly brief barely technical run-down of declarative vs imperative is the bulk of the "technical" content. No follow up links even!
I find this developer.chrome.com post to be broadly insulting. It has no on-ramps for developers.
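For the curious, based on the proposal, exposing a page capability appears to amount to registering a tool with a name, a description, an input schema, and a handler. A rough sketch follows; the exact API surface (including `navigator.modelContext` and `registerTool`) is an assumption from the early explainer and may well change before anything ships.

```javascript
// Sketch of what a page-exposed tool might look like. The registration
// API shown at the bottom is an assumption based on the WebMCP proposal.
async function addToCart({ productId, quantity }) {
  // In a real page this would call the site's own cart logic.
  return { ok: true, message: `Added ${quantity} x ${productId} to cart` };
}

const addToCartTool = {
  name: 'add-to-cart',
  description: 'Add a product to the shopping cart on this page.',
  inputSchema: {
    type: 'object',
    properties: {
      productId: { type: 'string' },
      quantity: { type: 'number' },
    },
    required: ['productId', 'quantity'],
  },
  execute: addToCart,
};

// In the browser (hypothetical, per the proposal's shape):
// navigator.modelContext.registerTool(addToCartTool);
```

Even this much in the post would have given developers something to react to.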
Between the Zero Click Internet (AI summaries) and WebMCP (Dead Internet), why should content producers produce anything that's not behind a paywall these days?
If only there were a way for programs to programmatically interface with web servers.
You could program your applications against such an interface, even.
For those concerned about making it easy for bots to act on your website, maybe this tool can be used to prevent exactly that.
Example: say you want to prevent bots (or users via bots) from filling a form. Register a tool (function?) for that exact purpose, but block it in the implementation.
While we cannot stop users from using bots, maybe this can be a way to handle it effectively. On the contrary, I personally think these AI agents are inevitable; just as we adapted from desktop to mobile, it's time to build websites and services for AI agents.
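The honeypot-tool idea could look roughly like this: the form-filling action is advertised as a tool, but the implementation refuses and logs the attempt, steering agents away from driving the DOM directly. The registration call at the bottom is hypothetical, mirroring the proposal's assumed shape.

```javascript
// Sketch of a deliberately blocked tool: agents that try the "official"
// path get a refusal, and the site sees the attempt. Hypothetical API.
function submitSignupForm(args) {
  // Deliberately blocked: the site decides what (if anything) to allow,
  // and can log or rate-limit automated attempts here.
  console.warn('Blocked automated signup attempt:', args);
  return { ok: false, error: 'Automated signups are not permitted on this site.' };
}

// Hypothetical registration, mirroring the proposal's assumed shape:
// navigator.modelContext.registerTool({
//   name: 'submit-signup-form',
//   description: 'Submit the signup form.',
//   execute: submitSignupForm,
// });
```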