Can someone explain what the hell is going on here?
Do websites want to prevent automated tooling, as indicated by everyone putting everything behind Cloudflare and CAPTCHAs since forever, or do websites want you to be able to automate things? Because I don't see how you can have both.
If I'm using Selenium it's a problem, but if I'm using Claude it's fine??
I'm old enough to remember discussions around the meaning of `User-Agent` and why it was important that we include it in HTTP headers. Back before it was locked to `Chromium (Gecko; Mozilla 4.0/NetScape; 147.01 ...)`. We talked about a magical future where your PDA, car, or autonomous toaster could be browsing the web on your behalf, and consuming (or not consuming) the delivered HTML as necessary. Back when we named it "user agent" on purpose. AI tooling can finally realize this for the Web, but it's a shame that so many companies who built their empires on the shoulders of those visionaries think the only valid way to browse is with a human-eyeball-to-server chain of trust.
They wanna let you use the service the way they want.
An e-commerce site? Wanna automate buying your stuff? Probably something they wanna allow, in controlled forms.
Wanna scrape the site to compare prices? Maybe less so.
Obviously if you wanted people to book flights with a bot then you could have provided a public API for that long ago.
I think potentially the subtlety here is a sort of cooperative mode - the computer filling out a lot of the forms and doing the grunt work, but it's important that the human is still in the loop - so they need to be able to share a UI with the agent.
Hence an agent-friendly web page, rather than just an API.
> Do websites want to prevent automated tooling, as indicated by everyone putting everything behind Cloudflare and CAPTCHAs since forever, or do websites want you to be able to automate things? Because I don't see how you can have both.
The proposal (https://docs.google.com/document/d/1rtU1fRPS0bMqd9abMG_hc6K9...) draws the line at headless automation. It requires a visible browsing context.
> Since tool calls are handled in JavaScript, a browsing context (i.e. a browser tab or a webview) must be opened. There is no support for agents or assistive tools to call tools "headlessly," meaning without visible browser UI.
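To make the "tool calls are handled in JavaScript" point concrete, here is a rough sketch of how a page might declare one sanctioned action under the draft. The `navigator.modelContext` / `provideContext` names follow the current explainer and may well change, and the flight-search tool is entirely made up for illustration:

```javascript
// The tool handler is ordinary page JavaScript, so it can reuse the same
// validation the human-facing form already uses.
function searchFlights({ from, to }) {
  if (!/^[A-Z]{3}$/.test(from) || !/^[A-Z]{3}$/.test(to)) {
    return { error: "airport codes must be three-letter IATA codes" };
  }
  // A real page would call its own search backend here.
  return { results: [`${from} -> ${to}: 3 flights found`] };
}

// Registration only happens inside a visible browsing context -- there is
// no headless path, which is exactly the line the proposal draws.
if (typeof navigator !== "undefined" && navigator.modelContext) {
  navigator.modelContext.provideContext({
    tools: [{
      name: "search-flights",
      description: "Search flights between two airports",
      inputSchema: {
        type: "object",
        properties: {
          from: { type: "string" },
          to: { type: "string" },
        },
        required: ["from", "to"],
      },
      async execute({ from, to }) {
        return searchFlights({ from, to });
      },
    }],
  });
}
```

The interesting consequence: the agent and the human share one tab, so the site sees the same session, cookies, and UI state for both.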
> Can someone explain what the hell is going on here?
Someone on the Chromium team is launching rapidly for a promotion
Not fine if you use Claude. But it's fine if you are Google Flights and the user uses Gemini. The paid version, of course.
i’m seeing this at my corporate software job now. that service you used to need security and product approval for just to read its Swagger doc now has an MCP server you can install with 2 clicks.
Remember when many websites had quite open public APIs? Over time this became less common, and existing things like FB added more limitations.
I can deeply, deeply relate. X and Bluesky are both going nuts with AI and AI scams, but _both_ of them banned an advertising account because we were... using a bot to automate behavior, since their APIs only cover a subset of the functionality.
Their vision is a world where they use all the automation regardless of safety or law, and we have to jump through extra hoops and engage in manual processes with AI that literally doesn't have the tool access to do what we need and will not contact a human.
as a website operator, i want my website to not experience downtime and unreliability because of usage rates that exceed the rate at which humans load pages, and i want to not be defrauded.
if you want to access my website using automated tools, that's fine. but if there's a certain automated tool that is consistently used to either break the site or attempt to defraud me, i'm going to do my best to block that tool. and sometimes that means blocking other, similar tools.
if the webMCP client in chrome behaves in a reasonable way that prevents abuse, then i don't see a problem with it. if scammers discover they can use it to scam, then websites will block it too.
These are obviously different people you're talking about here
different threat model. cloudflare blocks automation that pretends to be human -- scraping, fake clicks, account stuffing. webmcp is a site explicitly publishing 'here are the actions i sanction.' you can block selenium on login and expose a webmcp flight search endpoint at the same time. one's unauthorized access, the other's a published api.
I was also thinking about more or less the same thing with APIs and MCPs. The companies that didn't have any public APIs are now exposing MCPs. That, to me, is quite interesting. Maybe it's the FOMO effect.
I can't see walled garden platforms or any website that monetizes based on ads offering WebMCP. Agents using their site represent humans who aren't.
Both. I imagine that if you use this there's a tell (e.g. a UA or other header). Sites can just block unauthenticated sessions that use it, but allow it when they know who's behind it.
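That policy is easy to state as server-side logic. A minimal sketch, assuming some identifiable tell exists (the `sec-agent` header name and the `authenticatedUser` field are hypothetical -- nothing in the proposal fixes either yet):

```javascript
// Decide whether to serve a request, given a hypothetical agent tell.
// Humans always pass; agents pass only on authenticated sessions.
function policy(req) {
  const isAgent = req.headers["sec-agent"] !== undefined; // hypothetical header
  if (!isAgent) return "allow";              // ordinary human traffic
  if (req.authenticatedUser) return "allow"; // known user driving an agent
  return "block";                            // anonymous automation
}
```

The design choice here is that identity, not the automation itself, is what the site gates on.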
WebMCP should be a really easy way to add some handy automation functionality to your website. This is probably most useful for internal applications.
Also, as someone who has tried to build tools that automate finding flights, the existing players in the space have made it nearly impossible to do. But now Google is just going to open the door for it?
It’s weirder than that. There is a surge of companies working on how to provide automated access to things like payments, email, signup flows, etc to *Claw.
And what site is going to open their API up to everyone? Documented endpoints already exist; why make it more complicated?
In early experiments with the Claude Chrome extension Google sites detected Claude and blocked it too. Shrug
Is the website Stripe or NYTimes?
I feel like this is a way to ultimately limit the ability to scrape but also the ability to use your own AI agent to take actions across the internet for you. Like how Amazon doesn’t let your agent shop their site for you, but they’ll happily scrape every competitor’s website to enforce their anti competitive price fixing scheme. They want to allow and deny access on their terms.
WebMCP will become another channel controlled by big tech and it’ll come with controls. First they’ll lure people to use this method for the situations they want to allow, and then they’ll block everything else.
Oh, that's an easy one. LLMs have made people lose their god damned minds. It makes sense when you think about it as breaking a few eggs to get to the promised land omelette of laying off the development staff.
> Do websites want to prevent automated tooling, as indicated by everyone putting everything behind Cloudflare and CAPTCHAs since forever,
Not if they don't want their rankings to tank. Now you'll need to make your website machine friendly while the lords of walled gardens will relentlessly block any sort of 'rogue' automated agent from accessing their services.
They would rather you use an official API, follow the funnel they set up for you, and make purchases one way or another
Why should a browser care about how websites want you to use them?
In my opinion, sites that want agent access should expose server-side MCP: the server owns the tools, no browser middleman. Already works today.
Sites that don’t want it will keep blocking. WebMCP doesn’t change that.
Your point about Selenium is absolutely right. WebMCP is an unnecessary standard: same developer effort as server-side MCP, but routed through the browser, creating a copy that drifts from the actual UI. For the long tail that won’t build any agent interface, the browser should just get smarter at reading what’s already there.
Wrote about it here: https://open.substack.com/pub/manveerc/p/webmcp-false-econom...
In a nutshell: Google wants your websites to be more easily used by the agents they are putting in the browser and other products.
They own the user layer and models, and get to decide if your product will be used.
Think search monopoly, except your site doesn't even exist as far as users are concerned, it's only used via an agent, and only if Google allows.
The work of implementing this is on you. Google is building the hooks into the browser for you to do it; that's WebMCP.
It's all opaque; any oopsies/dark patterns will be blamed on the AI. The profits (and future ad revenue charged for sites to show up on the LLM's radar) will be claimed by Google.
The other AI companies are on board with this plan. Any questions?