This seems backwards, somehow. Like you're asking for an nth view and an nth API, and services are being asked to provide accessibility bridges redundant with our extant offerings.
Sites are now expected to duplicate effort by manually defining schemas for the same actions, like re-describing a button's purpose in JSON when it's already semantically marked up?
Great to see people thinking about this. But it feels like a step on the road to something simpler.
For example, web accessibility has potential as a starting point for making actions automatable, with the advantage that the automatable things are visible to humans and so are less likely to drift or break over time.
Any work happening in that space?
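To make the idea concrete, here's a minimal sketch of deriving an action list from the same semantics a page already exposes for assistive tech; the selector and accessible-name heuristics are my own illustration, not from any spec:

```typescript
// Sketch: derive an "action list" from the semantics assistive tech uses.
// The role/name heuristics are illustrative, not the accname algorithm.
interface PageAction {
  role: string;         // explicit or implicit ARIA role
  name: string;         // rough accessible name
  element: HTMLElement; // handle for actually invoking the action
}

function enumerateActions(root: Document = document): PageAction[] {
  const candidates = root.querySelectorAll<HTMLElement>(
    'a[href], button, input, select, textarea, [role="button"], [role="link"]'
  );
  return Array.from(candidates).map((el) => ({
    role: el.getAttribute("role") ?? el.tagName.toLowerCase(),
    // Crude accessible-name fallback chain; a real impl would follow the spec.
    name:
      el.getAttribute("aria-label") ??
      (el as HTMLInputElement).labels?.[0]?.textContent?.trim() ??
      el.textContent?.trim() ??
      "",
    element: el,
  }));
}
```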
This was announced in early preview a few days ago by Chrome as well: https://developer.chrome.com/blog/webmcp-epp
I think the GitHub repo's README may be more useful: https://github.com/webmachinelearning/webmcp?tab=readme-ov-f...
Also, the prior implementations may be useful to look at: https://github.com/MiguelsPizza/WebMCP and https://github.com/jasonjmcghee/WebMCP
Hmmm... so are we imagining a future where every website has a vector to mainline prompt injection text directly from an otherwise benign-looking web page?
This is great. I'm all for agents calling structured tools on sites instead of poking at DOM/screenshots.
But no MCP server today has tools that appear on page load, change with every SPA route, and die when you close the tab. Client support for this would have to be tightly coupled to whatever is controlling the browser.
What they really built is a browser-native tool API borrowing MCP's shape. If calling it "MCP" is what gets web developers to start exposing structured tools for agents, I'll take it.
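For a sense of the shape, something like this, going by the explainer drafts (names like navigator.modelContext and registerTool are provisional and may well change; addToCart is a hypothetical site function):

```typescript
// Rough shape from the WebMCP explainer drafts; navigator.modelContext
// and registerTool are provisional names and may change.
declare function addToCart(productId: string, quantity: number): Promise<void>; // hypothetical site function

(navigator as any).modelContext?.registerTool({
  name: "add-to-cart",
  description: "Add a product on the current page to the cart",
  inputSchema: {
    type: "object",
    properties: {
      productId: { type: "string" },
      quantity: { type: "number" },
    },
    required: ["productId"],
  },
  async execute({ productId, quantity = 1 }: { productId: string; quantity?: number }) {
    // Reuses the page's own logic, so the tool lives and dies with the
    // tab; exactly the lifecycle difference from a classic MCP server.
    await addToCart(productId, quantity);
    return { content: [{ type: "text", text: `Added ${quantity} x ${productId}` }] };
  },
});
```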
This is coming late, as skills have largely replaced MCP. Now your site can just host a SKILL.md that tells agents how to use it.
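Roughly something like this, assuming the common skills convention of YAML frontmatter plus markdown instructions (example-store.com and all its routes are made up):

```markdown
---
name: example-store
description: How to search and order on example-store.com
---

# Using example-store.com

1. Search via https://example-store.com/search?q=<query>.
2. Product pages live at /products/<id>; "Add to cart" posts to /cart.
3. Checkout requires a logged-in session; always confirm with the user
   before submitting payment.
```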
The web was initially meant to be browsed by desktop computers.
Then came mobile phones with their small screens and touch control which forced the web to adapt: responsive design.
Now it’s the turn of agents that need to see and interact with websites.
Sure, you could keep feeding them HTML/JS and have them write logic to interact with the page, just as you can open a website in desktop mode on a phone and still navigate it: but it's clunky.
Don't get hung up on the name "MCP", which has been debased: this is much bigger than that.
I'm just personally really excited about building CLI tools that are deployed with uvx. One line, plus instructions to add a skill: no faffing about with the MCP spec and server implementations. Feels like so much less dev friction.
I think this is a good idea.
The next step would be to also decouple the visual part of a website from the data/interactions: let users tell their in-browser agent how to render, or even offer different views of the same data. (And possibly also what to render: your LLM could work as an in-website ad blocker, for example, similar to browser extensions such as LinkedIn/Facebook feed blockers.)
Wes Bos has a pretty cool demo of this: https://www.youtube.com/watch?v=sOPhVSeimtI
I really like the way you can expose your schema by adding fields to a web form; that feels like a really nice extension and a great way to piggyback on your existing logic (sketched below).
To me this seems much more promising than either needing an MCP server or the MCP Apps proposal.
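A rough sketch of how the form-to-schema idea could work, purely illustrative and not necessarily how the proposal does it:

```typescript
// Sketch: derive a tool input schema from a form the site already has,
// so the form stays the single source of truth. Purely illustrative;
// the proposal's actual mechanism may differ.
function schemaFromForm(form: HTMLFormElement) {
  const properties: Record<string, object> = {};
  const required: string[] = [];
  for (const el of Array.from(form.elements)) {
    const field = el as HTMLInputElement;
    if (!field.name) continue; // skip unnamed controls
    properties[field.name] = {
      type: field.type === "number" ? "number" : "string",
      description: field.labels?.[0]?.textContent?.trim() ?? field.name,
    };
    if (field.required) required.push(field.name);
  }
  return { type: "object", properties, required };
}
```

Executing the tool could then fill in and submit that same form, so the site's existing client-side validation keeps applying.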
Have any sickos tried pointing AI at SOAP APIs with WSDL definitions yet?
Most teams that want their data operated on programmatically already expose an API. For whom does this solve a problem?
The problem with agents browsing the web is that most interesting things on the web are either information or actions. For mostly static information (resources that change on the scale of days), the format doesn't matter, so MCP is pointless; for actions, the owner of the system will likely want to run the MCP server as an external API. So this is cool, but it doesn't have much room.
Very cool! I imagine it'll be possible to start a static web server + WebMCP app, then use the browser as a virtualization layer instead of npm/uvx.
The browser has tons of functionality baked in, everything from web workers to persistence.
This would also allow for interesting ways of authenticating and manipulating data on existing sites. Say I'm logged into image-website-x: I could then use WebMCP to let agents interact with the images I've stored there. WebMCP becomes a much more intuitive interface than interpreting DOM elements.
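For instance, a sketch of a tool riding on the page's existing session (same provisional registerTool shape as above; /api/images is a hypothetical endpoint):

```typescript
// Sketch: a tool that reuses the logged-in session of the current page.
// Assumes the provisional registerTool shape; /api/images is hypothetical.
(navigator as any).modelContext?.registerTool({
  name: "list-my-images",
  description: "List images stored in my account on this site",
  inputSchema: { type: "object", properties: {} },
  async execute() {
    // fetch() from page context carries the site's session cookies,
    // so the agent needs no separate auth flow.
    const res = await fetch("/api/images", { credentials: "same-origin" });
    const images = await res.json();
    return { content: [{ type: "text", text: JSON.stringify(images) }] };
  },
});
```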
I'm working on a DOM agent, and I think MCP is overkill. There are a few "layers" you can infer just by executing some simple JS (e.g. visible text, clickable surfaces, forms). 90% of the time, the agent can infer the full functionality, except for the obvious edge cases (which trip up even humans): infinite scrolling, hijacked navigation, etc.
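Something like this, to make the layers concrete (all heuristics here are illustrative only):

```typescript
// Sketch of the "layers": cheap JS passes that summarize a page for an
// agent without any MCP machinery.
function pageLayers() {
  const visible = (el: HTMLElement) => el.offsetParent !== null;
  return {
    // Layer 1: visible text the agent can read.
    text: document.body.innerText.slice(0, 5000),
    // Layer 2: clickable surfaces.
    clickable: Array.from(
      document.querySelectorAll<HTMLElement>(
        'a[href], button, [role="button"], [onclick]'
      )
    )
      .filter(visible)
      .map((el) => el.textContent?.trim() ?? ""),
    // Layer 3: forms and their fields.
    forms: Array.from(document.forms).map((f) => ({
      action: f.action,
      fields: Array.from(f.elements).map((e) => (e as HTMLInputElement).name),
    })),
  };
}
```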
You could get rid of the need for the browser completely just by publishing an OpenAPI spec for the API your frontend calls. Why introduce this and add a massive dependency on a browser, with a JavaScript engine and all the security nightmares that come with it?
Finally! I was hoping to see this implemented in 2026. The rendered DOM is for humans, not for agents.
MCP is cool, but it's too open-ended security-wise.
People should be mindful of relying on magic that offers no protection for their data, only to discover the problem when it's too late.
That's not a gap in the technology, it's just early.
Cannot wait to have a browser that shows me the web as if it were a gopher site, so I don't have to deal with ever-worsening, JavaScript-heavy UX.
This is true excitement. I am not being ironic.
Now we just need a proxy server that automatically turns any API with a published OpenAPI spec into a WebMCP server, and we've completed the loop.
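A sketch of that loop, assuming the provisional registerTool shape from the explainer and a spec served at /openapi.json (path templating and error handling omitted):

```typescript
// Sketch: auto-register one WebMCP tool per OpenAPI operation.
// Assumes the provisional registerTool shape and that each path entry
// contains only HTTP-method keys.
async function toolsFromOpenApi(specUrl: string): Promise<void> {
  const spec = await (await fetch(specUrl)).json();
  for (const [path, ops] of Object.entries<any>(spec.paths ?? {})) {
    for (const [method, op] of Object.entries<any>(ops)) {
      (navigator as any).modelContext?.registerTool({
        name: op.operationId ?? `${method}:${path}`,
        description: op.summary ?? `${method.toUpperCase()} ${path}`,
        inputSchema:
          op.requestBody?.content?.["application/json"]?.schema ??
          { type: "object", properties: {} },
        async execute(args: unknown) {
          const res = await fetch(path, {
            method: method.toUpperCase(),
            headers: { "content-type": "application/json" },
            body: method === "get" ? undefined : JSON.stringify(args),
          });
          return { content: [{ type: "text", text: await res.text() }] };
        },
      });
    }
  }
}

// Usage: toolsFromOpenApi("/openapi.json");
```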
I've prepared a thoughtful reply saved to /Users/yoshikondo/HN_REPLY.md
HN Thread Link: https://news.ycombinator.com/item?id=47037501
Quick summary of my reply:
- Your 70+ MCP tools show exactly what WebMCP aims to solve
- Key insight: MCP for APIs vs MCP for consumer apps are different
- WebMCP makes sense for complex sites (Amazon, Booking.com)
- The "drift problem" is real - WebMCP should be source of truth
- Suggested embed pattern for in-page tools
The fact that the "Security and privacy considerations" and the "Accessibility considerations" sections are completely blank in this proposal is delightful meta commentary on the state of the AI hype cycle. I know it's just a draft so far, but it got a laugh out of me.