I like the idea of exposing this as a resource; that way you don’t have to wait for a tool call. Is using a resource actually faster, though? Doesn’t the LLM still have to make a request to the MCP server in both cases? Or is the idea that, because the resource is pinned a priori, the HTML has already been retrieved and processed, so the response comes back faster?
But I do think the lack of a JavaScript loader will be a problem for many sites. In my case, I still run the innerHTML through a Markdown converter to get rid of the extra cruft. You’re right that this helps a lot. It’s even better if you can choose which #id element to load: Wikipedia, for example, surrounds the main article with a lot of extra chrome that adds fluff even after MD conversion. But without JS loading, you still won’t be able to process a lot of sites in the wild.
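To make the #id idea concrete, here’s a minimal stdlib sketch of the selection step I have in mind: grab just the inner HTML of one element (say, Wikipedia’s main-content container) and hand only that to the Markdown converter. The class and function names are mine, not from any MCP server, and a real page would want a proper HTML library; this only illustrates the trimming.

```python
from html.parser import HTMLParser

# Void elements never get a closing tag, so they must not affect nesting depth.
VOID = {"area", "base", "br", "col", "embed", "hr", "img", "input",
        "link", "meta", "source", "track", "wbr"}

class IdExtractor(HTMLParser):
    """Capture the inner HTML of the first element with a given id."""

    def __init__(self, target_id):
        super().__init__(convert_charrefs=True)
        self.target_id = target_id
        self.depth = 0       # > 0 while inside the target element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.chunks.append(self.get_starttag_text())
            if tag not in VOID:
                self.depth += 1
        elif dict(attrs).get("id") == self.target_id:
            self.depth = 1   # start capturing children, excluding the wrapper itself

    def handle_startendtag(self, tag, attrs):
        # Self-closing form like <br/>: emit it, but don't touch the depth.
        if self.depth:
            self.chunks.append(self.get_starttag_text())

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1
            if self.depth:   # closing tag of a child, not of the target element
                self.chunks.append(f"</{tag}>")

    def handle_data(self, data):
        if self.depth:
            self.chunks.append(data)

def inner_html(page, target_id):
    """Return the inner HTML of the element whose id matches target_id."""
    parser = IdExtractor(target_id)
    parser.feed(page)
    return "".join(parser.chunks)

page = ('<html><body><nav>menu cruft</nav>'
        '<div id="content"><h1>Title</h1><p>Body <b>text</b>.</p></div>'
        '<footer>more cruft</footer></body></html>')
print(inner_html(page, "content"))  # <h1>Title</h1><p>Body <b>text</b>.</p>
```

The nav and footer never make it into the output, so whatever Markdown converter you run afterward has far less fluff to wade through.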
Now, I would personally argue that’s an issue with those sites. I’m not a big fan of dynamically JS-loaded pages. Sadly, I think that ship has sailed…