Hacker News

ami3466 · last Friday at 8:50 PM

The simplicity is a feature. I avoided headless Chrome because standard fetch tools (and raw DOM dumps) pollute the context with navbars and scripts, wasting tokens. This parser converts pages to clean Markdown for maximum token density.
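Roughly, the conversion step is nothing more than this (a minimal sketch using the turndown library; the actual parser may differ):

```typescript
import TurndownService from "turndown";

// Convert raw HTML into compact Markdown; dropping script/style/nav
// keeps navbars and inline JS out of the model's context entirely.
export function htmlToMarkdown(html: string): string {
  const turndown = new TurndownService({ headingStyle: "atx" });
  turndown.remove(["script", "style", "nav", "footer"]);
  return turndown.turndown(html);
}
```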

Also, treating this as an MCP Resource rather than a Tool means the docs are pinned permanently instead of relying on the model to "decide" to fetch them.
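With the TypeScript MCP SDK, registering the docs as a resource looks roughly like this (the server name, URI scheme, and fetchDocsAsMarkdown helper are placeholders, not the project's actual code):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";

// Hypothetical helper standing in for the fetch-and-convert step above.
declare function fetchDocsAsMarkdown(): Promise<string>;

const server = new McpServer({ name: "docs-server", version: "1.0.0" });

// A resource is advertised to the client up front and can be pinned into
// context by the host, instead of waiting for the model to call a tool.
server.resource("library-docs", "docs://library/readme", async (uri) => ({
  contents: [
    {
      uri: uri.href,
      mimeType: "text/markdown",
      text: await fetchDocsAsMarkdown(),
    },
  ],
}));
```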

Cloudflare Workers handle this perfectly for free (100k reqs/day) without the overhead of managing a dockerized browser instance.
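The Worker itself can be a single fetch handler, something like this sketch (the ./markdown import refers to the converter sketched above and is just an assumed module path):

```typescript
import { htmlToMarkdown } from "./markdown";

export default {
  // Proxy: fetch the target page, convert it, return Markdown.
  async fetch(request: Request): Promise<Response> {
    const target = new URL(request.url).searchParams.get("url");
    if (!target) return new Response("missing ?url= parameter", { status: 400 });

    const upstream = await fetch(target);
    return new Response(htmlToMarkdown(await upstream.text()), {
      headers: { "content-type": "text/markdown; charset=utf-8" },
    });
  },
};
```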


Replies

mbreese · last Friday at 9:03 PM

I like the idea of exposing this as a resource, so you don't have to wait for a tool call. Is using a resource actually faster, though? Doesn't the LLM still have to make a request to the MCP server in both cases? Or is the idea that, because the docs are pinned a priori, you've already retrieved and processed the HTML, so the response comes back faster?

But I do think the lack of a JavaScript loader will be a problem for many sites. In my case, I still run the innerHTML through a Markdown converter to get rid of the extra cruft, and you're right that this helps a lot. It's even better if you can choose which #id element to load: Wikipedia surrounds the main article with a lot of extra material that adds fluff even after MD conversion. But without JS loading, you still won't be able to process a lot of sites in the wild.
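For what it's worth, the selection step can be this small (a sketch with cheerio; the selector is just an example, and I'm assuming Wikipedia's article body lives under #mw-content-text):

```typescript
import * as cheerio from "cheerio";

// Pull out just one element so surrounding navigation and sidebars
// never reach the Markdown converter.
export function extractById(html: string, selector: string): string {
  const $ = cheerio.load(html);
  return $(selector).html() ?? "";
}

// e.g. extractById(pageHtml, "#mw-content-text") for a Wikipedia article
```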

Now, I would personally argue that's an issue with those sites; I'm not a big fan of dynamic, JS-loaded pages. Sadly, I think that ship has sailed…