I thought this was what the web_fetch tool already did? Tools are configured through MCP too, right? So why am I prepending a URL instead of just using the web_fetch tool that already works?
Does this skirt robots.txt, by chance? Not being able to fetch any web page is really bugging me, and I'm hoping to use a better web_fetch that isn't censored. I'm just going to copy/paste the content anyway.
I think the idea here is that web_fetch is restricted to the target site. I might want to expose my documentation through an MCP server (from docs.example.com), but that doesn't mean I want the full web available.
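For illustration, here's a minimal sketch (in TypeScript) of the kind of host check such a server might put in front of every request. The ALLOWED_HOSTS allowlist and the restrictedFetch helper are hypothetical names for this example, the actual MCP tool wiring is omitted, and real servers may enforce the restriction differently.

```typescript
// Sketch of a domain-restricted fetch, as a docs-only MCP tool might
// implement it. ALLOWED_HOSTS and restrictedFetch are hypothetical
// names for this example, not part of any real MCP server.
const ALLOWED_HOSTS = new Set(["docs.example.com"]);

async function restrictedFetch(rawUrl: string): Promise<string> {
  const url = new URL(rawUrl); // throws on malformed input

  // Refuse anything that is not https on an allowlisted host.
  if (url.protocol !== "https:") {
    throw new Error(`blocked: only https is allowed, got ${url.protocol}`);
  }
  if (!ALLOWED_HOSTS.has(url.hostname)) {
    throw new Error(`blocked: ${url.hostname} is not on the allowlist`);
  }

  // Built-in fetch (Node 18+); nothing outside the allowlist is reachable.
  const res = await fetch(url);
  if (!res.ok) {
    throw new Error(`fetch failed: ${res.status} ${res.statusText}`);
  }
  return res.text();
}
```

With a check like that in front of every request, the model gets a fetch tool that only ever reaches the documentation host, rather than a general-purpose web client.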