If I haven't misunderstood you, it doesn't really matter whether it's an endpoint or a (remote) MCP: either someone else is willing to run LLMs to provide a service for you, or they aren't.
A local MCP doesn't come into play here, because it just couldn't offer the same features in this case.
An MCP server usually exposes some functions you can run, possibly with some database interaction behind them.
So when you use it, your coding agent's own LLM decides how to run that code (what to call, which parameters to pass, and so on). Via MCP, the provider doesn't pay any LLM cost; they just offer the code and the endpoint.
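To make that concrete, here's a rough sketch of the kind of tool definition an MCP server advertises (the names and schema here are made up for illustration). This schema, not the implementation, is what lands in the agent's context, and it's the agent's own LLM, paid for by the user, that decides when and how to call it:

```python
import json

# Hypothetical MCP-style tool definition. The provider only serves this
# schema plus the code behind it; all the LLM reasoning about whether to
# call it and with which arguments happens on the agent's side.
tool = {
    "name": "lookup_order",
    "description": "Look up an order by id in the database.",
    "inputSchema": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}

print(json.dumps(tool, indent=2))
```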
But this is usually messy for the coding agent, since it fills up the context. With a skill + API instead, the agent has it easier: there's no code in the context, just how to call the API and what to pass.
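A minimal sketch of the skill + API side, assuming a hypothetical `/summarize` endpoint (the URL and parameter names are placeholders): all the agent needs to carry in context is how to build this request, while everything complex stays server-side.

```python
import json
import urllib.request

# What a skill file would tell the agent: POST to /summarize with
# {"text": ..., "max_words": ...}. The implementation (including any
# internal LLM calls) lives behind the endpoint, out of the context.
def build_request(text: str, max_words: int = 100) -> urllib.request.Request:
    body = json.dumps({"text": text, "max_words": max_words}).encode()
    return urllib.request.Request(
        "https://api.example.com/summarize",  # placeholder endpoint
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("long document ...", max_words=50)
```

The request is only built here, not sent; the point is how little the agent needs to know to use the service.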
With a setup like this, very complex things can happen inside the endpoint without the agent worrying about context rot or having to handle that functionality itself.
But to offer that complex functionality, you often need to call an LLM inside the endpoint, which is a problem if the person offering the MCP service doesn't want to cover LLM costs.
So it does matter whether it's an endpoint or an MCP: the agent can do more complex and robust things when it uses a skill plus HTTP.