Our workflows must be massively different.
I code in 8 languages, regularly, for several open source and industry projects.
I use AI a lot nowadays, but have never ever interacted with an MCP server.
I have no idea what I'm missing. I'm very interested in learning more about what you use it for.
> I have no idea what I'm missing.
The questions I'd ask:
- Do you work in a team context of 10+ engineers?
- Do you all use different agent harnesses?
- Do you need to support the same behavior in ephemeral runtimes (GH Agents in Actions)?
- Do you need to share common "canonical" docs across multiple repos?
- Is it your objective to ensure a higher baseline of quality and output across the eng org?
- Would your workload benefit from telemetry and visibility into tool activation?
If none of those apply, then it's not for you. Server-hosted MCP over streamable HTTP benefits orgs and teams and has virtually no benefit for individuals.

I can't go into specifics about exactly what I'm doing, but I can speak generically:
I have been working on a system using a Fjall datastore in Rust. I haven't found any tools that directly integrate with Fjall, so even getting insight into what data is there, being able to remove it, etc. is hard. So I used https://github.com/modelcontextprotocol/rust-sdk to create a thin CRUD MCP. The AI can use this to create fixtures, check whether things are working how they should, or debug things: e.g. if a query is returning incorrect results, I tell the AI and it can quickly check whether it is a datastore issue or a query-layer issue.
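To make the "thin CRUD MCP" idea concrete, here is a minimal sketch of the tool surface such a server might expose. The commenter's real server uses Fjall and the Rust MCP SDK; in this Python sketch an in-memory dict stands in for the datastore and plain functions stand in for MCP tool handlers. All names (`put`, `get`, `delete`, `scan`) are illustrative assumptions, not taken from either SDK.

```python
# Hypothetical tool handlers for a thin CRUD MCP over a key-value store.
# An in-memory dict stands in for the real Fjall datastore.

store: dict[str, str] = {}

def put(key: str, value: str) -> str:
    """Tool: write a key (e.g. let the AI create test fixtures)."""
    store[key] = value
    return f"ok: wrote {key}"

def get(key: str):
    """Tool: read a key (inspect what data is actually there)."""
    return store.get(key)

def delete(key: str) -> bool:
    """Tool: remove a key (clean up after a test run)."""
    return store.pop(key, None) is not None

def scan(prefix: str) -> list[str]:
    """Tool: list keys by prefix (check whether bad query results
    come from the data itself or from the query layer)."""
    return sorted(k for k in store if k.startswith(prefix))
```

With handlers like these registered as MCP tools, the agent can seed fixtures with `put`, then `scan` and `get` to confirm whether incorrect query results trace back to the datastore or to the query layer above it.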
Another example: I have a simulator that lets me create test entities and exercise my system. The AI with an MCP server is very good at exercising the platform this way. It also lets me interact with it in plain English even when the API surface isn't directly designed for human use: "Create a scenario that lets us exercise the bug we think we just fixed and prove it is fixed, then create other scenarios you think might trigger other bugs or prove our fix is only partial."
One more example: I have an Overmind-style task runner that reads a file, starts up every service in a microservice architecture, can restart them, see their log output, check whether they can communicate with the other services, etc. Not dissimilar to how the AI can use Docker, but without Docker, for max performance during both compilation and usage.
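A rough sketch of what that task runner's tool surface might look like, assuming a Procfile-like spec as the input file (the format Overmind itself reads). The `Runner` class and its methods are hypothetical illustrations of the start/restart/logs tools the agent would call, not the commenter's actual code.

```python
# Hypothetical Overmind-style runner tools: parse a Procfile-like spec,
# then start, restart, or tail each service as a child process.
import subprocess

def parse_procfile(text: str) -> dict[str, str]:
    """Map service name -> command, e.g. 'web: cargo run -p web'."""
    procs = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, cmd = line.partition(":")
        procs[name.strip()] = cmd.strip()
    return procs

class Runner:
    """Tools the agent would call against the running services."""
    def __init__(self, procs: dict[str, str]):
        self.procs = procs
        self.running: dict[str, subprocess.Popen] = {}

    def start(self, name: str) -> None:
        self.running[name] = subprocess.Popen(
            self.procs[name], shell=True,
            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

    def restart(self, name: str) -> None:
        proc = self.running.pop(name, None)
        if proc:
            proc.terminate()
        self.start(name)

    def logs(self, name: str) -> str:
        """Wait for the process and return its combined output."""
        out, _ = self.running[name].communicate()
        return out.decode()
```

Exposed as MCP tools, this gives the agent the same start/restart/read-logs loop it would otherwise get from Docker, minus the container overhead.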
Last example: using off-the-shelf MCP servers for VCS platforms like GitHub or GitLab. The AI can look at issues, update descriptions, comment, and do code review. This is very useful for your own projects, but even more useful for other people's: "Use the MCP tool to see if anyone else is encountering bugs similar to what we just encountered."
It's very similar to the switch from a text editor + command line to having an IDE with a debugger.
the AI gets to do two things:
- expose hidden state
- interact with the app, and see before/after/errors
It gives the LLM more time to verify its own work without you needing to step in. It's also a bit more integration-test-y than unit-test-y.
If you were to add one MCP, make it Playwright or some similar browser-automation MCP. Very little else adds as much value as just being able to control a browser.
Many products provide MCP servers to connect LLMs. For example, I can have Claude examine things through my Ahrefs account without me using the UI, etc.
I've managed to ignore MCP servers for a long time as well, but recently I found myself creating one to help the LLM agents with my local language (Papiamentu) in the dialect I want.
I made a Prolog program that knows the valid words and spelling along with sentence composition rules.
Via the MCP server, a translated text can be verified. If it's not faultless, the agent enters a feedback loop until it is.
The nice thing is that it's implemented once, and I can use it in opencode and Claude without having to explain how to run the Prolog program, etc.
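The verify-and-retry loop above can be sketched as follows. This is an assumed illustration, not the commenter's code: the real checker is a Prolog program behind an MCP server, and the revision step is the LLM itself; here both are stubbed with toy Python functions, and the word list is a made-up two-word lexicon.

```python
# Hypothetical feedback loop: verify a translation, feed the faults back
# to a revision step, and repeat until the check comes back clean.

def verify(text: str, lexicon: set) -> list:
    """Stand-in for the Prolog checker: report words not in the lexicon."""
    return [w for w in text.split() if w not in lexicon]

def translate_with_feedback(draft, revise, lexicon, max_rounds=5):
    """Loop until the verifier finds no faults, or give up."""
    text = draft
    for _ in range(max_rounds):
        faults = verify(text, lexicon)
        if not faults:
            return text
        text = revise(text, faults)  # the LLM's revision step
    raise RuntimeError("could not produce a faultless translation")
```

The point of putting the checker behind MCP is exactly what the comment says: the loop is wired up once, and any agent harness that speaks MCP can drive it without being told how to invoke the Prolog program.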