Thanks. Yeah, Cursor / Claude Code + MCP is powerful. We differentiate mainly on two fronts:
1) Greater accuracy from our specialized tools: Most MCP tools let agents query data or run *ql queries, which overwhelms context windows given the scale of telemetry data. Raw data is also a poor substrate for reasoning - we've designed our tools so that models get data in the right format, enriched with statistical summaries, baselines, and correlation data, letting the LLM focus on reasoning.
2) Product UX: You'll also find that text-based output from general-purpose agents isn't sufficient for this task - our notebook UX gives you a way to visualize the underlying data so you can review the AI's work and build trust in it.
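To make point 1 concrete, here is a minimal sketch of that enrichment idea: instead of dumping raw telemetry into the model's context, collapse it into a statistical summary with a baseline comparison. The function name, field names, and numbers here are hypothetical illustrations, not the product's actual API.

```python
# Hypothetical sketch: compress raw latency telemetry into a compact,
# LLM-friendly summary instead of passing thousands of raw samples.
from statistics import mean, quantiles

def summarize_latencies(samples_ms, baseline_mean_ms):
    """Reduce raw latency samples to summary stats plus a baseline delta."""
    current_mean = mean(samples_ms)
    p95 = quantiles(samples_ms, n=20)[-1]  # 95th percentile cut point
    return {
        "count": len(samples_ms),
        "mean_ms": round(current_mean, 1),
        "p95_ms": round(p95, 1),
        # How far the current mean deviates from the historical baseline
        "vs_baseline_pct": round(
            100 * (current_mean - baseline_mean_ms) / baseline_mean_ms, 1
        ),
    }

# Hundreds of raw samples collapse into a handful of tokens the model
# can actually reason over (one outlier at 900 ms dominates here):
summary = summarize_latencies(
    [120, 135, 150, 900, 140, 160, 155, 130], baseline_mean_ms=140
)
print(summary)
```

The point is token economy: the model sees "mean is ~69% above baseline, p95 is an order of magnitude above the median" in a few dozen tokens, rather than having to infer that from raw rows.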
To be clear, are the main differentiators basically better built-in MCP tools and better UX? Not knocking it, just trying to understand the differences.
I've had incredible success debugging issues just by hooking up the Datadog MCP and giving agents access to it. Claude/Cursor don't seem to have any trouble pulling in the raw data they need in amounts that don't overload their context.
Do you consider this a tool to use alongside something like Cursor cloud agents, or a replacement for them?