I’m Michel, co-founder and CEO of Airbyte (https://airbyte.com/). We’ve spent the last six years building data connectors. Today we're launching Airbyte Agents (https://docs.airbyte.com/ai-agents/), a unified data layer for agents to discover information and take action across operational systems.
Here’s a quick walkthrough: https://www.youtube.com/watch?v=ZosDytyf1fg
As agents move into real workflows, they need access to more tools (e.g. Slack, Salesforce, Linear). That means a ton of API plumbing: authentication, pagination, filtering, schema handling, and matching entities across systems.
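To make the plumbing concrete, here's a minimal sketch of just the pagination piece an agent (or its tool layer) otherwise has to reimplement per vendor. `fetch_page` is a stand-in for a real cursor-paginated API, not any actual vendor endpoint:

```python
# Hypothetical cursor-paginated endpoint: 5 records, 2 per page.
def fetch_page(cursor=None):
    records = [{"id": i} for i in range(5)]
    start = cursor or 0
    page = records[start:start + 2]
    next_cursor = start + 2 if start + 2 < len(records) else None
    return {"data": page, "next_cursor": next_cursor}

def fetch_all():
    """Walk every page, following cursors until exhausted."""
    out, cursor = [], None
    while True:
        resp = fetch_page(cursor)
        out.extend(resp["data"])
        cursor = resp["next_cursor"]
        if cursor is None:
            return out

print(len(fetch_all()))  # 5 records across 3 pages
```

Multiply this by auth refresh, rate limits, and per-vendor schemas, and the surface area adds up fast.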
Most MCPs don’t fix this. They’re thin wrappers over APIs, so agents inherit their weak primitives and still get it wrong most of the time, especially when working across tools.
An even deeper issue is that APIs assume you already know what to query (think endpoints, object IDs, fields), whereas agents usually start one step earlier: they first need to discover what matters before they can even start reasoning.
So we built Airbyte Agents to be a context layer between your agents and all of your data. The core of this is something we call Context Store: a data index optimized for agentic search, populated by our replication connectors. All that work on data connectors over the last six years comes in handy here!
This gives agents a structured way to discover data, while still allowing them to read and write directly to the upstream system when needed.
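Here's a toy illustration of that discover-then-act pattern (this is not the Airbyte SDK; the index and field names are made up): the agent searches a unified index first, and each hit carries an upstream pointer it can use to read or write against the source system directly.

```python
# Toy unified index spanning two systems. In the real product this would be
# populated by replication connectors; here it's a hardcoded list.
INDEX = [
    {"system": "zendesk", "type": "ticket", "id": "T-9",
     "text": "refund overdue", "account": "acme"},
    {"system": "salesforce", "type": "account", "id": "001A",
     "text": "Acme Corp enterprise", "account": "acme"},
]

def discover(query):
    """Agentic search step: find entities that matter before calling any API."""
    return [r for r in INDEX if query in r["text"] or query == r["account"]]

hits = discover("acme")
# Each hit carries (system, id) -- enough to hit the upstream API directly.
pointers = [(h["system"], h["id"]) for h in hits]
print(pointers)
```

The point is the ordering: discovery happens against the index, and only the final read or write touches the upstream system.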
What got us working on this was an insane trace from an agent we were migrating to our new SDK. It was supposed to answer "which customers are at risk of leaving this quarter?" The trace had 47 steps. Most were API calls. The agent first had to find a bunch of accounts, then map them to the right customers, then look for tickets, bla bla... and when the agent finally responded, the answer sounded ok, but was wrong. Not only that, it was excruciatingly slow. So we had to do something about it.
That 47-step agent is one example of a question where Airbyte Agents does particularly well. Other examples:

- "Show me all enterprise deals closing this month with open support tickets."
- "Find every support ticket that doesn't have a GitHub issue opened."
Some of these might sound simple, but the quality of the answer changes dramatically when the agent doesn’t have to assemble all that context at runtime.
Once we had an early version of the product, I spent a weekend building a benchmark harness to see if it worked. Also for fun, I like writing benchmarks :). I compared calling the Airbyte Agent MCP vs calling a bunch of vendor MCPs directly. I tested retrieval and search.
For the sake of simplicity, I used token consumption as the unit of measure. I think that's a good proxy for how well agents are working. A failing agent (like the one that took 47 steps) will churn through lots of tokens while getting nowhere, while a successful one will get straight to the point.
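The metric itself is simple: sum tokens over every step of the agent trace. A rough sketch, with whitespace splitting standing in for a real tokenizer:

```python
# Crude token proxy -- a real harness would use the model's own tokenizer.
def count_tokens(text):
    return len(text.split())

# Illustrative trace steps, not from a real run.
trace = [
    "list accounts in salesforce",
    "map account 001A to zendesk org acme",
    "search open tickets for acme",
]
total = sum(count_tokens(step) for step in trace)
print(total)
```

A 47-step trace full of raw API payloads sums to a very different number than a three-step one, which is exactly what the benchmark surfaces.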
Here's what I found when measuring: for Gong, it used up to 80% fewer tokens than their own MCP, for Zendesk up to 90% fewer, for Linear up to 75%, and for Salesforce up to 16% (Salesforce’s own SOQL does a good job here).
Of course there is the usual obvious bias: we are the builders of what we are benchmarking. So we made the test harness public: https://github.com/airbytehq/airbyte-agents-benchmarks. Feel free to poke at it, and please tell us what you find if you do!
It's still early and some parts are rough, but we wanted to share this with the community asap. We'd love to hear from people building agents:

- Are you indexing data ahead of time, or letting the agent call APIs live?
- How are you matching entities across systems?
Would also love to hear any thoughts, comments, or ideas of how we could make this better, and if there are obvious things we’re missing. For now, we’re excited to keep building!
Looks interesting!
If I'm reading correctly, the indexing (Context Store) is neutral/unopinionated? How does it select fields for indexing?
Have you done any testing on guided indexing, or metadata layers on top of the data? My experience so far on similar work is that getting data in front of an agent isn't enough context to get useful/reliable answers enough of the time. I.e. _what_ you index, and how you signpost for agents, becomes really important (unless your data is super clean I guess). This does look like a good foundation for that kind of tooling though!
This is such a great direction airbyte is taking and congrats on the launch! I think you're better positioned for this opportunity than most people realize, given your reputable brand and your uncanny expertise in ETL. It's honestly a natural progression of airbyte as far as the current AI landscape goes. Kudos to you and the team!
(We use airbyte at my company, although we self-host it.)
I feel like we've been working in parallel here :) We are using PyAirbyte (hi aaronsteers) for our users to connect their data sources to our agents. We originally wanted to use the airbyte white-label platform, but the team said that it was being deprecated. I think this really drives home just how crucial it is to have a clear model for accessing your data, and Airbyte has been great at that for quite a while.
What actions do Agents enable that weren't already available from Airbyte?
Hi Michel, congrats and I have nice memories of working with you on Lafayette Street!! Keep up the good work on airbyte! :)
Just want to call out a couple of nuances in our methodology. In general, we tried our best to do apples-to-apples comparisons where we could, and gave ourselves a discount where we couldn't. Unsurprisingly, it's a challenge to find MCPs for various vendors (which is another reason we are trying to solve this). Here's a video walkthrough of the benchmark harness: https://www.loom.com/share/9d96c8c64c1a4b7fad0356774fc54acc
Where the comparison wasn't valid or not apples-to-apples:
Gong and Zendesk: no official native MCP exists, so we used the most popular community implementations we could find. We were only able to benchmark Gong Search as the Gong MCP does not have a Get tool call.
While our Search testing yielded the same number of records on either path, vendor-specific search implementations mean results aren't identical. Contents are similar in general, so the ratios remain directionally correct.
The general test set:
2 scenarios (Retrieval and Search) across 4 connectors isn’t a huge test set. While we hope to extend this over time, we’ve made the harness public so anyone can contribute in the meantime. Let us know if you find any MCP with better results!
Where the vendor MCP wins or ties:
Salesforce showed the smallest win at 16%. This is primarily because Salesforce, unlike many vendors, uniquely provides great search support out of the box with their SOQL.
We see identical records for Get. As noted, Search returns different record sets with identical counts. Airbyte uses fewer tokens because the Salesforce records contain mandatory metadata (type and url).
Where the vendor MCP is costly to context:
Zendesk is a great example of this. The extreme gap is because the Zendesk MCP (reminder - a community alternative) returns the entire API response in search results. This averages to 9KB per record against our production Zendesk account!
Airbyte’s implementation provides filtering, which allows agents to retrieve the minimal data needed to achieve the outcome, explaining the drastic gap.
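The filtering idea is just field projection: return only what the agent asked for instead of forwarding the full API response. A hypothetical sketch (the record shape is loosely Zendesk-flavored but invented):

```python
# A raw ticket payload with the bulk an agent rarely needs.
raw_ticket = {
    "id": 42, "subject": "refund overdue", "status": "open",
    "description": "long body text the agent did not ask for",
    "via": {"channel": "email"},
    "custom_fields": [{"id": 1, "value": None}] * 20,
}

def project(record, fields):
    """Keep only the requested fields, dropping everything else."""
    return {k: record[k] for k in fields if k in record}

slim = project(raw_ticket, ["id", "subject", "status"])
print(slim)
```

Projecting a 9KB record down to three fields is where most of the token gap against the community Zendesk MCP comes from.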
sounds very similar to what I ended up doing on my internal system - especially anything to do with search - much better to just sync everything to a DB and give the agent access to the DB
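A minimal version of that sync-then-query pattern, using SQLite and made-up records: replicate once, then let the agent run plain SQL instead of stitching API calls together.

```python
import sqlite3

# In-memory DB standing in for the synced warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER, account TEXT, status TEXT)")

# Pretend these rows came from a replication job.
synced = [(1, "acme", "open"), (2, "acme", "closed"), (3, "globex", "open")]
conn.executemany("INSERT INTO tickets VALUES (?, ?, ?)", synced)

# The agent's question becomes one query instead of N API calls.
open_for_acme = conn.execute(
    "SELECT COUNT(*) FROM tickets WHERE account = 'acme' AND status = 'open'"
).fetchone()[0]
print(open_for_acme)  # 1
```

The tradeoff is freshness: a synced copy can lag the source, which is why a read-through path to the upstream API still matters for some questions.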
Did you find that some data model patterns were easier to detect for some LLMs? I am curious about how training might have made some agents better at graph navigation, for instance.
(former employee here) congrats Michel! so glad to see you guys adapting to the AI age so well (and using the crap out of Devin!)
hmm so airbyte agents could serve as a form of MCP gateway, or a key building block of an MCP gateway, which btw is how anthropic uses mcp themselves for all their internal apps https://www.youtube.com/watch?v=CD6R4Wf3jnY&t=1s&pp=0gcJCd4K...
i think my most sad/interesting observation about ai engineers is that many ai apps are super data hungry, but many don't have the necessary data engineering background to even know they need an airbyte or what tradeoffs to make in an etl pipeline. would love a "data engineering for ai engineers" type braindump session from someone from airbyte at AIE (https://ai.engineer/cfp )