Hi HN, we’re Sam, Shane, and Abhi, and we’re building Mastra (https://mastra.ai), an open-source JavaScript SDK for building agents on top of Vercel’s AI SDK.
You can start a Mastra project with `npm create mastra` and create workflow graphs that can suspend/resume, build a RAG pipeline and write evals, give agents memory, create multi-agent workflows, and view it all in a local playground.
Previously, we built Gatsby, the open-source React web framework. Later, we worked on an AI-powered CRM but it felt like we were having to roll all the AI bits (agentic workflows, evals, RAG) ourselves. We also noticed our friends building AI applications suffering from long iteration cycles: they were getting stuck debugging prompts, figuring out why their agents called (or didn’t call) tools, and writing lots of custom memory retrieval logic.
At some point we just looked at each other and were like, why aren't we trying to make this part easier, and decided to work on Mastra.
Demo video: https://www.youtube.com/watch?v=8o_Ejbcw5s8
One thing we heard from folks is that seeing the input/output of every step, of every run of every workflow, is very useful. So we took XState and built a workflow graph primitive on top with OTel tracing. We wrote the APIs to make control flow explicit: `.step()` for branching, `.then()` for chaining, and `.after()` for merging. We also added `.suspend()`/`.resume()` for human-in-the-loop.
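To give a feel for the shape, here's a trimmed-down sketch of a workflow with a human-in-the-loop step. Step bodies are stubbed out, and the constructor and import details are illustrative rather than copy-paste exact:

```ts
import { Step, Workflow } from "@mastra/core";

// Illustrative sketch: step bodies are stubs and the exact constructor/import
// shapes may differ from the current docs.
const fetchTicket = new Step({ id: "fetchTicket", execute: async () => ({ ticket: "..." }) });
const draftReply = new Step({ id: "draftReply", execute: async () => ({ draft: "..." }) });
const checkSentiment = new Step({ id: "checkSentiment", execute: async () => ({ angry: false }) });
const sendReply = new Step({ id: "sendReply", execute: async () => ({ sent: true }) });

const humanApproval = new Step({
  id: "humanApproval",
  // suspend() parks the run until someone resumes it (e.g. after a human signs off)
  execute: async ({ suspend }) => {
    await suspend();
  },
});

const supportFlow = new Workflow({ name: "support-flow" })
  .step(fetchTicket)                    // first branch
  .then(draftReply)                     // chain within that branch
  .step(checkSentiment)                 // second, parallel branch
  .after([draftReply, checkSentiment])  // merge once both branches finish
  .step(humanApproval)                  // human-in-the-loop gate
  .then(sendReply)
  .commit();
```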
We abstracted the main RAG verbs like `.chunk()`, `embed()`, `.upsert()`, `.query()`, and `rerank()` across document types and vector DBs. We shipped an eval runner with evals like completeness and relevance, plus the ability to write your own.
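Strung together, a pipeline reads roughly like this. It's a sketch: `doc`, `embedder`, `store`, and `rerank` below are placeholders for your document, embedding provider, vector DB, and reranker, not exact Mastra signatures:

```ts
// Placeholder declarations so the sketch typechecks; these are stand-ins,
// not exact Mastra signatures.
declare const doc: { chunk(opts: { size: number; overlap?: number }): Promise<{ text: string }[]> };
declare const embedder: { embed(texts: string[]): Promise<number[][]> };
declare const store: {
  upsert(args: { index: string; vectors: number[][]; metadata: unknown[] }): Promise<void>;
  query(args: { index: string; vector: number[]; topK: number }): Promise<{ text: string; score: number }[]>;
};
declare function rerank(
  query: string,
  hits: { text: string; score: number }[]
): Promise<{ text: string; score: number }[]>;

// The verbs, in order:
const chunks = await doc.chunk({ size: 512, overlap: 50 });       // chunk: split into passages
const vectors = await embedder.embed(chunks.map((c) => c.text));  // embed: one vector per chunk
await store.upsert({ index: "kb", vectors, metadata: chunks });   // upsert: write to the vector DB

const question = "How do I reset my password?";
const [queryVector] = await embedder.embed([question]);
const hits = await store.query({ index: "kb", vector: queryVector, topK: 10 }); // query: nearest neighbors
const ranked = await rerank(question, hits);                      // rerank: reorder by relevance
```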
Then we read the MemGPT paper and implemented agent memory on top of AI SDK with a `lastMessages` key, `topK` retrieval, and a `messageRange` for surrounding context (think `grep -C`).
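In config terms that's a handful of knobs, roughly like the following (a simplified sketch; the surrounding config shape is trimmed down and may not match the docs exactly):

```ts
// Simplified sketch of the memory knobs described above.
const memoryConfig = {
  lastMessages: 20,  // always keep the most recent 20 messages in context verbatim
  topK: 4,           // semantically retrieve the 4 most relevant older messages
  messageRange: 2,   // include 2 messages before/after each hit, like `grep -C 2`
};
```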
But we still weren’t sure whether our agents were behaving as expected, so we built a local dev playground that lets you curl agents/workflows, chat with agents, view evals and traces across runs, and iterate on prompts with an assistant. The playground uses a local storage layer powered by libsql (thanks Turso team!) and runs on localhost with `npm run dev` (no Docker).
Mastra agents originally ran inside a Next.js app. But we noticed that AI teams’ development was increasingly decoupled from the rest of their organization, so we made it possible to run Mastra as a standalone endpoint or service as well.
Some things people have been building so far: one user automates support for an iOS app he owns with tens of thousands of paying users. Another bundled Mastra inside an Electron app that ingests aerospace PDFs and outputs CAD diagrams. Another is building WhatsApp bots that let you chat with objects like your house.
We did (for now) adopt an Elastic v2 license. The agent space is pretty new, and we wanted to let users do whatever they want with Mastra but prevent, eg, AWS from grabbing it.
If you want to get started:

- On npm: `npm create mastra@latest`
- GitHub repo: https://github.com/mastra-ai/mastra
- Demo video: https://www.youtube.com/watch?v=8o_Ejbcw5s8
- Our website homepage: https://mastra.ai (includes some nice diagrams and code samples on agents, RAG, and links to examples)
- And our docs: https://mastra.ai/docs
Excited to share Mastra with everyone here – let us know what you think!
Very excited about Mastra! We have a number of Agent-ic things we'll be building at ElectricSQL and Mastra looks like a breath of fresh air.
Also, the team is top-notch: Sam was my co-founder at Gatsby, and I worked closely with Shane and Abhi; I have a ton of confidence in their product & engineering abilities.
This looks awesome! Quick question, are there plans to support SSE MCP servers? I see stdio [0] is supported and I can always run a proxy, but SSE would be awesome.
Happy Mastra user here! It strikes the right balance between letting me build with higher-level abstractions while providing lower-level controls when needed. I looked at a handful of other frameworks before getting started, and the clarity & ease of use of Mastra stood out. Nice work.
Got excited, was hoping to see a repository of Go Agents.
I don’t really understand agents. I just don’t get why we need to pretend we have multiple personalities, especially when they’re all using the same model.
Can anyone please give me a use case that couldn’t be solved with a single API call to a modern LLM (capable of multi-step planning/reasoning) and a proper prompt?
Or is this really just about building the prompt, and giving the LLM closer guidance by splitting into multiple calls?
I’m specifically not asking about function calling.
"By the developers of Gatsby" is a minus, not a plus. It makes me think this is going to be the next abandonware.
I don't want to be that person, but there are hundreds of other similar frameworks doing more or less the same thing. Do you know why? Because writing a framework that orchestrates a number of tools with a model is the easy part. In fact, most of the time you don't even need a framework. All of these frameworks focus on the trivial, and you can tell that simply by browsing the examples section.
This is like 5% of the work. The developer needs to fill the other 95% which involves a lot more things that are strictly outside of scope of the framework.
Congrats on launching. I've noticed that switching prompts between different LLM providers without edits degrades performance. I'm wondering how developers do these "translations", since maybe your eval framework has data on best practices.
I created a similar library for orchestrations, but it’s more explicit and lightweight. https://github.com/langtail/ai-orchestra
Congrats, looks promising! 1. Is it possible to create custom endpoints? I see that several endpoints are created when running “mastra dev”.
2. Related to previous question, since this is node based, is it possible to support websockets?
Does Mastra support libraries of tools for agents, like toolhouse.ai or agentic (https://github.com/transitive-bullshit/agentic)?
This looks really great! How do you make money? Do you charge for deploying these to your platform? I couldn't find anything on pricing.
Congrats on launching! Curious how early the Mastra team thinks people should be thinking about evals and setting up a pipeline for them.
Impressive. Have you seen any success with Mastra being used to build voice agents? Our company has been experimenting with VAPI, which just launched a workflow builder into open beta (https://docs.vapi.ai/workflows), but it has a lot of rough edges.
Interested to learn more about the PDF -> CAD project built on Mastra; can you share a link?
i am very long on TS as the future of agent applications. nice work team
I basically learned everything about how agents work by using Mastra's framework and going through their documentation. The founders are also super hands-on and love to help!
This looks really nice. We've been considering developing something very similar in-house. Are you guys looking at supporting MLC Web LLM, or some other local models?
Do the workflows support voice-to-voice models like OpenAI's realtime? Or if something like that already exists, I'd be curious.
Congrats! This is exactly what the AI world needs. I'm thinking about using Mastra for a class I'm working on with AI Agents.
I thought Kyle Mathews was the creator of Gatsby
Are there any plans to add automatic retries for individual steps (with configurable max attempts and backoff strategy)?
Congrats guys! really excited to try this out!
Super excited to try out the new agent memory features
"You may not provide the software to third parties as a hosted or managed service" - The Elastic v2 license isn't actually open source like your title mentions: "Open-source JS agent framework"
> Mastra uses the Vercel AI SDK
It started off wrong.
Very interesting set of abstractions that address lots of the pain points when building agents, also the team is super eager to help out!
You’re awesome, guys! I had so many problems with LangChain and am very happy since switching to Mastra.
A TypeScript-first AI framework is something that has been missing. How do you work with the AI SDK?
The example from the landing page does not exactly spark joy:
At first glance, this looks like a very awkward way to represent the graph from the picture. And this is just a simple "workflow" (the structure of the graph does not depend on the results of the execution), not an agent.