Hi HN, we're Sam, Shane, and Abhi.
Almost a year ago, we first shared Mastra here (https://news.ycombinator.com/item?id=43103073). It’s kind of fun looking back since we were only a few months into building at the time. The HN community gave us a lot of enthusiasm and some helpful feedback.
Today, we released Mastra 1.0 as a stable release, so we wanted to come back and talk about what’s changed.
If you’re new to Mastra, it's an open-source TypeScript agent framework that also lets you create multi-agent workflows, run evals, inspect in a local studio, and emit observability.
Since our last post, Mastra has grown to over 300k weekly npm downloads and 19.4k GitHub stars. It’s now Apache 2.0 licensed and runs in prod at companies like Replit, PayPal, and Sanity.
Agent development is changing quickly, so we’ve added a lot since February:
- Native model routing: You can access 600+ models from 40+ providers by specifying a model string (e.g., `openai/gpt-5.2-codex`), with TS autocomplete and fallbacks (see the sketch below this list).
- Guardrails: Low-latency input and output processors for prompt injection detection, PII redaction, and content moderation. The tricky thing here was the low-latency part.
- Scorers: An async eval primitive for grading agent outputs. Users were asking how they should do evals, so we wanted scorers to be easy to attach to Mastra agents, runnable in Mastra studio, and able to save results to Mastra storage.
- Plus a few other features like AI tracing (per-call costing for Langfuse, Braintrust, etc.), memory processors, a `.network()` method that turns any agent into a routing agent, and server adapters to integrate Mastra into an existing Express/Hono server.
(That last one took a bit of time, we went down the ESM/CJS bundling rabbithole, ran into lots of monorepo issues, and ultimately opted for a more explicit approach.)
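Roughly what the model-string routing looks like in code (a minimal sketch; exact option names can vary slightly between versions, and the model id is just the example from above):

```ts
// Minimal sketch: an agent configured via a provider/model string
// instead of importing a provider SDK directly.
import { Agent } from "@mastra/core/agent";

export const supportAgent = new Agent({
  name: "support-agent",
  instructions: "You answer support questions concisely.",
  // Mastra resolves the string to the right provider under the hood.
  model: "openai/gpt-5.2-codex",
});
```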
Anyway, we'd love for you to try Mastra out and let us know what you think. You can get started with `npm create mastra@latest`.
We'll be around and happy to answer any questions!
I worked with Mastra for three months and it is awesome. Thank you for making a great product.
One thing to consider: it felt clunky working with workflows and branching logic with non-LLM agents. I have a strong preference for using rules-based logic and heuristics first. That way, if I do need to bring in the big-gun LLM models, I already have the context engineering solved. To me, an agent means anything with agency. After a couple weeks of frustration, I started using my own custom branching workflows.
One reason to use rules: they're free and 10,000x faster, with an LLM agent as a fallback when validation rules don't pass. Instead of running an LLM agent to solve a problem every single time, I can have the LLM write the rules once. The whole thing got messy.
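A minimal sketch of that rules-first, LLM-fallback pattern (plain TypeScript; `validateWithRules` and the `llmAgent` shape are hypothetical stand-ins, not Mastra APIs):

```ts
// Try cheap deterministic rules first; only fall back to an LLM
// agent when the rules can't validate the input.
type Result = { ok: boolean; value?: string; reason?: string };

function validateWithRules(input: string): Result {
  // Free, fast heuristics: regexes, lookups, simple branching.
  if (/^[A-Z]{2}-\d{4}$/.test(input)) return { ok: true, value: input };
  return { ok: false, reason: "did not match the expected ticket format" };
}

async function resolveTicketId(
  input: string,
  llmAgent: { generate: (prompt: string) => Promise<string> },
): Promise<string> {
  const ruled = validateWithRules(input);
  if (ruled.ok) return ruled.value!; // rules handled it, no tokens spent

  // Only now pay for an LLM call.
  return llmAgent.generate(
    `Extract a ticket id of the form XX-1234 from: ${input}`,
  );
}
```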
Otherwise, Mastra is best in class for working with TypeScript.
Mastra looks great!
- How do you compare Mastra with TanStack AI? And/or do you plan to build on top of TanStack AI like you do with the Vercel AI SDK?
- Since there's a Mastra cloud, do you have an idea as to what features will be exclusive to the hosted version?
Ran through the quickstart, created my first agent, "Friendo," that acts as my best friend, and chatted a bit. Nice UI, cool systems. I hope to play with it more and build something, but I'm just not sure what yet.
I've been building with Mastra for a couple of weeks now and loving it, so congratulations on reaching 1.0!
It's built on top of Vercel AI Elements/SDK, and it seems to me that was a good decision.
My mental heuristic is:
Vercel AI SDK = library, low level
Mastra = framework
Then Vercel AI Elements gives you an optional pre-built UI.
However, I read the blog post for the upcoming AI SDK 6.0 release last week, and it seems like it's shifting more towards being a framework as well. What are your thoughts on this? Are these two tools going to align further in the future?
Congratulations! I’m a fan of the publicity work and general out-of-the-box DX! That stuff matters a lot and I’m happy you’re aware.
I wonder: are there any large, general-purpose agent harnesses developed with Mastra? From what I can tell, OpenCode chose not to use it.
A lot of people on here repeat that rolling your own is more powerful than using LangChain or other frameworks, and I wonder how Mastra relates to this sentiment.
We use TypeScript for our entire stack, and it's super dope to see a production-grade framework (with no vendor lock-in) launch!
Congrats on the launch! Someone told me you have an excellent product, but I don't have a need for it yet.
Why should I use this over, say, Strands Agents [1] or Spring AI [2]?
Been using Mastra for some side projects for months and it's just phenomenal. Congrats to the team!
is "from the Gatsby devs" some how supposed to help the credential? Looks like a cool framework regardless of that.
From punch cards to assembly, to C, to modern languages and web frameworks, each generation raised the abstraction. Agentic frameworks are the next one.
> a `.network()` method that turns any agent into a routing agent
say more pls?
You’re not locked into a model, but you likely are locked into a platform. This DX and convenience just shifts where in the stack the lock-in occurs. Not criticizing - just a choice people should be conscious of.
Another useful question to ask: since you’re likely using one of three frontier models anyway, do you believe the Claude Agent SDK will increasingly become the workflow and runtime for agentic work? Or if not Claude itself, will it set the pattern for how the work is executed? If you do, why use a wrapper?
The framework is great, but how are you gonna make real money?
Congratulations on the launch. The landing page looks dope.
Off-topic, but how much is AI used for generating code at your place these days? Curious because we've seen a major shift in the last few months where almost everything is generated - still human-checked, with human quality gates. A big difference compared to last year.
So the ultimate real-life use case of this is having a bubble on your site that you click to chat with a bot?! Most users prefer to chat with an actual human being 99% of the time, or they immediately ask the bot to connect them with one.
> That last one took a bit of time, we went down the ESM/CJS bundling rabbithole, ran into lots of monorepo issues, and ultimately opted for a more explicit approach.
*shudders in Vietnam War flashbacks* Congrats on the launch, guys!!
For those who want an independent third-party endorsement, here's Brex's CTO talking about Mastra in their AI engineering stack: http://latent.space/p/brex