Hey HN, Waleed here. We're building Sim (https://sim.ai/), an open-source visual editor to build agentic workflows. Repo here: https://github.com/simstudioai/sim/. Docs here: https://docs.sim.ai.
You can run Sim locally using Docker, with no execution limits or other restrictions.
We started building Sim almost a year ago after repeatedly troubleshooting why our agents failed in production. Code-first frameworks felt hard to debug because of implicit control flow, and workflow platforms added more overhead than they removed. We wanted granular control and easy observability without piecing everything together ourselves.
We launched Sim [1][2] as a drag-and-drop canvas around 6 months ago. Since then, we've added:
- 138 blocks: Slack, GitHub, Linear, Notion, Supabase, SSH, TTS, SFTP, MongoDB, S3, Pinecone, ...
- Tool calling with granular control: forced, auto
- Agent memory: conversation memory with sliding window support (by last n messages or tokens)
- Trace spans: detailed logging and observability for nested workflows and tool calling
- Native RAG: upload documents, we chunk, embed with pgvector, and expose vector search to agents
- Workflow deployment versioning with rollbacks
- MCP support, Human-in-the-loop block
- Copilot to build workflows using natural language (just shipped a new version that also acts as a superagent and can call into any of your connected services directly, not just build workflows)
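To make the RAG ingestion step above concrete, here's a minimal sketch of document chunking before embedding. This is illustrative only: the chunk size, overlap, and function names are assumptions, not Sim's actual implementation.

```typescript
// Split a document into overlapping chunks before embedding.
// Sizes are illustrative, not Sim's defaults; overlap must be < chunkSize
// or the loop would never advance.
function chunkText(text: string, chunkSize = 500, overlap = 50): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap; // step forward, keeping some shared context
  }
  return chunks;
}
```

Each chunk would then be embedded and stored in a pgvector column, with similarity search exposed to agents as a tool.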
Under the hood, the workflow is a DAG with concurrent execution by default. Nodes run as soon as their dependencies (upstream blocks) are satisfied. Loops (for, forEach, while, do-while) and parallel fan-out/join are also first-class primitives.
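As a toy model of that scheduling rule (this is a sketch with made-up types, not Sim's actual engine): each node awaits its upstream dependencies, so independent branches run concurrently and joins happen naturally.

```typescript
// Dependency-driven concurrent execution over a DAG: a node runs as soon
// as all of its upstream nodes have finished. Hypothetical types.
type DagNode = { id: string; deps: string[]; run: () => Promise<void> };

async function executeDag(nodes: DagNode[]): Promise<string[]> {
  const order: string[] = []; // completion order, for observability
  const started = new Map<string, Promise<void>>();

  const runNode = (n: DagNode): Promise<void> => {
    if (!started.has(n.id)) {
      started.set(n.id, (async () => {
        // Wait for all upstream blocks; siblings run concurrently.
        await Promise.all(
          n.deps.map(d => runNode(nodes.find(x => x.id === d)!))
        );
        await n.run();
        order.push(n.id);
      })());
    }
    return started.get(n.id)!;
  };

  await Promise.all(nodes.map(runNode));
  return order;
}
```

With a diamond graph (a → b, a → c, then b,c → d), `a` always completes first and `d` always completes last, while `b` and `c` overlap.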
Agent blocks are pass-through to the provider. You pick your model (OpenAI, Anthropic, Gemini, Ollama, vLLM), and we pass prompts, tools, and response format directly to the provider API. We normalize response shapes for block interoperability, but we're not adding layers that obscure what's happening.
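For illustration, normalizing a response shape might look like the sketch below. The `NormalizedResponse` type is hypothetical (not Sim's schema); the input shape follows the OpenAI chat completions response format.

```typescript
// Hypothetical common shape that downstream blocks consume; the field
// names here are assumptions, not Sim's actual schema.
type NormalizedResponse = {
  text: string;
  toolCalls: { name: string; args: unknown }[];
};

// Map an OpenAI-style chat completion into the common shape.
function normalizeOpenAI(r: {
  choices: {
    message: {
      content: string | null;
      tool_calls?: { function: { name: string; arguments: string } }[];
    };
  }[];
}): NormalizedResponse {
  const msg = r.choices[0].message;
  return {
    text: msg.content ?? "",
    toolCalls: (msg.tool_calls ?? []).map(t => ({
      name: t.function.name,
      args: JSON.parse(t.function.arguments), // provider sends JSON strings
    })),
  };
}
```

A sibling `normalizeAnthropic` would map content blocks and `tool_use` entries into the same shape, so blocks don't care which provider ran.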
We're currently working on our own MCP server and the ability to deploy workflows as MCP servers. Would love to hear your thoughts and where we should take it next :)
This looks really cool for DIYing workflows, especially since you seem to have a very useful selection of tools!
Did you build your own agent engine? Why not LangGraph?
Say I was building a general agentic chat app with LangGraph in the backend (as it seems to provide a lot of infrastructure for highly reliable and interactive agents, all the way up to a protocol usable by UIs, plus a decent ecosystem, making it very easily extensible). Could I integrate with this for DIY workflows in a high quality fashion (high-precision updates and control)?
Is there a case for switching out LangGraph's backend with Sim (can you build agents of the same quality and complexity - I'm thinking coding agent)? Could it interact with LangGraph agents in a high-quality way so you can tap that ecosystem?
Can I use Sim workflows with my current agent, say, via MCP?
So here is a case that I wanted to implement in n8n a few years ago and it required quite heavy JS blocks:
- I want to check some input - pick one of your 138 blocks
- I want to extract a list of items from that input
- I want to check which items I've encountered before <- that's the key bit
- Do something for the items that have not been encountered before; bonus points for detecting updated and deleted items
- Rinse and repeat
It could be a row added to a CSV file, a new file dropped into a Nextcloud folder, a list of issues pulled from a repo, or an RSS feed (Yahoo! Pipes, what a sweet memory).
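The key "seen before" step could be sketched as a diff of the fresh item list against previously stored state, keyed by id with a content hash for update detection. (Hypothetical types; this is neither n8n's nor Sim's API, just the logic the JS blocks had to implement.)

```typescript
// Diff the current item list against what we stored last run,
// classifying items as added, updated, or deleted.
type Item = { id: string; hash: string };

function diffItems(previous: Item[], current: Item[]) {
  const prev = new Map(previous.map(i => [i.id, i.hash]));
  const curr = new Map(current.map(i => [i.id, i.hash]));
  return {
    added: current.filter(i => !prev.has(i.id)),
    updated: current.filter(i => prev.has(i.id) && prev.get(i.id) !== i.hash),
    deleted: previous.filter(i => !curr.has(i.id)),
  };
}
```

After acting on the diff, you'd persist `current` as the new baseline for the next run, which is exactly the state-between-runs part that workflow tools tend to make awkward.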
How good is the support for such a case in Sim? And did it get better in n8n?
Open source until we get rug-pulled...
> You can run Sim locally using Docker, with no execution limits or other restrictions.
This is big. Thank you.
Excited to try this out. I've been looking at LangFlow and similar tools for doing DAG workflows. Sure, I could prompt, or try an MCP server or a Claude skill for my utility workflows, but those aren't reliably followed, and I want, where possible, to make each AI agent call small, like a function.
This is definitely going to be given a try tomorrow morning. I think first up will be something easy and personal like going through the collection of NPC character sheets in my recent campaign and ensuring all NPCs have the following sections with some content in them, and if not, flagging them for my review.
Development seems pretty rapid; how often do breaking changes force workflow modifications to keep up with the latest versions?
A bit of feedback: the README GIFs are a little too fast; it's hard to tell what exactly is happening.
I wonder how the free, open-source offering here stacks up against n8n's.
Looks interesting. The 12GB minimum RAM requirements seem quite steep though. Why so much?
How does it handle loops? I've often seen workflow builders struggle with that.
What does “n8n” stand for? I’m assuming it’s a shortening of a longer word, like k8s.
This looks really cool, but not being able to use my own LLM endpoints for the Copilot is an instant turn-off.