Is someone at Anthropic reading my AI chats? I literally came to this conclusion a few weeks ago while researching the right framework for building long-running browser agents. I have zero experience with Elixir, but as soon as I looked at its language constructs it just "clicked": this is exactly suited to AI agent frameworks. Another problem it solves immediately: distributed deployment without Kubernetes hell.
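To give a flavor of what clicked, here's a hand-wavy sketch (module, function, and message names are all made up for illustration): one cheap BEAM process per agent session, with clustering built into the runtime rather than bolted on.

```elixir
# Hypothetical sketch: one lightweight BEAM process per agent session.
# Processes cost a few KB each, so one per session scales to millions.
defmodule AgentSession do
  def start(session_id) do
    spawn(fn -> loop(session_id, []) end)
  end

  defp loop(session_id, history) do
    receive do
      {:user_msg, text, reply_to} ->
        # An LLM or tool call would go here; blocking only blocks this session.
        send(reply_to, {:agent_reply, session_id, "echo: " <> text})
        loop(session_id, [text | history])

      :stop ->
        :ok
    end
  end
end

# Usage:
#   pid = AgentSession.start("s1")
#   send(pid, {:user_msg, "hi", self()})
#
# Distribution without Kubernetes: nodes cluster natively.
#   Node.connect(:"agents@10.0.0.2")
#   Node.spawn(:"agents@10.0.0.2", fn -> AgentSession.start("s1") end)
```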
Broadly agree with the author's points, except for this one:
> TypeScript/Node.js: Better concurrency story thanks to the event loop, but still fundamentally single-threaded. Worker threads exist but they're heavyweight OS threads, not 2KB processes. There's no preemptive scheduling: one CPU-bound operation blocks everything.
This can't be a real objection: nearly 100% of the time in an agent framework is spent ... waiting for the agent to respond, or waiting for a tool call to execute. Almost no time is spent in the logic of the framework itself.
Even if you use heavyweight OS threads, I just don't believe this matters.
Now, the other points about hot code swapping ... so true, painfully obvious to those of us who have used Elixir or Erlang.
For instance, OpenClaw: how much easier would "in-place updating" be if the language runtime had been designed with that ability in mind in the first place?
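For those who haven't seen it, the core mechanism is roughly this (a minimal sketch, not OpenClaw's actual code): the VM keeps two versions of a module loaded at once, and a fully qualified tail call jumps to the newest one.

```elixir
# Minimal hot-swap sketch: __MODULE__.loop is a fully qualified call, so
# on the next iteration the process jumps to the newest loaded version
# of the module. In-flight sessions never restart.
defmodule MyAgent do
  def loop(state) do
    receive do
      {:msg, m} ->
        # Recompile this module (e.g. c("my_agent.ex") in IEx, or a
        # release upgrade) and running processes pick up the new code
        # at this fully qualified call on their next message.
        __MODULE__.loop(handle(m, state))
    end
  end

  defp handle(m, state), do: [m | state]
end
```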
I don’t see the point of agent frameworks. Other than durability and checkpoints, how do they help me?
Claude Code already works as an agent that calls tools when necessary, so it’s not clear how an abstraction helps here.
I have been really confused by LangChain and related tech: they seem so bloated without offering me any clear advantage.
I genuinely would like to know what I’m missing.
You Should Just Use Rust
This is all well and good, but Elixir with Phoenix and LiveView is super bloated; you need a real reason to buy into such a monster, something you couldn't do with a simpler stack.
Ackshually...
Erlang didn't introduce the actor model, any more than Java introduced garbage collection. That model was developed by Hewitt et al. in the 1970s, and the Scheme language was created to investigate it (core insights: actors and lambdas boil down to essentially the same thing, and you really don't need much language to support some really abstract concepts).
Erlang was a fantastic implementation of the actor model for an industrial application, and probably proved out the model's utility for large-scale "real" work more than anything else. That and it being fairly semantically close to Scheme are why I like it.
I’ve built fairly large OTP systems in the past, and I think the core claim is directionally right: long-lived, stateful, failure-prone "conversations" map very naturally to Erlang processes plus supervision trees. An agent session is basically a call session with worse latency and more nondeterminism.
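Concretely, the shape I mean looks something like this (a from-memory sketch; module and registry names are illustrative, not from any real codebase):

```elixir
# One GenServer per conversation, registered by session id.
defmodule Conversation do
  use GenServer

  def start_link(session_id),
    do: GenServer.start_link(__MODULE__, session_id, name: via(session_id))

  defp via(id), do: {:via, Registry, {ConvRegistry, id}}

  @impl true
  def init(session_id), do: {:ok, %{id: session_id, history: []}}

  @impl true
  def handle_call({:user_msg, text}, _from, state) do
    # LLM/tool calls go here; a crash takes down only this one session,
    # and the supervisor restarts it without touching its siblings.
    {:reply, :ok, %{state | history: [text | state.history]}}
  end
end

# Supervision tree: a registry for lookup plus a dynamic supervisor.
children = [
  {Registry, keys: :unique, name: ConvRegistry},
  {DynamicSupervisor, name: ConvSupervisor, strategy: :one_for_one}
]
Supervisor.start_link(children, strategy: :one_for_one)
# DynamicSupervisor.start_child(ConvSupervisor, {Conversation, "session-42"})
```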
That said, a lot of current agent workloads are I/O-bound around external APIs. If 95% of the time is spent waiting on OpenAI or Anthropic, the scheduling model matters less than people think. The BEAM’s preemption and per-process GC shine when you have real contention or CPU-heavy work in the same runtime. Many teams quietly push embeddings, parsing, or model hosting to separate services anyway.
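When you do keep CPU-heavy work in the runtime, preemption is what saves you. A toy illustration:

```elixir
# Reduction-based preemption: a CPU-hogging process cannot starve the
# scheduler; it gets suspended after ~4000 reductions and other
# processes still get their time slice.
spawn(fn ->
  # Hot loop that never yields voluntarily.
  Stream.iterate(0, &(&1 + 1)) |> Enum.each(fn _ -> :ok end)
end)

parent = self()
spawn(fn -> send(parent, :still_responsive) end)

receive do
  # Arrives promptly despite the busy loop above; a comparable synchronous
  # loop on the Node.js main thread would block this indefinitely.
  :still_responsive -> IO.puts("latency-sensitive work unaffected")
end
```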
Hot code swapping is genuinely interesting in this context. Updating agent logic without dropping in-flight sessions is non-trivial on most mainstream stacks. In practice, though, many startups are comfortable with draining connections behind a load balancer and calling it a day.
So my take is: if you actually need millions of concurrent, stateful, soft-real-time sessions with strong fault isolation, the BEAM is a very sane default. If you are mostly gluing API calls together for a few thousand users, the runtime differences are less decisive than the surrounding tooling and hiring pool.