Hacker News

Show HN: Pollen – distributed WASM runtime, no control plane, single binary

76 points by sambigeara | last Thursday at 1:15 PM | 39 comments

Comments

m_ramdhan · today at 6:24 PM

Really neat project. The idea of a fully decentralized leaderless WASM runtime is bold. My main question is around failure modes -- how does it handle network segmentation or split-brain scenarios? Does the gossip protocol deal with this gracefully, or is there an eventual consistency aspect that workloads need to be aware of?

dbalatero · today at 1:56 PM

I suspect you have something cool, but I think if you told a clearer example story that solves a real-world problem on the homepage it might alleviate some questions I'm seeing (and also having) in the thread here!

sambigeara · last Thursday at 1:15 PM

Hi everyone, I'm Sam. I started Pollen as an experiment last summer, got carried away, and have landed here.

It's a single Go binary. Install it on every machine you want in the cluster and they self-organise. Topology is derived deterministically from gossiped state, so workloads land where there's capacity, replicas migrate toward demand, and survivors rehost from failed nodes. The mesh is built on ed25519 identity with signed properties; any TCP or UDP service you pin gets mTLS. Connections punch direct between peers where possible, otherwise they relay through mutually accessible nodes.
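The post doesn't say which algorithm Pollen uses to derive topology from gossiped state, but the general idea of deterministic, coordinator-free placement can be sketched with rendezvous (highest-random-weight) hashing in Go; all names here are hypothetical, not Pollen's API:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// pick chooses the node with the highest hash score for a given
// workload key (rendezvous hashing). Any peer holding the same
// membership view computes the same answer independently, so no
// leader or control plane is needed; when a node fails, only the
// workloads it hosted are re-scored onto survivors.
func pick(nodes []string, key string) string {
	var best string
	var bestScore uint64
	for _, n := range nodes {
		h := fnv.New64a()
		h.Write([]byte(n + "/" + key))
		if s := h.Sum64(); s >= bestScore {
			bestScore, best = s, n
		}
	}
	return best
}

func main() {
	nodes := []string{"node-a", "node-b", "node-c"}
	fmt.Println(pick(nodes, "workload-42"))
}
```

Capacity-aware placement would additionally weight each node's score by its gossiped load, but the convergence property is the same: identical inputs, identical placement, everywhere.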

I built it because I'm fascinated by local-first, convergent systems, and because I wanted to see if such systems could flip traditional workload orchestration patterns on their head. I also _despise_ the operational complexity of modern systems and the thousands of bolted-on tools they demand. So I've attempted to make Pollen's ergonomics a primary concern (two-ish commands to a cluster, etc).

It serves busy, live, globally distributed clusters (per the demo), but it's very early days, so don't be surprised by any rough edges!

Very happy to answer anything in the thread!

Cheers.

Docs: https://docs.pln.sh

monster_truck · today at 1:49 PM

This is neat, what does the actual throughput look like though?

Have been hacking on a wasm+webtransport stack for distributed simulation workers and hit the ceiling of one connection/worker per machine pretty quickly. Had to pin adapters/workers to cores to get the latency I was expecting, then needed dedicated tx/rx adapters to eliminate jitter. Some bullshit about interrupt scheduling.

kaoD · today at 1:45 PM

I know the individual words in the description but I'm a bit confused about what this is.

What would I use Pollen for?

I'm not sure I understand the "seed" metaphor.

evacchi · today at 5:06 PM

I am a simple man, I see wazero, I upvote :)

(I am one of the maintainers, interesting work!)

sambigeara · today at 1:54 PM

No idea why this post has picked up traction 2 days later, I’m out and about right now but will endeavour to respond thoughtfully when I’m back at my keyboard later on!

docheinestages · today at 3:35 PM

Even after looking at the homepage and the GitHub README, I don't really understand how this could help.

jitl · today at 1:56 PM

Wow, this is super cool. It almost feels like a DIY pocket-Cloudflare. I’m curious how a WASM binary gets mapped to HTTP endpoints that take JSON, how much of that is Pollen vs Extism? Are the routes encoded in the WASM binary somehow?

esafak · today at 3:57 PM

Did you have any applications in mind when you were designing this? Any weakness in precedents that you wanted to rectify? Are you familiar with Lunatic (https://lunatic.solutions/) and wasmCloud (https://wasmcloud.com/)?

hsaliak · today at 2:03 PM

Using CRDT gossip to inform scaling is a clever idea. You are on to something there. Perhaps extract it as a core library/concept from the runtime? I feel that would be generally useful!
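The convergence property behind CRDT gossip is what makes it attractive as a standalone library: merges commute, so peers agree no matter the order in which gossip arrives. A minimal sketch in Go, assuming a last-writer-wins register (the type and field names are illustrative, not from Pollen):

```go
package main

import "fmt"

// LWW is a last-writer-wins register. Merging keeps the entry with
// the higher logical clock; ties break on the value so that merge
// is commutative and all replicas converge to the same state
// regardless of message ordering.
type LWW struct {
	Value string
	Clock uint64 // Lamport-style logical timestamp
}

func merge(a, b LWW) LWW {
	if b.Clock > a.Clock || (b.Clock == a.Clock && b.Value > a.Value) {
		return b
	}
	return a
}

func main() {
	local := LWW{Value: "replicas=2", Clock: 3}
	remote := LWW{Value: "replicas=5", Clock: 7}
	fmt.Println(merge(local, remote).Value) // the newer write wins
}
```

A scaling signal built this way (e.g. per-node demand counters merged by max) needs no coordinator, which is presumably what makes it a fit for a leaderless runtime.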
