Hacker News

hosh · yesterday at 11:45 PM

It is not so black and white.

The BEAM ecosystem (Erlang, Elixir, Gleam, etc) can do distributed microservices within a monolith.

A single monolith can be deployed in different ways to handle different scalability requirements. For example, one set of pods serving report endpoints, another handling only websocket connections, and the rest covering the remaining endpoints. Those can be scaled independently but released on the same cadence.
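
A hedged sketch of what that might look like in Elixir (the module names, the APP_ROLE variable, and the children are all made up): one release, and each pod decides at boot which slice of the supervision tree to run.

    # Hypothetical: the same monolith release; APP_ROLE picks which children start.
    defmodule MyApp.Application do
      use Application

      @impl true
      def start(_type, _args) do
        children =
          case System.get_env("APP_ROLE", "web") do
            "reports"   -> [MyApp.Repo, MyApp.ReportsEndpoint]
            "websocket" -> [MyApp.Repo, MyApp.SocketEndpoint]
            _           -> [MyApp.Repo, MyApp.WebEndpoint]
          end

        Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
      end
    end

The deployment side is then just separate pod sets pointing at the same image with a different APP_ROLE and replica count.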

There was a long-form article I once read that reasoned through this. Given M code sources, there are N deployables, and it is the delivery system's job to transform M -> N. M reflects how the engineering team(s) work on code, whether that is a monorepo, multiple repos, shared libraries, etc. N is whatever makes sense operationally. By making the M -> N transformation the delivery system's job, you decouple M from N. I don't remember the title of that article anymore. (Maybe someone on the internet remembers.)
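
Concretely (a hedged sketch, assuming an Elixir umbrella with hypothetical apps my_app_web, my_app_reports, and my_app_sockets): one repo (M = 1) can emit several deployables (N = 3) purely from the release configuration.

    # Hypothetical umbrella mix.exs: three releases cut from the same codebase.
    defmodule MyUmbrella.MixProject do
      use Mix.Project

      def project do
        [
          apps_path: "apps",
          version: "0.1.0",
          releases: [
            web:       [applications: [my_app_web: :permanent]],
            reports:   [applications: [my_app_reports: :permanent]],
            websocket: [applications: [my_app_sockets: :permanent]]
          ]
        ]
      end
    end

`MIX_ENV=prod mix release reports` then builds just that deployable; which releases get built and shipped is owned by the delivery pipeline, not by how the code is organized.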


Replies

Nextgrid · yesterday at 11:58 PM

> The BEAM ecosystem (Erlang, Elixir, Gleam, etc) can do distributed microservices within a monolith.

This ain't new. Any language that supports loading modules can give you the organizational benefit of microservices (if you consider it a benefit, that is - very few orgs actually benefit from the separation) while operating like a monolith. Java could do it 20+ years ago: just upload your .war files to an application server.

rdtsc · yesterday at 11:58 PM

Yup, good point on the BEAM. The joke we used when microservices were hot was that the BEAM is already ahead with nano-services: a gen_server is a nice lightweight, isolated process. You can define a callback API wrapper for it and deploy millions of them on a cluster.
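
A minimal sketch of that shape in Elixir terms (Counter and its API are invented for illustration): a GenServer behind a small callback-API wrapper module, which is the unit you can then spawn in huge numbers across a cluster.

    # Hypothetical "nano-service": a tiny, isolated process with a public API.
    defmodule Counter do
      use GenServer

      # Public API wrapper - callers never touch the process internals.
      def start_link(initial \\ 0), do: GenServer.start_link(__MODULE__, initial)
      def increment(pid), do: GenServer.cast(pid, :increment)
      def value(pid), do: GenServer.call(pid, :value)

      # GenServer callbacks
      @impl true
      def init(initial), do: {:ok, initial}

      @impl true
      def handle_cast(:increment, n), do: {:noreply, n + 1}

      @impl true
      def handle_call(:value, _from, n), do: {:reply, n, n}
    end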