
mjr00 · yesterday at 8:09 PM

I feel like this has been beaten to death and this article isn't saying much new. As usual the answer is somewhere in the middle (what the article calls "miniservices"). Ultimately:

1. Full-on microservices, i.e. one independent lambda per request type, is almost never a good idea. It's a meme that caught on because a few engineers at Netflix did it as a joke that nobody else was in on.

2. A full-on monolith, i.e. every developer contributing to the same application code that gets deployed together, does work, but you eventually hit a breaking point as the code ages and the team scales. The difficulty of upgrading core libraries (your ORM, monitoring/alerting, pandas/numpy, etc.) or infrastructure like your Java or Python runtime grows superlinearly with the amount of code, and everything shipping in one deployed artifact makes partial upgrades extremely tricky or outright impossible, depending on the language. On the operational and managerial side, deployments and ownership ("bug happened, who's responsible for fixing it?") eventually get far too complex as the organization scales. These are solvable problems, though, so it's the best approach if you have a less experienced team.

3. If you're implementing any sort of SoA without having done it before, you will fuck it up. Maybe I'm just speaking as a cynical veteran now, but IMO lots of orgs have keen but relatively junior staff leading the charge for services and Kubernetes and whatnot (for mostly selfish resume-driven-development purposes, but that's a separate topic) and end up making critical mistakes. Usually it's some combination of: multiple services sharing a database; not thinking about API versioning (sketched below); not properly separating the domains; and shared libraries that end up requiring synchronized upgrades.
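
To make the API-versioning point concrete, here's a minimal sketch, assuming Flask; the endpoints and payload shapes are made up:

    # Minimal sketch of URL-based API versioning, assuming Flask.
    # Endpoint names and payload shapes are made up.
    from flask import Flask, jsonify

    app = Flask(__name__)

    # v1 keeps serving existing consumers unchanged.
    @app.route("/v1/orders/<int:order_id>")
    def get_order_v1(order_id: int):
        return jsonify({"id": order_id, "total_cents": 1999})

    # v2 changes the contract without breaking v1 callers; both
    # versions ship in the same service until v1 can be retired.
    @app.route("/v2/orders/<int:order_id>")
    def get_order_v2(order_id: int):
        return jsonify({"id": order_id,
                        "total": {"amount": 19.99, "currency": "USD"}})

Skipping this and changing an endpoint in place forces every consumer to upgrade in lockstep, which is exactly the synchronized-upgrade trap again.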

There are a lot of service-oriented footguns that are much harder to unwind than mistakes made in a monolithic app, but in my opinion it's really hard to beat SoA done well with respect to maintainability and operations.


Replies

SatvikBeri · yesterday at 8:13 PM

Re 1: I like Matt Ranney's take on it, where he says microservices are a form of technical debt – they let you deploy faster and more independently in exchange for an overall more complex codebase.

This makes it clear when you might want microservices: you're going through a period of hypergrowth and deployment is a bigger bottleneck than code. This made sense for DoorDash during COVID, but that's a very unusual circumstance.

yowlingcat · yesterday at 8:20 PM

I see a lot of value in spinning up microservices where the database is global across all services (not inside each service), but I struggle to see the value of separate core transactional databases for separate services, unless and until two parts of the organization are almost two separate companies that can't operate as a single org. You lose data integrity, the ability to join, one coherent state of the world, etc.
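
To make the "ability to join" loss concrete, a hypothetical sketch: with a shared database the cross-domain question is one SQL join; with a database per service it becomes API composition in application code (service URLs and field names are invented):

    # Hypothetical sketch: what "losing joins" looks like in practice.
    # With one shared database this is a single query:
    #
    #   SELECT u.email, o.total
    #   FROM users u JOIN orders o ON o.user_id = u.id
    #   WHERE o.created_at > now() - interval '1 day';
    #
    # With a database per service it becomes API composition.
    import requests  # assumes each service exposes an HTTP API

    def recent_orders_with_emails():
        orders = requests.get("http://orders-svc/v1/orders?since=1d").json()
        user_ids = sorted({o["user_id"] for o in orders})
        users = requests.get("http://users-svc/v1/users",
                             params={"ids": ",".join(map(str, user_ids))}).json()
        email_by_id = {u["id"]: u["email"] for u in users}
        # The join now happens in application code, with no transaction
        # and no single coherent snapshot of the world.
        return [(email_by_id.get(o["user_id"]), o["total"]) for o in orders]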

The main time I can see this making sense is when the data access patterns are so different in scale and frequency that they're optimizing for different things and causing resource contention. Even then, my question becomes: do you really need a separate instance of the same kind of DB inside the service, or do you need another global replica, or an instance of a different kind of DB (for example ClickHouse if you've been running Postgres and now need efficient OLAP on large columnar data)?
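
A sketch of that "different kind of DB, not another copy of the same one" idea, assuming Postgres for OLTP and a ClickHouse replica fed from it; the connection strings, table names, and the replication pipeline are all assumptions:

    # Sketch: route by workload instead of giving each service its own
    # copy of the same database. All connection details are placeholders.
    import psycopg2                        # OLTP: row-oriented Postgres
    from clickhouse_driver import Client   # OLAP: columnar ClickHouse

    oltp = psycopg2.connect("dbname=app host=pg-primary")  # hypothetical DSN
    olap = Client("clickhouse-replica")                    # hypothetical host

    def place_order(user_id: int, total_cents: int) -> None:
        # Transactional write goes to the system of record.
        with oltp, oltp.cursor() as cur:
            cur.execute("INSERT INTO orders (user_id, total_cents) "
                        "VALUES (%s, %s)", (user_id, total_cents))

    def revenue_by_day():
        # The big columnar scan goes to the analytics store, which is
        # fed from Postgres (e.g. via CDC), so there is still one
        # coherent state of the world.
        return olap.execute("SELECT toDate(created_at) AS d, "
                            "sum(total_cents) FROM orders GROUP BY d")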

Once you get to this scale, I can see the idea of cell-based architecture [1] making sense -- but even at that point, you're really looking at a multi-dimensionally sharded global persistence store where each cell is functionally isolated to a single slice of the routing space. That makes me question the value of microservices with state bound to the service writ large; I can't really think of a good use case for it.

[1] https://docs.aws.amazon.com/wellarchitected/latest/reducing-...
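
For what it's worth, the routing layer in a cell-based design can be tiny. A toy sketch (the cell hosts and the hashing choice are hypothetical; real routers keep an explicit tenant-to-cell map so cells can be rebalanced):

    # Toy sketch of cell routing: each tenant deterministically lands in
    # one cell, and each cell is a full, isolated copy of the stack.
    # Cell hosts are hypothetical.
    import hashlib

    CELLS = ["cell-1.internal", "cell-2.internal", "cell-3.internal"]

    def cell_for_tenant(tenant_id: str) -> str:
        # Stable hash so the same tenant always routes to the same cell.
        digest = hashlib.sha256(tenant_id.encode()).digest()
        return CELLS[int.from_bytes(digest[:8], "big") % len(CELLS)]

    assert cell_for_tenant("acme") == cell_for_tenant("acme")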
