I would really like to send this article out to all the developers in my small company (only 120+ people, about 40 in dev and test), but the political path has been chosen and the new shiny tech has people entranced.
What we do (physics simulation software) doesn't need all the complexity and software-engineering knowledge that splitting things into microservices requires (in my opinion, as a long-time software developer and tester).
Only have as much complexity as you absolutely need; the old saying "Keep it simple, stupid" still holds a lot of truth.
But the path is set, so I’ll just do my best as an individual contributor for the company and the clients who I work with.
It is not so black and white.
The BEAM ecosystem (Erlang, Elixir, Gleam, etc) can do distributed microservices within a monolith.
A single monolith can be deployed in different ways to handle different scalability requirements. For example, a distinct set of pods responding to endpoints for reports, another set for just websocket connections, and the remaining ones for the rest of the endpoints. Those can be independently scaled but released on the same cadence.
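The "one codebase, several deployment shapes" idea can be sketched minimally: a role flag chooses which route groups a given instance mounts, so the same artifact scales independently per role. All names here are hypothetical:

```python
# One codebase, several deployment roles: a ROLE environment variable
# picks which route groups this particular instance serves.
import os

ROUTE_GROUPS = {
    "reports": ["/reports"],            # pods scaled for heavy report queries
    "websockets": ["/ws"],              # pods scaled for connection count
    "default": ["/users", "/orders", "/health"],  # everything else
}

def routes_for(role: str) -> list[str]:
    """Return the endpoints a pod with this role should mount."""
    return ROUTE_GROUPS.get(role, ROUTE_GROUPS["default"])

role = os.environ.get("ROLE", "default")
print(routes_for(role))
```

Every pod runs the same release; only the ROLE value (and replica count) differs per deployment, which keeps the single release cadence described above.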
There was a long-form article I once read that reasoned through this. Given M code sources, there are N deployables, and it is the delivery system's job to transform M -> N. M is based on how the engineering team(s) work on the code, whether that is a monorepo, multiple repos, shared libraries, etc. N is what makes sense operationally. By making the M -> N transform the delivery system's job, you decouple M and N. I don't remember the title of that article anymore. (Maybe someone on the internet remembers.)
I'm helping a company get out of legacy hell right now. And instead of saying we need microservices, let's start with just a service oriented architecture. That would be a huge step forward.
Most companies should be perfectly fine with a service oriented architecture. When you need microservices, you have made it. That's a sign of a very high level of activity from your users, it's a sign that your product has been successful.
Don't celebrate before you have cause to do so. Keep it simple, stupid.
You need multiple services whenever the scaling requirements of two components of your system are significantly different. That's pretty much it. These are often called microservices, but they don't actually have to be "micro".
On the theme of several other responders:
I don't want microservices; I want an executable. Memory is shared directly, and the IDE and compiler know about the whole system by virtue of it being integrated.
I like Goldilocks services: as big or as small as actually makes sense for your domain and resource considerations, usually with no single-endpoint HTTP services in sight.
It's funny that we've now been having this conversation on HN for at least a decade.
I don't want or need microservices. What I want is for people to stop putting TCP roundtrips in between what would otherwise be simple function calls in a sane universe. I don't want to have to take a graduate-level course on the CAP theorem to clock in and work on whatever "Uber for dogs" nonsense is paying my rent. You almost certainly don't have a scaling problem that necessitates a distributed system, I guarantee it. I have had an average career, and every single time someone shoved a Kubernetes-shaped peg into a server-shaped hole, it's been a shitshow. These systems are slow, expensive, difficult to reason about, and largely unnecessary for most people who handle a few hundred or thousand connections per second (on average. Don't @ me about bursty traffic, I understand how it works).
And in a few days, we're going to get a long thread about how software is slow and broken and terrible, and nobody will connect the dots. Software sucks because the way we build it sucks. I've had the distinct privilege of helping another team support their Kubernetes monstrosity, which shat the bed around double-digit requests per second, and it was a comedy of errors. What should've otherwise just been some Rails or Django application with HTML templating and a database was three or four different Kubernetes pods, using gRPC to poorly and unnecessarily communicate with each other. It went down all. The. Time. And it was a direct result of the unnecessary complexity of Kubernetes and the associated pageantry.
I would also like to remind everyone that Kubernetes isn't doing anything your operating system can't already do, often better. Networking? Your OS does that. Scheduling? Your OS does that. Resource allocation and sandboxing? If your OS is decent, it can absolutely do that. Access control? Yup.
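As a hedged illustration of that point: a plain systemd unit covers much of the same ground with a few directives, no orchestrator involved (the service name and limits here are invented):

```ini
# Hypothetical myapp.service: the OS doing the orchestrator's job.
[Service]
ExecStart=/usr/local/bin/myapp
DynamicUser=yes          ; ephemeral unprivileged user (access control)
MemoryMax=512M           ; cgroup memory ceiling (resource allocation)
CPUQuota=50%             ; cgroup CPU limit (scheduling)
ProtectSystem=strict     ; read-only view of the filesystem (sandboxing)
PrivateTmp=yes           ; isolated /tmp (sandboxing)
Restart=on-failure       ; supervision and restarts, no control plane needed

[Install]
WantedBy=multi-user.target
```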
I can confidently say that 95% of the time, you don't need Kubernetes. For the other 5%, really look deep into your heart and ask yourself if you actually have the engineering problems that distributed systems solve (and if you're okay with the other problems distributed systems cause). I've had five or six jobs now that shoehorned Kubernetes into things, and I can confidently say that the juice ain't worth the squeeze.
I don't want microservices, I think what I really want is self contained WebAssembly modules!
Microservices were an effect of the ZIRP era. You literally have places like Monzo bragging that they have 3 microservices for each engineer.
3-tier architecture proves, time and time again, to be robust for most workloads.
There's one thing I've learned about microservices. If you've ever gone down the path of making them, failing and making them again until they all worked as they should with the desired 9's of uptime, then you'll only want to make them if it's really the right thing to make. It's not worth the effort otherwise.
So no I don't want microservices (again), but sometimes it's still the right thing.
In my opinion, "you need microservices" peaked around 2018-2019... does anyone nowadays think that, apart from when you reach certain limits and specific contexts, they are a good idea?
I feel like this has been beaten to death and this article isn't saying much new. As usual the answer is somewhere in the middle (what the article calls "miniservices"). Ultimately
1. Full-on microservices, i.e. one independent lambda per request type, is a good idea pretty much never. It's a meme that caught on because a few engineers at Netflix did it as a joke that nobody else was in on.
2. Full-on monolith, i.e. every developer contributes to the same application code that gets deployed, does work, but you do eventually reach a breaking point as either the code ages and/or the team scales. The difficulty of upgrading core libraries like your ORM, monitoring/alerting, pandas/numpy, etc, or infrastructure like your Java or Python runtime, grows superlinearly with the amount of code, and everything being in one deployed artifact makes partial upgrades either extremely tricky or impossible depending on the language. On the operational and managerial side, deployments and ownership (i.e. "bug happened, who's responsible for fixing?") eventually get way too complex as your organization scales. These are solvable problems though, so it's the best approach if you have a less experienced team.
3. If you're implementing any sort of SoA without having done it before -- you will fuck it up. Maybe I'm just speaking as a cynical veteran now, but IMO lots of orgs have keen but relatively junior staff leading the charge for services and kubernetes and whatnot (for mostly selfish resume-driven development purposes, but that's a separate topic) and end up making critical mistakes. Usually some combination of: multiple services using a shared database; not thinking about API versioning; not properly separating the domains; using shared libraries that end up requiring synchronized upgrades.
There's a lot of service-oriented footguns that are much harder to unwind than mistakes made in a monolithic app, but it's really hard to beat SoA done well with respect to maintainability and operations, in my opinion.
I found a different benefit to microservices: AI understands them, and context matters. Monolithic apps confuse AI, whereas microservices let it be far more effective.
I think most of the time when small teams say “we should do microservices” what they really mean is “we should try a service oriented architecture.” Especially if you’re doing a monorepo, it becomes fairly routine to make choices around how to consolidate like modules.
For example, I work in a small company with a data processing pipeline that has lots of human in the loop steps. A monolith would work, but a major consideration with it being a small company is cloud cost, and a monolith would mean slow load times in serverless or persistent node costs regardless of traffic. A lot of our processing steps are automated and ephemeral, and across all our customers, the data tends to look like a wavelet passing through the system with an average center of mass mostly orbiting around a given step. A service oriented architecture let us:
- Separate steps into smaller “apps” that run on demand with serverless workers.
- avoid the scaling issues of killing our database with too many concurrent connections by having a single “data service”—essentially organizing all the wires neatly.
- ensure that data access (read/write on information extracted from our core business objects) happens in a unified manner, so that we don’t end up with weird, fucky API versioning.
- for the human in the loop steps, data stops in the job queue at a CRUD app as a notification, where data analysts manually intervene.
A monolith would have been an impedance mismatch for the inherent “assembly line” model here, regardless of dogma and the fact that yes, a monolith could conceivably handle a system like this without as much network traffic.
You could argue that the data service is a microservice. It’s a single service that serves a single use case and guards its database access behind an API. I would reply to any consternation or foreboding due to its presence in a small company by saying “guess what, it works incredibly well for us. Architecture is architecture: the pros and cons will out, just read them and build what works accordingly.”
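As a hedged illustration of that "single data service guarding its database" idea: one module owns the database handle and exposes a narrow, versioned API, so callers never hold their own connections or write their own SQL. Table and function names here are invented, with sqlite standing in for the real store:

```python
# Minimal sketch of a data-service boundary: one owner of the database
# handle, a narrow read/write API, and no SQL leaking into callers.
import sqlite3

_conn = sqlite3.connect(":memory:")  # stands in for the shared store
_conn.execute("CREATE TABLE steps (id INTEGER PRIMARY KEY, state TEXT)")

def record_step(state):
    """v1 write API: the only way other services mutate this table."""
    cur = _conn.execute("INSERT INTO steps (state) VALUES (?)", (state,))
    _conn.commit()
    return cur.lastrowid

def get_step(step_id):
    """v1 read API: returns the step's state, or None if unknown."""
    row = _conn.execute(
        "SELECT state FROM steps WHERE id = ?", (step_id,)
    ).fetchone()
    return row[0] if row else None

step_id = record_step("extracted")
print(get_step(step_id))
```

In the real system those two functions would sit behind an HTTP or RPC endpoint, but the design point is the same: one connection pool, one place where the data-access rules live.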
No
I don’t want or need microservices.
I want just services.
The other problem is that very, very few people actually know how to design a microservice-based architecture. I've worked with half a dozen different teams who claim they're building microservices, but when you look at the system it's just a giant distributed monolith. Most of them are people who worked in legacy code bases, and while they like the idea of microservices, they can't let go of those design patterns. So they do the exact same thing but just put everything behind network calls. Drives me absolutely fucking nuts.
We've removed/merged most of the unnecessary services. The ones left have operational needs to stay separate.
The current hell is x years of undisciplined (in terms of perf and cost) new ORM code being deployed (SQLAlchemy). We do an insane number of read queries per second relative to our usage.
I honestly think the newish devs we have hired don't understand SQL at all. They seem to think of it as some arcane low level thing people used in the 80s/90s.
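For anyone unfamiliar, the classic way an ORM quietly multiplies read queries is the N+1 pattern: fetch a list, then issue one more query per row. A sketch with plain sqlite3 rather than SQLAlchemy, with invented data:

```python
# N+1 reads vs. a single JOIN: same answer, very different query counts.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'ada'), (2, 'lin');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

# The N+1 shape an ORM can hide: 1 query for users, then 1 per user.
queries = 0
totals_slow = {}
users = conn.execute("SELECT id, name FROM users").fetchall()
queries += 1
for uid, name in users:
    row = conn.execute(
        "SELECT SUM(total) FROM orders WHERE user_id = ?", (uid,)
    ).fetchone()
    queries += 1
    totals_slow[name] = row[0]

# The same result in one round trip.
totals_fast = dict(conn.execute("""
    SELECT u.name, SUM(o.total) FROM users u
    JOIN orders o ON o.user_id = u.id GROUP BY u.name
""").fetchall())

print(queries)                      # query count the slow way
print(totals_slow == totals_fast)   # identical results
```

With 2 users this is 3 queries instead of 1; with 10,000 rows it's 10,001, which is how "an insane number of read queries per second" happens without anyone writing bad SQL on purpose.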
Usually no
Another good use case for a microservice: when you would have to change the compute size of your entire monolith just to accommodate one new piece of functionality.
I had an architect bemoan the suggestion that we use a microservice, until he had to begrudgingly back down when he was told that the function we were talking about (running a CLIP model) would mean attaching a GPU to every task instance.
During a major site rewrite, one of my junior cohorts suggested a monolithic re-entrant site... It easily tripled the TPS and halved the response time.
I was stunned... He comes up with this stuff all the time. Thanks Matt.
We watched kernels go from monoliths to micro to hybrid.
And now SaaS is finally making the jump to the last position: hybrid/mini.
I don't want microservices!
What I want is a lightweight infrastructure for macro-services. I want something to handle the user and machine-to-machine authentication (and maybe authorization).
I don't WANT the usual K8s virtual network for that, just an easy-to-use module inside the service itself.
You should be able to spin up everything locally with a single docker compose file.
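A minimal sketch of that "auth as a module inside the service" idea, assuming a shared secret distributed out of band for machine-to-machine calls (all names here are invented):

```python
# In-process machine-to-machine auth: HMAC-signed caller tokens verified
# inside the service itself, no sidecar or virtual network required.
import base64
import hashlib
import hmac

SHARED_KEY = b"example-shared-secret"  # would come from config/secret store

def sign(service_name):
    """Issue a token identifying the calling service."""
    mac = hmac.new(SHARED_KEY, service_name.encode(), hashlib.sha256).digest()
    return service_name + "." + base64.urlsafe_b64encode(mac).decode()

def verify(token):
    """Return the caller's name if the token checks out, else None."""
    name, _, mac = token.partition(".")
    expected = hmac.new(SHARED_KEY, name.encode(), hashlib.sha256).digest()
    if hmac.compare_digest(base64.urlsafe_b64encode(expected).decode(), mac):
        return name
    return None

token = sign("billing-service")
print(verify(token))
print(verify(token + "tampered"))
```

Real deployments would add expiry and key rotation (or just use a JWT library), but the point stands: this is a module inside the service, not a piece of cluster infrastructure.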
The one thing I would like to preserve from microservices is stuff about database table hygiene.
Large, shared database tables have been a huge issue in the last few jobs that I have had, and they are incredibly labor intensive to fix.
While not a complete rebuttal, allow me the following. I manage a team of 4 scrum masters each with 5-6 engineers. We provide services via a user interface we'll call the console, as would be fairly familiar to any B2B or B2C service provider. The backends of this portal are split up by functional area, so we have a compute management service providing CRUD apis for dealing with our compute offerings, a storage service for CRUD on our storage offerings, a network service for interacting with networks etc. all sharing a single, albeit sharded, underlying data store.
My teams pick up a piece of work, check out the code, run the equivalent of docker compose up, and build their feature. They commit to git, merge to dev, then to main, and it runs through a pipeline to deploy. We do this multiple times a day. Doing that with a large monolith that combines all these endpoints into one app wouldn't be hard, but it would add no benefit, only the overhead of four teams frequently working on the same code and needing to rebase and pull in changes rather than shipping simple atomic changes. Each service gets packaged as a container and deployed to ECS Fargate, on a couple of EC2 instances that are realistically a bit oversubscribed if all the containers suddenly got hammered, but 90% of the time they don't, so it's incredibly cost effective.
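That local workflow might look something like this hypothetical compose file (service names, paths, and images are invented to mirror the setup described):

```yaml
# One container per functional area, all sharing a single backing store.
services:
  compute-api:
    build: ./compute
    ports: ["8001:8000"]
  storage-api:
    build: ./storage
    ports: ["8002:8000"]
  network-api:
    build: ./network
    ports: ["8003:8000"]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: dev-only
```

Each team touches only its own service directory, so a change stays atomic, while `docker compose up` still brings up the whole console backend for local testing.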
When I see the frequent discussions around microservices, I always want to comment that if you have a dysfunctional org, no architecture will save you, and if you have a functional org, basically any architecture is fine. But for my cases, I find that miniservices, if you will, domain-driven and sharing a persistence layer, are often a good way to go for a couple of small teams.