
PebblesHD | today at 2:53 AM | 0 replies

While not a complete rebuttal, allow me the following. I manage a team of 4 scrum masters, each with 5-6 engineers. We provide services via a user interface we'll call the console, as would be fairly familiar to any B2B or B2C service provider. The backends of this portal are split up by functional area: a compute management service providing CRUD APIs for our compute offerings, a storage service for CRUD on our storage offerings, a network service for interacting with networks, and so on, all sharing a single, albeit sharded, underlying data store.
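To make that split concrete, the local setup for something like this looks roughly like the compose file below. The service names, port, and Postgres image are illustrative stand-ins, not our actual stack:

    # Purely illustrative compose sketch - names and images are invented,
    # the real services and data store are not specified here.
    services:
      console:                 # the shared UI, talks to the domain services
        build: ./console
        ports: ["8080:8080"]
        depends_on: [compute-svc, storage-svc, network-svc]
      compute-svc:             # CRUD API for compute offerings
        build: ./compute
        environment:
          DATABASE_URL: postgres://app:app@db:5432/app
        depends_on: [db]
      storage-svc:             # CRUD API for storage offerings
        build: ./storage
        environment:
          DATABASE_URL: postgres://app:app@db:5432/app
        depends_on: [db]
      network-svc:             # CRUD API for networks
        build: ./network
        environment:
          DATABASE_URL: postgres://app:app@db:5432/app
        depends_on: [db]
      db:                      # stands in for the shared (sharded) data store
        image: postgres:16
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: app
          POSTGRES_DB: app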

My teams pick up a piece of work, check out the code, run the equivalent of docker compose up, and build their feature. They commit to git, merge to dev, then to main, and it runs through a pipeline to deploy. We do this multiple times a day. Doing the same with a large monolith that combines all these endpoints into one app wouldn't be hard, but it adds no benefit, and it adds the overhead of 4 teams frequently working on the same code and needing to rebase and pull in changes, rather than driving simple atomic changes. Each service gets packaged as a container and deployed to ECS Fargate, on a couple of EC2 instances that would realistically be a bit oversubscribed if all the containers suddenly got hammered, but 90% of the time they don't, so it's incredibly cost effective.
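The per-service pipeline is conceptually something like the sketch below. GitHub Actions syntax is used only as an example, and the role ARN, cluster, and service names are placeholders, not our real setup:

    # Hypothetical pipeline sketch - the actual CI system isn't named above,
    # and all identifiers here are placeholders.
    name: deploy-compute-svc
    on:
      push:
        branches: [main]
    jobs:
      deploy:
        runs-on: ubuntu-latest
        permissions:
          id-token: write      # needed for OIDC role assumption
          contents: read
        steps:
          - uses: actions/checkout@v4
          - uses: aws-actions/configure-aws-credentials@v4
            with:
              role-to-assume: arn:aws:iam::123456789012:role/deploy   # placeholder
              aws-region: us-east-1
          - id: ecr
            uses: aws-actions/amazon-ecr-login@v2
          - name: Build and push image
            run: |
              IMAGE=${{ steps.ecr.outputs.registry }}/compute-svc:latest
              docker build -t "$IMAGE" .
              docker push "$IMAGE"
          - name: Redeploy ECS service   # assumes the task definition pins the :latest tag
            run: |
              aws ecs update-service --cluster console-cluster \
                --service compute-svc --force-new-deployment

The point is that each service gets its own small, boring pipeline, so a team can ship an atomic change without coordinating with the other three.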

When I see the frequent discussions around microservices, I always want to comment that if you have a dysfunctional org, no architecture will save you, and if you have a functional org, basically any architecture is fine. For my cases, though, I find that miniservices, if you will (domain-driven and sharing a persistence layer), are often a good way to go for a couple of small teams.