Hacker News

mjr00 · yesterday at 9:46 PM · 14 replies

> Once the code for all destinations lived in a single repo, they could be merged into a single service. With every destination living in one service, our developer productivity substantially improved. We no longer had to deploy 140+ services for a change to one of the shared libraries. One engineer can deploy the service in a matter of minutes.

If you must deploy every service because of a library change, you don't have services; you have a distributed monolith. The entire idea of a "shared library" that must be kept updated across your entire service fleet is antithetical to how services need to be treated.


Replies

wowohwow · yesterday at 9:54 PM

I think your point, while valid, is probably a lot more nuanced. From the post, this sounds more akin to an Amazon-style shared build and deployment system than an "every library update forces everyone to redeploy" scenario.

It's likely there's a single source of truth that you pull libraries or shared resources from. When team A wants to update the library-latest pointer to 2.0 but the current reference is still 1.0, everyone needs to migrate off of it, otherwise things will break due to backwards incompatibility or whatever.
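
As a sketch of that pointer mechanism (all of these names are invented; nothing here is from the post), think of the registry as a floating tag that every service resolves at build time:

  // Hypothetical floating "library-latest" pointer in a shared registry.
  const registry: Record<string, string> = { "library-latest": "1.0.0" };

  function resolve(dep: string): string {
    // Resolved at build time, so moving the pointer changes what every
    // service builds against on its next deploy.
    return registry[dep] ?? dep;
  }

  console.log(resolve("library-latest")); // "1.0.0"
  registry["library-latest"] = "2.0.0";   // team A bumps the pointer...
  console.log(resolve("library-latest")); // ...and every later build gets 2.0.0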

Likewise, if there's a -need- to remove a version because of a vulnerability or what have you, then everyone needs to redeploy, sure. But the benefit of centralizing this likely outweighs the cost and complexity of tracking the patching and deployment process for each and every service separately.

I would say those systems -are- microservices, and would likely be classified as such, but from a cost and ease-of-operation perspective they run within a shared-services environment. I don't think it's fair to call this style of design decision a distributed monolith.

By that logic, having a single business entity instead of 140 individual business entities, one per service, would also make it a distributed monolith.

andrewmutz · yesterday at 9:56 PM

Needing to upgrade a library everywhere isn’t necessarily a sign of inappropriate coupling.

For example, a library with a security vulnerability would need to be upgraded everywhere regardless of how well you’ve designed your system.

In that example the monolith is much easier to work with.

reactordev · yesterday at 10:55 PM

I was coming here to say this: the whole idea of a shared library couples all those services together. It sounds like someone wanted to be clever and then spread their cleverness all over the platform, dooming all the services together.

Decoupling is the first part of microservices. Pass messages. Use JSON. I shouldn't need your code to function, just your API. Then you can be clever, scale out, and deploy on Saturdays if you want to, and it doesn't disturb the rest of us.
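
A minimal TypeScript sketch of that contract-only coupling, with invented names (none of this comes from the thread): the consumer declares its own view of the message and validates it, importing no code from the producer.

  interface OrderCreated {
    orderId: string;
    amountCents: number;
  }

  // Tolerant reader: check only the fields this service needs and ignore
  // everything else, so the producer can evolve independently.
  function parseOrderCreated(raw: string): OrderCreated | null {
    const msg = JSON.parse(raw) as Partial<OrderCreated>;
    if (typeof msg.orderId !== "string" || typeof msg.amountCents !== "number") {
      return null; // reject rather than crash on an unexpected shape
    }
    return { orderId: msg.orderId, amountCents: msg.amountCents };
  }

The only thing the two services share is the JSON contract itself, which is the decoupling being described.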

necovek · today at 5:05 AM

Imagine your services were built on react-server-* components or used Log4J logging.

This is simply dependency hell exploding with microservices.

mlhpdx · yesterday at 11:44 PM

Agreed. It sounds like they never made it to the distributed architecture they would have benefited from. That said, if the team thrives on a monolithic one, they made the right choice.

ChuckMcM · today at 1:30 AM

While I think that's a bit harsh :-) the sentiment of "if you have these problems, perhaps you don't understand systems architecture" is kind of spot on. I have heard people scoff at a bunch of "dead legacy code" in the Windows APIs (as an example) without understanding the challenge of moving millions of machines, each at different places in the evolution timeline, through to the next step in the timeline.

To use an example from the article, there was this statement: "The split to separate repos allowed us to isolate the destination test suites easily. This isolation allowed the development team to move quickly when maintaining destinations."

This is architecture bleed-through. The format produced by Twilio "should" be the canonical form, which is submitted to the adapter that mangles it into the "destination" form. That transformation is then expressible semantically in a language that takes the canonical form and spits out the special form. Changes to one destination's transformation should not "bleed through" to other destinations, and changes to the canonical form should be backwards compatible, so that changes at the source cannot impact the destinations. At all times, if something worked before, it should continue to work without being touched, because the architecture boundaries are robust.
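
A sketch of that boundary in TypeScript, with hypothetical destination payloads (not the actual ones from the article): the canonical form is owned by the source, each destination owns exactly one transformation, and new canonical fields are optional so existing adapters keep working untouched.

  interface CanonicalEvent {
    userId: string;
    event: string;
    properties: Record<string, unknown>;
    sessionId?: string; // added later; optional, so old adapters still work
  }

  type Adapter = (e: CanonicalEvent) => Record<string, unknown>;

  // Each destination's quirks live entirely inside its own adapter,
  // so a change here cannot bleed through to any other destination.
  const destA: Adapter = (e) => ({
    user_id: e.userId,
    action: e.event.toUpperCase(),
    meta: e.properties,
  });

  const destB: Adapter = (e) => ({
    distinct_id: e.userId,
    event: e.event,
    properties: { ...e.properties, session: e.sessionId ?? "unknown" },
  });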

Being able to work with a team that understood this was common "in the old days" when people were working on an operating system. The operating system would evolve (new features, new devices, new capabilities) but because there was a moat between the OS and applications, people understood that they had to architect things so that the OS changes would not cause applications that currently worked to stop working.

I don't judge Twilio for not doing robust architecture; I was astonished, when I went to work at Google, at how lazy everyone got when the entire system is under their control (since there are no third-party apps running in the fleet). There was a persistent theme of some bright person "deciding" to completely change some interface and, wham, every other group at Google had to stop what they were doing and move their code to the new thing. There was a particularly poorly handled 'mandate' to move to a new version of their RPC system while I was there. As Twilio notes, that can make things untenable.

Aeolun · today at 12:40 AM

Once all the code for the services lived in one repo, there was nothing preventing them from deploying the thing 140 times. I'm not sure why they act like that wasn't an option.

__abc · today at 1:51 AM

So you should rewrite your logging code in each and every one of your 140+ services vs. leveraging a shared module?

philwelch · today at 4:30 AM

If there’s any shared library across all your services, even a third party library, if that library has a security patch you now need to update that shared library across your entire service fleet. Maybe you don’t have that; maybe each service is written in a completely different programming language, uses a different database, and reimplements monitoring in a totally different way. In that case you have completely different problems.

threethirtytwo · yesterday at 11:24 PM

Then every microservice network in existence is a distributed monolith, so long as its services communicate with one another.

If you communicate with one another, you are serializing and deserializing a shared type. That shared type will break at the communication channel if you do not deploy the two services simultaneously. The irony is that to prevent this you have to deploy simultaneously and treat the system as a distributed monolith.

This is the fundamental problem of microservices. Under a monorepo it is somewhat mitigated, because you can at least have type checking and integration tests that span multiple services.

Make no mistake, the world isn't just library dependencies. There are communication dependencies that flow through communication channels, and a microservice architecture by definition has all its services depend on each other through those channels. The logical outcome is virtually identical to a distributed monolith. In fact, shared libraries don't do much damage at all if the versions are off; it is only shared types in the communication channels that break.

There is no way around this unless you have a mechanism for simultaneously merging and deploying code across different repos, which breaks the definition of what it is to be a microservice. Microservices always, and I mean always, share dependencies with everything they communicate with. All the problems that come from shared libraries are intrinsic to microservices, EVEN when you remove shared libraries.
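
A tiny TypeScript illustration of that wire-level coupling, with invented field names: the producer renames a field in an "independent" deploy, and a consumer still compiled against the old shape breaks at runtime even though no library is shared.

  // The consumer's view of the shared wire type (v1):
  interface ChargeV1 {
    amount: number;
  }

  // The producer, deployed independently, now emits v2:
  const wire = JSON.stringify({ amountCents: 1299 });

  const msg = JSON.parse(wire) as ChargeV1;
  console.log(msg.amount * 2); // NaN: the shared type broke at the channel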

People debate me on this but it’s an invariant.

j45 · yesterday at 10:03 PM

Monorepos that are reasonably well designed and flexible enough to grow with you can increase development speed quite a bit.

smrtinsert · yesterday at 10:56 PM

100%. It's almost like they jumped into it not understanding what they were signing up for.

echelon · today at 1:08 AM

> If you must deploy every service because of a library change

Hello engineer. Jira ticket VULN-XXX has been assigned to you as your team's on-call engineer.

A critical vulnerability has been found in the netxyz library. Please deploy service $foo after SHA before 2025-12-14 at 12:00 UTC.

Hello engineer. Jira ticket VULN-XXX has been assigned to you as your team's on-call engineer.

A critical vulnerability has been found in the netxyz library. Please deploy service $bar after SHA before 2025-12-14 at 12:00 UTC.

...

It's never-ending. You get a half dozen of these on every on-call rotation.