Hacker News

vlovich123 last Saturday at 10:36 PM

You can do that, but you keep missing that you're no longer a true microservice as originally defined and envisioned: a service that can be deployed independently, under local control.

Can you imagine if Google could only release a new API if all their customers simultaneously updated to that new API? You need loose coupling between services.

OP is correct that you are now in a weird hybrid: a monolith application that is nominally deployed piecemeal but can't really be deployed that way, because of tightly coupled dependencies.

Be ready for a blog post in ten years about how they broke the monolith apart into loosely coupled components because it was too difficult to ship things with a large team and actually have them land in production without getting reverted due to an unrelated issue.


Replies

GeneralMayhem last Saturday at 11:14 PM

Internal and external have wildly different requirements. Google internally can't update a library unless the update is either backward-compatible for all current users or part of the same change that updates all those users, and that's enforced by the build/test harness. That was an explicit choice, and I think an excellent one, for that scenario: it's more important to be certain that you're done when you move forward, so that it's obvious when a feature no longer needs support, than it is to enable moving faster in "isolation" when you all work for the same company anyway.

But also, you're conflating code and services. There's a huge difference between libraries that are deployed as part of various binaries and those that are used as remote APIs. If you want to update a utility library that's used by importing code, then you don't need simultaneous deployment, but you would like to update everywhere to be done with it - and that's only really possible with a monorepo. If you want to update a remote API without downtime, then you need a multi-phase rollout where you introduce a backward-compatibility mode (sketched below)... but that's true whether you store the code in one place or two.
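A minimal sketch, in Go with hypothetical names, of what that backward-compatibility phase can look like for a remote API: during the transition the server accepts both the legacy request field and its replacement, so old and new callers keep working while they migrate, and the old field is only dropped once every caller has moved over.

    // Hypothetical service sketch; not any particular real API.
    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    // GreetRequest carries both the old field ("user") and its replacement
    // ("user_id"); neither is removed until all callers have migrated.
    type GreetRequest struct {
        User   string `json:"user,omitempty"`    // legacy field, still honored
        UserID string `json:"user_id,omitempty"` // new field, preferred when present
    }

    func greet(w http.ResponseWriter, r *http.Request) {
        var req GreetRequest
        if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
            http.Error(w, "bad request", http.StatusBadRequest)
            return
        }
        // Prefer the new field, fall back to the old one during the rollout.
        id := req.UserID
        if id == "" {
            id = req.User
        }
        json.NewEncoder(w).Encode(map[string]string{"hello": id})
    }

    func main() {
        http.HandleFunc("/greet", greet)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

The same shape works whether client and server live in one repo or two; the monorepo only changes how easily you can find and update the remaining callers of the old field.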

wowohwow last Saturday at 11:19 PM

To reference my other comment: this thread is about the nuance of whether a dependency on a shared software repository means you are a microservice or not. I'm saying it's immaterial to the definition.

A dependency on an external software repository does not make a microservice no longer a microservice. It's the deployment configuration around said dependency that matters.

smaudet last Sunday at 12:25 AM

> Be ready for a blog post in ten years about how they broke the monolith apart into loosely coupled components because it was too difficult to ship things with a large team and actually have them land in production without getting reverted due to an unrelated issue.

For some of their "solutions" I kind of wonder how they plan on resolving this - like the black-box "magic" queue service they subbed back in, or the fault-tolerance problem.

That said, I do think that if you have a monolith that just needs to scale (a single service that has to send to many places), they are possibly taking the correct approach. You can design your code/architecture so that you can deploy "services" separately, in a fault-tolerant manner, but out of a monorepo instead of many independent repos (see the sketch below).
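A minimal sketch of that shape, assuming a Go monorepo with hypothetical names: each "service" is its own binary under cmd/, sharing library code, so it can be built, deployed, and rolled back on its own schedule even though everything lives in one repo.

    // Hypothetical repo layout:
    //
    //   repo/
    //     cmd/api/main.go        -> deployed independently as the "api" service
    //     cmd/notifier/main.go   -> deployed independently as the "notifier" service
    //     internal/queue/        -> shared library code compiled into both
    //
    // cmd/notifier/main.go:
    package main

    import (
        "log"
        "time"
    )

    func main() {
        // Each binary owns its own lifecycle: a bad deploy of "notifier" can be
        // reverted without touching "api", even though both come from one repo.
        for {
            log.Println("notifier: polling for work")
            time.Sleep(10 * time.Second)
        }
    }

Because every binary is still built from the same commit, shared library changes stay atomic in the repo while the deployment schedule stays per-service.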

dmoy last Saturday at 11:11 PM

> Can you imagine if Google could only release a new API if all their customers simultaneously updated to that new API? You need loose coupling between services.

Internal Google services: *sweating profusely*

(Mostly in jest; it's obviously a different ballgame internal to the monorepo on Borg.)