Hacker News

maxdo · yesterday at 10:12 PM

Both approaches can fail. Especially in environments like Node.js or Python, there's a clear limit to how much code an event loop can handle before performance seriously degrades.
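One common reading of the "limit" here is not lines of code but synchronous work per tick: an event loop can only stay responsive if no single callback holds the thread for long. A minimal asyncio sketch (numbers and helper names are illustrative, not from the comment) shows the degradation — a heartbeat task that should fire every ~10 ms stalls while CPU-bound work runs on the loop thread, but stays roughly on schedule when the same work is pushed to a worker thread:

```python
import asyncio
import time

def cpu_bound(n: int) -> int:
    # Synchronous, CPU-bound work: whatever thread runs it is busy until it returns.
    total = 0
    for i in range(n):
        total += i * i
    return total

async def heartbeat(ticks: list) -> None:
    # Records a timestamp every ~10 ms *if* the loop is free to schedule us.
    for _ in range(5):
        ticks.append(time.monotonic())
        await asyncio.sleep(0.01)

async def main():
    # Case 1: run the CPU work directly on the event-loop thread.
    ticks_blocked: list = []
    hb = asyncio.create_task(heartbeat(ticks_blocked))
    await asyncio.sleep(0)        # let the heartbeat record its first tick
    cpu_bound(5_000_000)          # the loop is stuck here; no ticks happen
    await hb
    blocked_gap = max(b - a for a, b in zip(ticks_blocked, ticks_blocked[1:]))

    # Case 2: push the same work to a worker thread; the loop keeps
    # servicing the heartbeat in the meantime.
    ticks_free: list = []
    hb = asyncio.create_task(heartbeat(ticks_free))
    await asyncio.get_running_loop().run_in_executor(None, cpu_bound, 5_000_000)
    await hb
    free_gap = max(b - a for a, b in zip(ticks_free, ticks_free[1:]))
    return blocked_gap, free_gap

blocked_gap, free_gap = asyncio.run(main())
print(f"worst heartbeat gap, loop blocked: {blocked_gap:.3f}s")
print(f"worst heartbeat gap, loop free:    {free_gap:.3f}s")
```

The same shape applies to Node.js: a long synchronous function delays every pending timer and I/O callback, which is why both runtimes push CPU work to worker threads or separate processes.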

I managed a product where a team of 6–8 people handled 200+ microservices. At the same time, I managed teams on another product where 80+ people maintained a monolith.

What did I learn? Both approaches have pros and cons.

With microservices, it's much easier to push isolated changes with just one or two people. At the same time, global changes become significantly harder.

That's the trade-off, and your mental model needs to align with your business logic. If your software solves a tightly connected business problem, microservices probably aren't the right fit.

On the other hand, if you have a multitude of integrations with different lifecycles but a stable internal protocol, microservices can be a lifesaver.

If someone tries to tell you one approach is universally better, they're being dogmatic/religious rather than rational.

Ultimately, it's not about architecture; it's about how you build abstractions and approach testing and decoupling.


Replies

rozap · yesterday at 10:39 PM

To me this rationalization has always felt like duct tape over the real problem, which is that the runtime is poorly suited to what people are trying to do.

These problems are effectively solved on BEAM, the JVM, Rust, Go, etc.

strken · yesterday at 10:24 PM

Can you explain a bit more about what you mean by a limit on how much code an event loop can handle? What's the limit, numerically, and which units does it use? Are you running out of CPU cache?
