People talk about "one change, everywhere, all at once." That's a great way to break production on any API change. If you have a DB and more than two nodes, you will have the old system using the old schema and the new system using the new schema, unless you design for forwards- and backwards-compatible changes. While this is most obvious with a DB schema, it's true of any networked API.
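For concreteness, here's a minimal sketch of the usual expand/contract approach (table and column names are made up), using sqlite:

```python
import sqlite3

# Expand/contract: add the new column as nullable so old code keeps working,
# backfill while both versions are live, and only drop the old column once
# every node reads the new one.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users (name) VALUES ('alice')")

# Expand: old code that only knows about `name` is unaffected.
db.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# Backfill while old and new application code coexist.
db.execute("UPDATE users SET display_name = name WHERE display_name IS NULL")

# Old reader and new reader both work during the rollout window.
old = db.execute("SELECT name FROM users").fetchone()
new = db.execute("SELECT display_name FROM users").fetchone()
print(old[0], new[0])  # alice alice
```

The contract step (dropping `name`) only ships after the last old reader is gone.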
At some point, you will have many teams. And one of them _will not_ be able to validate and accept some upgrade. Maybe a regression causes something only they use to break. Now the entire org is held hostage by the version needs of one team. Yes, this happens at slightly larger orgs. I've seen it many times.
And since you have to design your changes to be backwards compatible already, why not leverage a gradual rollout?
Do you update your app lock-step when AWS updates something? Or when your email service provider expands their API? No, of course not. And you don't have to lock yourself to other teams in your org for the same reason.
Monorepos are hotbeds of cross contamination and reaching beyond API boundaries. Having all the context for AI in one place is hard to beat though.
I’m not sure why you made the logical leap from having all code stored in a single repo to updating/deploying code in lockstep. Where you put your code (the repo) can and should be decoupled from how you deploy changes.
> you will have the old system using the old schema and the new system using the new schema unless you design for forwards-backwards compatible changes
Of course you design changes to be backwards compatible. Even if you have a single node and no networked APIs. Because what if you need to roll back?
> Maybe a regression causes something only they use to break. Now the entire org is held hostage by the version needs of one team.
This is an organizational issue, not a tech issue. Who gives that one team the power to hold back large changes that benefit the entire org? You need a competent director or lead to say no to this kind of hostage situation. You need defined policies that balance the needs of any individual team against those of the entire org. You need to talk and find a mutually accepted middle ground between teams that want new features and teams that want stability and no regressions.
I think I disagree.
We have a monorepo, and we use automated code generation (openapi-generator) to build API clients for each service, derived from an OpenAPI.json generated by the server framework. Service client changes cascade instantly. We have a custom CI job that trawls git and figures out which projects changed (including dependencies) to compute which services need to be rebuilt/redeployed. We may just not be at scale, thank God. We're a small team.
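For what it's worth, the "which services changed" step can be sketched roughly like this (service names and the dependency map are invented; in the real job the paths come from `git diff --name-only`):

```python
# Hypothetical mapping from services to the source roots they depend on,
# including shared libraries. A change under any root triggers a rebuild.
DEPS = {
    "billing": {"services/billing", "libs/auth"},
    "search":  {"services/search"},
}

def services_to_rebuild(changed_paths):
    """Return the services whose own code or dependencies changed."""
    affected = set()
    for svc, roots in DEPS.items():
        if any(p.startswith(root) for p in changed_paths for root in roots):
            affected.add(svc)
    return sorted(affected)

# e.g. paths reported by `git diff --name-only main...HEAD`:
print(services_to_rebuild(["libs/auth/token.py", "docs/readme.md"]))
# ['billing']
```

The real version has to walk the dependency graph transitively, but the shape is the same.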
> Having all the context for AI in one place is hard to beat though.
Seems like a weird workaround, you could just clone multiple repos into a workspace. Agree with all your other points though.
> At some point, you will have many teams. And one of them _will not_ be able to validate and accept some upgrade. Maybe a regression causes something only they use to break. Now the entire org is held hostage by the version needs of one team. Yes, this happens at slightly larger orgs. I've seen it many times.
The alternative of every service being on their own version of libraries and never updating is worse.
atomic updates in particular is one of those things that sounds good to the C-suite, but falls apart extremely badly at the lower levels.
months-long delays on important updates, caused by some large project doing extremely bad things and pushing off a minor refactor endlessly, have been the norm for me. but they're big, so they wield a lot of political power, and they get away with it every time.
or worse, as a library owner: spending INCREDIBLE amounts of time making sure a very minor change is safe, because you can't gradually roll it out to low-risk early adopter teams unless it's feature-flagged to hell and back. and if you missed something, roll back, write a report and say "oops" with far too many words in several meetings, spend a couple weeks triple checking feature flagging actually works like everyone thought (it does not, for at least 27 teams using your project), and then try again. while everyone else working on it is also stuck behind that queue.
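for reference, the gradual rollout I'm describing is not complicated - hash a stable key into a bucket so the same teams stay in the early-adopter cohort across deploys (the key and percentage here are made up):

```python
import hashlib

def flag_enabled(key: str, percent: int) -> bool:
    """Deterministic percentage rollout: hash a stable key (e.g. a team id)
    into [0, 100) and enable the flag for buckets below the threshold."""
    bucket = int(hashlib.sha256(key.encode()).hexdigest(), 16) % 100
    return bucket < percent

# the same team always lands in the same bucket, so ramping 5% -> 25% -> 100%
# only ever *adds* teams, it never flip-flops anyone in and out.
early_adopters = [t for t in ("team-a", "team-b", "team-c") if flag_enabled(t, 10)]
```

the point is you don't need a heavyweight flag system to de-risk a library change - you need the version skew to be a supported state, which monorepo atomic updates explicitly refuse to support.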
monorepos suck imo. they're mostly company lock-in, because they teach people absolutely no skills they'd need in another job (or for contributing to open source - it's a brain drain on the ecosystem), and all external skill is useless because every monorepo is a fractal snowflake of garbage.
You always have this problem; that's why you have a release process for APIs.
And monorepo or not, bad software developers will always run into this issue. Most software will not have "many teams": most software is written by lots of small companies doing niche things. Big software companies with more than one team normally have release managers.
My tip: use architecture unit tests for external-facing APIs. If you are a smaller company, 24/7 doesn't have to be a thing, just communicate that to your customers. But overall, if you run SaaS software and still don't know how to do zero-downtime deployment in 2025/2026, just keep doing whatever you are doing, because man, come on...
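By architecture unit test I mean something like the following sketch (field names invented, not any particular framework's API): assert that the live schema is still a superset of a committed snapshot, so a breaking change fails CI instead of a customer.

```python
# Committed baseline of the external API's response fields, checked into git.
SNAPSHOT = {"id": "integer", "email": "string"}

# What the current code actually serves (in practice: extracted from the
# generated OpenAPI spec). Adding fields is fine; removing or retyping is not.
CURRENT = {"id": "integer", "email": "string", "plan": "string"}

def removed_fields(snapshot, current):
    """Fields a client could rely on that are now missing or retyped."""
    return sorted(k for k in snapshot
                  if k not in current or current[k] != snapshot[k])

assert removed_fields(SNAPSHOT, CURRENT) == [], "breaking API change!"
print("API is backwards compatible")
```

When a break is intentional, updating the snapshot becomes an explicit, reviewable act instead of a silent regression.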
I really have never been able to grasp how people who believe that forward-compatible data schema changes are daunting can ever survive contact with the industry at scale. It's extremely simple to not have this problem. "design for forwards-backwards compatible changes" is what every grown-up adult programmer does.
100%, this is all true and something you have to tackle eventually. Companies like this one (Kasava) can get away with it because, well, they likely don't have very many customers and it doesn't really matter. But when you're operating at a scale where you have international customers relying on your SaaS product 24/7, suddenly deploys having a few minutes of downtime matters.
This isn't to say monorepos are bad, though, but they're clearly naive about some things:
> No sync issues. No "wait, which repo has the current pricing?" No deploy coordination across three teams. Just one change, everywhere, instantly.
It's literally impossible to deploy "one change" simultaneously, even with the simplest n-tier architecture. As you mention, a DB schema is a great example. You physically cannot change a database schema and application code at the exact same time. You either have to ensure backwards compatibility or accept that there will be an outage while old application code runs against a new database, or vice-versa. And the latter works exactly up until an incident where your automated DB migration fails due to unexpected data in production, breaking the deployed code and causing a panic as on-call engineers try to determine whether to fix the migration or roll back the application code to fix the site.
To be a lot more cynical; this is clearly an AI-generated blog post by a fly-by-night OpenAI-wrapper company and I suspect they have few paying customers, if any, and they probably won't exist in 12 months. And when you have few paying customers, any engineering paradigm works, because it simply does not matter.