I think I disagree.
We have a monorepo, and we use automated code generation (openapi-generator) to build API clients for each service from the OpenAPI.json that the server framework generates. Service client changes cascade instantly. We have a custom CI job that trawls git and figures out which projects changed (including dependencies) in order to compute which services need to be rebuilt/redeployed. We may just not be at scale, thank God. We're a small team.
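For the curious, the change-detection step boils down to something like the sketch below. This is a minimal illustration, not our actual job: the project names and the hard-coded dependency map are made up, and in practice that map would come from whatever the build tool already knows.

```python
# Sketch: figure out which projects need a rebuild from a git diff plus a
# dependency map. Project names and DEPENDENCIES are illustrative only.
import subprocess

# Hypothetical layout: each project lives in its own top-level directory,
# and DEPENDENCIES maps a project to the projects it depends on.
DEPENDENCIES = {
    "billing-service": ["shared-models", "billing-client"],
    "web-frontend": ["billing-client", "auth-client"],
    "auth-service": ["shared-models"],
}

def changed_projects(base_ref: str = "origin/main") -> set[str]:
    """Return the top-level directories touched since base_ref."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base_ref}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {path.split("/", 1)[0] for path in out.splitlines() if "/" in path}

def projects_to_rebuild(base_ref: str = "origin/main") -> set[str]:
    """Rebuild a project if it changed or anything it depends on changed."""
    rebuild = set(changed_projects(base_ref))
    # Expand until no new dependents get pulled in (handles transitive deps).
    grew = True
    while grew:
        grew = False
        for project, deps in DEPENDENCIES.items():
            if project not in rebuild and rebuild.intersection(deps):
                rebuild.add(project)
                grew = True
    return rebuild

if __name__ == "__main__":
    print(sorted(projects_to_rebuild()))
```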
Monorepo vs multiple repos isn't really relevant here, though. It's all about how many independently deployed artifacts you have. For example, a very simple modern SaaS app has a database, backend servers, and some kind of frontend that calls the backend servers via API. These three things are deployed independently in different physical places, which means that when you deploy version N, there will be some window where it's interacting with version N-1 of the other components. So you either have to have a way of managing compatibility, or you accept potential downtime. It's just a physical reality of distributed systems.
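One common way of managing that compatibility window is to keep changes additive and make readers tolerant of payloads from the adjacent version. A rough sketch of the idea (the field names and payload shape here are invented for illustration):

```python
# Sketch of a "tolerant reader": the client accepts both version N and
# version N-1 payloads during the deploy window. Field names are made up.
from dataclasses import dataclass

@dataclass
class Invoice:
    id: str
    amount_cents: int
    # Added in version N; version N-1 servers won't send it, so it gets a
    # safe default instead of being required.
    currency: str = "USD"

def parse_invoice(payload: dict) -> Invoice:
    """Ignore unknown keys, default missing ones, so N and N-1 both parse."""
    return Invoice(
        id=payload["id"],
        amount_cents=payload["amount_cents"],
        currency=payload.get("currency", "USD"),
    )

# An N-1 response (no "currency") and an N response (extra "tax_cents",
# which this client simply ignores) both work:
parse_invoice({"id": "inv_1", "amount_cents": 500})
parse_invoice({"id": "inv_1", "amount_cents": 500, "currency": "EUR", "tax_cents": 40})
```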
> We may just not be at scale, thank God. We're a small team.
It's perfectly acceptable for newer companies and small teams to not solve these problems. If you don't have customers who care that your website might go down for a few minutes during a deploy, take advantage of that while you can. I'm not saying that out of arrogance or belittlement or anything; zero-downtime deployments and maintaining backwards compatibility have an engineering cost, and if you don't have to pay that cost, then don't! But you should at least be cognizant that it's an engineering decision you're explicitly making.