The arguments for doing frequent releases partially apply to upgrading dependencies. Upgrading gets harder the longer you put it off, so it's better to do it on a regular schedule: there are fewer changes at once, and the team retains its knowledge of how to do it.
A cooldown is a good idea, though: wait some fixed period after a release before adopting it, so that compromised versions are likely to be caught and pulled before you ever install them.
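If you use an update bot, this is a knob you can set directly. A minimal sketch, assuming Renovate (Dependabot has a similar cooldown setting); the two-week window is an arbitrary choice:

    {
      "extends": ["config:recommended"],
      "minimumReleaseAge": "14 days"
    }

With this in renovate.json, the bot won't propose a new version until it has been public for two weeks, long enough for most compromised releases to be reported and unpublished.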
There's another variable, however: how valuable "engineering time now" is versus "engineering time later."
Certainly, a regular or automated update schedule may take less total clock time (thanks to preserved knowledge and smaller diffs) and incur less long-term risk than deferring updates until a giant, risky, multi-version, multi-dependency bump months or years down the road.
But if you have limited engineering resources (especially at a bootstrapped or cost-conscious company), or if the risk of an outage now is much greater than the risk of an outage later (say, once you're five years in and your engineering team has much broader knowledge), the calculus may very well shift toward freezing now and upgrading later.
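If you do freeze, make it explicit rather than accidental. A minimal sketch for an npm project (assuming npm here only because of the Shai-Hulud context): pin exact versions instead of ^ ranges, and have CI install strictly from the committed lockfile:

    # Record exact versions (1.2.3, not ^1.2.3) whenever a dependency is added
    npm config set save-exact true

    # In CI and deploys, install exactly what package-lock.json records,
    # failing instead of silently resolving anything new
    npm ci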
And supply chain attacks will get far more subtle than Shai-Hulud: AI-generated payloads could evolve as a worm spreads to evade detection, and they may not require build-time scripting at all, instead deferring their behavior until your own code calls them. In that world, macro-level slowness isn't necessarily a bad thing.
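(To make that distinction concrete: disabling lifecycle scripts blocks the install-time vector Shai-Hulud used, but does nothing against a payload that waits to be imported. A one-line sketch, assuming npm:)

    # .npmrc -- refuse to run packages' install/postinstall scripts;
    # this does NOT stop malicious code that runs when your app imports the package
    ignore-scripts=true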
(It should go without saying that if you choose to freeze, you should subscribe to security notification services that tell you when a security update does ship for a core server-side library, particularly for things like SQL injection vulnerabilities, and that your team needs the discipline to prioritize those alerts.)
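That discipline is easier to sustain when the check runs itself on a schedule. A hypothetical sketch, assuming GitHub Actions and npm (the workflow name and cron are made up, and npm audit only covers advisories in the public database):

    # .github/workflows/audit.yml -- weekly check of the frozen dependency set
    name: weekly-audit
    on:
      schedule:
        - cron: "0 6 * * 1"   # Monday mornings
    jobs:
      audit:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: npm ci --ignore-scripts
          - run: npm audit --audit-level=high   # nonzero exit on high/critical advisories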