Statically linked binaries are a huge security problem, as are containers, for the same reason. Vendors are too slow to patch.
When dynamically linking against shared OS libraries, updates are far quicker and easier.
And as for the size advantage, just look at a typical Golang or Haskell program. Statically linked, two-digit megabytes, larger than my libc...
I've heard this many times, and while there might be data out there supporting it, I've never seen any, and my anecdotal experience is more complicated.
In the most security-forward roles I've worked in, the vast, vast majority of vulnerabilities identified in static binaries, Docker images, Flatpaks, Snaps, and VM appliance images fell into these categories:
1. The vendor of a given piece of software based their container image on an outdated version of e.g. Debian, and the vulnerabilities were coming from that, not the software I cared about. This seems like it supports your point, but consider: the overwhelming majority of these required a distro upgrade, rather than a point dependency upgrade of e.g. libcurl or whatnot, to patch the vulnerabilities. Countless times, I took a normal long-lived Debian test VM, tried to upgrade it to the patched version and then install whatever piece of software I was running in a Docker image, and had the upgrade fail in some way (everything from the less-common "doesn't boot" to the very-common "the software I wanted didn't offer a build for the very latest Debian on its website yet, so I was back to hand-building it with all of the dependencies and accumulated cruft that entails").
2. Vulnerabilities that were unpatched or barely patched upstream (as in: a patch had merged but hadn't been baked into released artifacts yet--this applied equally to vulns in things I used directly, and vulns in their underlying OSes).
3. Massive quantities of vulnerabilities reported in "static" languages' standard libraries. Golang is particularly bad here, both because they habitually over-weight the severity of their CVEs and because most of the stdlib is packaged with each Golang binary (at least as far as SBOM scanners are concerned).
That puts me somewhat between a rock and a hard place. A dynamic-link-everything world with e.g. a "libgolang" versioned separately from apps would address the 3rd item in that list, but would make the 1st item worse. "Updates are far quicker and easier" is something of a fantasy in the realm of mainstream Linux distros (or copies of the userlands of those distros packaged into container images); it's certainly easier to mechanically perform an update of dependency components of a distro, but whether or not it actually works is another question.
And I'm not coming at this from a pro-container-all-the-things background. I was a Linux sysadmin long before all this stuff got popular, and it used to be a little easier to do patch cycles and point updates before container/immutable-image-of-userland systems established the convention of depending on extremely specific characteristics of a specific revision of a distro. But it was never truly easy, and isn't easy today.
This is the theory, but not the practice.
In decades of using and managing many kinds of computers, I have seen only a handful of dynamic libraries for which security updates have been useful, e.g. OpenSSL.
On the other hand, I have seen countless problems caused by updates of dynamic libraries that have broken various applications, not only on Linux, but even on Windows and even for Microsoft products such as Visual Studio.
I have also seen a lot of space and time wasted on keeping a great number of versions of the same dynamic library installed on the same system, through various hacks, to satisfy the conflicting requirements of different applications. I have also seen systems bricked by a faulty update of glibc when they did not have any statically-linked rescue programs.
On Windows such problems are much less frequent only because a great number of applications bundle the desired versions of various dynamic libraries with them, in their own directory, and Windows is happy to load those libraries. On UNIX derivatives this usually does not work out of the box, because the dynamic linker searches only the standard library paths unless the executable carries an rpath or LD_LIBRARY_PATH is set.
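For what it's worth, the Windows-style "libraries next to the executable" layout can be approximated on ELF systems by baking an $ORIGIN-relative rpath into the binary; few vendors bother, though. A minimal sketch, assuming gcc and a made-up bundled library libfoo (foo.c containing just "int foo_answer(void) { return 42; }"):

    /* main.c - finds a bundled shared library relative to the
     * executable itself, roughly like a Windows app with DLLs in its
     * own directory.
     *
     * Build (illustrative paths; the shell quoting around $ORIGIN matters):
     *   gcc -shared -fPIC foo.c -o lib/libfoo.so
     *   gcc main.c -o app -Llib -lfoo -Wl,-rpath,'$ORIGIN/lib'
     *
     * At run time the dynamic linker expands $ORIGIN to the directory
     * containing the executable, so ./app finds ./lib/libfoo.so without
     * touching the system library paths or LD_LIBRARY_PATH.
     */
    int foo_answer(void);   /* provided by the bundled libfoo */

    int main(void)
    {
        return foo_answer() == 42 ? 0 : 1;
    }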
Therefore, in my opinion static linking should always be the default, especially for something like the standard C library. Dynamic linking should be reserved for a few special libraries where there are strong arguments that it is beneficial, i.e. where there really exists a need to upgrade the library without upgrading the main executable.
Golang is probably an anomaly. C-based programs are rarely much bigger when statically linked than when dynamically linked. The main exception is "printf", which is typically implemented in such a way that it pulls a lot of code into any statically-linked program, so C standard libraries intended for embedded computers typically provide special lightweight "printf" versions to avoid this overhead.
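If anyone wants to see the effect for themselves, here is a rough sketch, assuming gcc with a static glibc installed; absolute numbers vary a lot between libcs, but the direction is consistent:

    /* hello.c - compare the static footprint of formatted vs.
     * unformatted output.
     *
     * Build both variants and compare:
     *   gcc -Os -static -DUSE_PRINTF hello.c -o hello_printf
     *   gcc -Os -static              hello.c -o hello_puts
     *   size hello_printf hello_puts
     *
     * The printf variant drags in the formatting engine (including
     * floating-point conversion); embedded libcs work around this with
     * reduced versions, e.g. newlib's integer-only iprintf or
     * picolibc's selectable printf levels.
     */
    #include <stdio.h>

    int main(void)
    {
    #ifdef USE_PRINTF
        printf("hello %s, pi is about %f\n", "world", 3.14159);
    #else
        puts("hello world");
    #endif
        return 0;
    }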