There is a lot to blame on the OS side, but Docker/OCI share the blame: they never allowed for permission bounds and pushed every security decision onto the end user.
The open desktop stack is also problematic, but the issue there is more about user land passing the buck across multiple projects, each of which can easily justify its own local decisions.
As an example, if crun shipped reasonable defaults and restricted namespace-incompatible features out of the box, we would be in a better position.
But Docker refused to even let you disable the --privileged flag a decade ago.
There are a bunch of *2() system calls that decided to pass caller-sized structs, which is problematic because a seccomp filter can only see the pointer to the struct, not the flags behind it, and AppArmor is trivial to bypass with LD_PRELOAD, etc.
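To make the struct problem concrete, here is a minimal C sketch of an openat2(2) call (assuming a 5.6+ kernel and recent glibc headers; the path is just an illustration). The policy-relevant fields travel behind a pointer, and a seccomp BPF filter can read syscall arguments only as raw register values, never by dereferencing user memory, so it sees the pointer and the size and nothing else:

    #define _GNU_SOURCE
    #include <fcntl.h>            /* AT_FDCWD, O_RDONLY */
    #include <linux/openat2.h>    /* struct open_how, RESOLVE_* */
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        struct open_how how;
        memset(&how, 0, sizeof(how));          /* caller-sized: zero, then fill */
        how.flags = O_RDONLY;
        how.resolve = RESOLVE_NO_SYMLINKS;     /* the policy lives in here... */

        /* ...but a seccomp filter only sees (AT_FDCWD, path_ptr, &how,
         * sizeof(how)) as register values; &how is opaque to it. */
        long fd = syscall(SYS_openat2, AT_FDCWD, "/etc/hostname", &how,
                          sizeof(how));
        if (fd < 0) { perror("openat2"); return 1; }
        close((int)fd);
        return 0;
    }

This is part of why some sandboxes end up denying the newer variants wholesale rather than filtering their arguments.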
But when you have major projects like llama.cpp running as UID 0 inside the container, there is a lot of hardening that could happen with projects just accepting some shared responsibility.
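As a sketch of that shared responsibility, any daemon started as UID 0 can shed root itself early in main(). This is not what llama.cpp actually does; the nobody-style 65534 ids are a hypothetical choice:

    #define _GNU_SOURCE
    #include <grp.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Drop root before touching untrusted input. Order matters:
     * supplementary groups, then gid, then uid, or the later calls fail. */
    static void drop_root(uid_t uid, gid_t gid)
    {
        if (setgroups(0, NULL) != 0)    { perror("setgroups");  exit(1); }
        if (setresgid(gid, gid, gid) != 0) { perror("setresgid"); exit(1); }
        if (setresuid(uid, uid, uid) != 0) { perror("setresuid"); exit(1); }
        if (geteuid() == 0) {           /* verify the drop actually stuck */
            fprintf(stderr, "still root, refusing to run\n");
            exit(1);
        }
    }

    int main(void)
    {
        drop_root(65534, 65534);        /* hypothetical "nobody" ids */
        printf("serving as uid=%d gid=%d\n", (int)getuid(), (int)getgid());
        return 0;
    }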
Containers are just frameworks for calling kernel primitives; they could be made more secure by dropping more of them.
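A minimal sketch of what "dropping more" looks like at the unshare/prctl level; a runtime could apply all of this by default, and here it rides on an unprivileged user namespace so it runs on distros that allow them:

    #define _GNU_SOURCE
    #include <linux/capability.h>   /* CAP_LAST_CAP */
    #include <sched.h>              /* unshare, CLONE_* */
    #include <stdio.h>
    #include <sys/prctl.h>
    #include <unistd.h>

    int main(void)
    {
        /* A fresh user namespace grants full caps over the new namespaces,
         * so the rest works without starting as root. */
        if (unshare(CLONE_NEWUSER | CLONE_NEWNS |
                    CLONE_NEWUTS  | CLONE_NEWIPC) != 0) {
            perror("unshare");
            return 1;
        }

        /* Empty the capability bounding set: even exec'ing a setuid or
         * file-caps binary can never regain these. EINVAL for caps the
         * running kernel doesn't know is harmless, so errors are ignored. */
        for (int cap = 0; cap <= CAP_LAST_CAP; cap++)
            prctl(PR_CAPBSET_DROP, cap, 0, 0, 0);

        /* no_new_privs: a one-way switch, inherited across fork/exec. */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0)
            perror("PR_SET_NO_NEW_PRIVS");

        printf("unshared, bounding set emptied, no_new_privs on\n");
        return 0;
    }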
But OCI wants to stay simple and just stamps on a couple of SELinux/AppArmor/seccomp profiles, and D-Bus does much the same.
Berkeley sockets do force unsharing of the netns and the like, but dropping privileges is core to what Unix is about.
Network awareness is actually the easier portion, and I guess it would help if the kernel implemented POSIX socket authorization, but when user land isn't even using basic features like uid/gid, no OS would hold up IMHO.
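For the network half, here is a small sketch of what unsharing the netns buys (again assuming unprivileged user namespaces are enabled): the fresh namespace has no routes at all, so outbound traffic simply cannot happen, no LSM policy required. 192.0.2.1 is just a documentation address:

    #define _GNU_SOURCE
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* New user+net namespace: only a down loopback, zero routes. */
        if (unshare(CLONE_NEWUSER | CLONE_NEWNET) != 0) {
            perror("unshare");
            return 1;
        }

        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(80);
        inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);  /* TEST-NET-1 */

        /* Expected to fail with ENETUNREACH: nowhere to route to. */
        if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            perror("connect");
        close(s);
        return 0;
    }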
We need some force that incentivizes security by design and sensible defaults; right now we have whack-a-mole security theater, and strong or frozen-caveman opinions win out.