Okay, I'll ask the dumb question: couldn't you also reduce the number of layers per container? Sure, if you can reuse layers you should, but unless you've done something very clever like 1 package per layer, I struggle to see how 50 layers is really useful.
It's not a dumb question. It seems like with these supposedly high-tech enterprise solutions, so much effort goes into something complex and impressive, like profiling kernel-level operations and chasing down the kernel specifics that cause slowdowns. That same talent could instead go into just writing software, without containers, that can max out any EC2 instance at delivering streamed content, and then you don't have to worry about why your containers take so long to load.
> unless you've done something very clever like 1 package per layer I struggle to think that 50 is really useful?
1 package per layer can actually be quite nice, since it means that any package updates will only affect that layer, meaning that downloading container updates will use much less network bandwidth. This is nice for things like bootc [0] that are deployed on the "edge", but less useful for things deployed in a well-connected server farm.
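As a rough sketch of what one-package-per-layer looks like in a plain Containerfile (package names here are just placeholders; real edge-image setups typically automate this rather than hand-writing it):

```dockerfile
FROM fedora:41

# Each RUN produces its own layer, so bumping one package's version
# only changes that layer's content, and clients that already have
# the other layers cached only download the changed one.
RUN dnf install -y nginx && dnf clean all
RUN dnf install -y postgresql-server && dnf clean all
RUN dnf install -y redis && dnf clean all
```

One caveat: in an ordinary Docker/Podman build, changing an early layer invalidates every layer after it, so naive per-package layering only pays off when the layer ordering is stable. That is part of why OSTree-based tooling like bootc can deliver this kind of incremental update more reliably than a hand-rolled Containerfile.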
[0]: https://bootc-dev.github.io/bootc/