It's not a dumb question. It seems like with these supposedly high-tech enterprise solutions, they burn so much effort on something very complex and impressive, like investigating architecture performance for kernel-level operations and figuring out which kernel specifics are causing slowdowns. That talent could instead just write software without containers that maxes out any EC2 instance at delivering streamed content, and then you don't have to worry about why your containers take so long to load.
Content is not streamed from these containers.
I've seen comments like this quite a bit, but they gloss over a major feature of a large company.
In a large company you can have thousands of developers just coding away at their features without worrying about how any of it runs. You can dislike that, but that's how it goes.
From a company perspective this is preferable, as those developers are supposedly focused on building the things that make the company money. It also allows you to hire people who might be good at that but have no idea how the deployment actually works or how to optimize it. And with all code running in roughly the same way, the operations side gets easier.
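To make the "all code running the same way" point concrete: in a container setup, every service, whatever language it's written in, presents the same interface to the ops side. A hypothetical sketch (the base image and paths are made up for illustration):

```dockerfile
# Hypothetical company-standard base image every team extends the same way.
FROM company-base:latest

# Each team drops its build artifact in the same place...
COPY ./app /app

# ...and ops tooling can assume every container exposes the same port
# and starts via the same entrypoint, regardless of what's inside.
EXPOSE 8080
CMD ["/app/start"]
```

With a convention like this, deploy pipelines, monitoring, and scaling policies are written once and applied to every service, instead of per team.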
When the company grows and you're dealing with thousands of people contributing code, these optimizations might save a lot of money/time. But those savings might be peanuts compared with every 10 devs coming up with their own deployment and the ops overhead of that.