> This difference is particularly noticeable with multiple images sharing the same base layers. With legacy storage drivers, shared base layers were stored once locally and reused across all images that depended on them. With containerd, each image stores its own compressed copy of shared layers, even though the uncompressed layers are still de-duplicated through snapshotters.
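To see why this matters, here is a back-of-the-envelope sketch of the difference. All sizes below are made-up assumptions for illustration, not measurements of any real image:

```python
# Illustrative arithmetic only: sizes are assumed, not measured.
BASE_COMPRESSED_MB = 3000   # e.g. a large ML base layer, compressed
APP_LAYER_MB = 50           # small per-image application layer
NUM_IMAGES = 10             # images all built from the same base

# Legacy graph drivers: the shared compressed base is stored once.
legacy_mb = BASE_COMPRESSED_MB + NUM_IMAGES * APP_LAYER_MB

# containerd image store: each image keeps its own compressed copy
# of the base, though uncompressed snapshots are still shared.
containerd_mb = NUM_IMAGES * (BASE_COMPRESSED_MB + APP_LAYER_MB)

print(legacy_mb)      # 3500
print(containerd_mb)  # 30500
```

Under these assumptions, ten images that differ only in a 50 MB top layer go from ~3.5 GB of compressed storage to ~30 GB.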
This seems like a really weird decision. If the compressed base layers are duplicated for every image you have, that will add up quickly.
I think there is an Issue/PR right now to change this. See: https://github.com/containerd/containerd/issues/13307
Docker already hogs a lot of disk space and needs to be pruned regularly. I can't imagine what it's going to be like now.
This is hell for a lot of ML containers that ship gigabytes of CUDA and PyTorch. Before, you could at least keep your code contained to its own layer. But if I understand this correctly, every code revision now duplicates gigabytes of the same damn bloated base.
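The layering pattern the comment refers to looks roughly like this (the base image tag and paths are placeholders, not a recommendation):

```dockerfile
# Heavy, rarely-changing layers first: with legacy storage drivers
# these were cached and stored once across all images built from them.
FROM pytorch/pytorch:latest

# Dependencies change occasionally, so they get their own layer.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Code changes every revision: only this small final layer is rebuilt.
COPY src/ /app/src
```

With the legacy drivers, rebuilding after a code change only added the small final layer on disk; the concern here is that the containerd store keeps a separate compressed copy of the heavy base layers per image as well.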