My first reaction: 800 GB? Who committed that?! That size alone screams something is wrong. To be fair, even with basic Dockerfiles it's easy to accumulate a lot of junk, but there should be a general size limit in any workflow that simply alerts when something grows out of proportion. We had this happen in our shop just a few weeks ago: a Docker image for an AI training workload grew far too big and nobody was alerted about the final image size. It got committed and pushed to JFrog, and from there the image synced to a lot of machines. It was JFrog who informed us that something was off with the amount of data we were shuffling around. So on the one hand this should never happen, but it clearly ends up in production without warning all too easily.
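Something like the sketch below is all I mean by a size limit: a tiny gate in CI that refuses to push an image past a threshold. This is a hypothetical Python script (the image name, threshold, and script itself are my own illustration, not what either shop actually runs), relying only on the standard `docker image inspect --format '{{.Size}}'` output.

```python
#!/usr/bin/env python3
"""Hypothetical CI gate: fail the build if a Docker image exceeds a size limit."""
import subprocess
import sys

MAX_SIZE_GB = 20  # assumed threshold; tune to your own workloads


def image_size_bytes(image: str) -> int:
    # `docker image inspect` reports the image size in bytes.
    out = subprocess.run(
        ["docker", "image", "inspect", "--format", "{{.Size}}", image],
        check=True, capture_output=True, text=True,
    )
    return int(out.stdout.strip())


if __name__ == "__main__":
    image = sys.argv[1]  # e.g. "myorg/ai-training:latest" (hypothetical name)
    size_gb = image_size_bytes(image) / 1e9
    if size_gb > MAX_SIZE_GB:
        print(f"ERROR: {image} is {size_gb:.1f} GB (limit {MAX_SIZE_GB} GB)")
        sys.exit(1)  # non-zero exit fails the pipeline before the push step
    print(f"OK: {image} is {size_gb:.1f} GB")
```

Run it as a pipeline step right after the build and before the push, and the worst case becomes a failed build instead of an 800 GB image fanning out to every machine.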
Given that JFrog bills on egress for these container images, I'm sure you guys saw an eye-watering bill for the privilege of distributing your bloated container.