I’ve mentioned this before, but at my previous employer we set up staged Artifactory so that production couldn’t pull from anything that hadn’t been through the test stage, and test couldn’t pull from anything that hadn’t been through CI.
Because releases were relatively slow (weekly) compared to other places I worked (continuous), we had a reasonable lead time to have third party packages scanned for vulns before they made it to production.
The setup was very minimal, really just a script to link one stage’s artifacts to the next stage’s repo. But the end effect was production never pulled from the internet and never pulled packages that hadn’t been deployed to the previous stage.
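The gist of such a promotion script, sketched with the JFrog CLI (the repo names, package path, and the `jf rt` invocations here are my assumptions, not the actual script, which may have used the REST API instead):

```shell
#!/bin/sh
# Sketch of a stage-promotion script: copy an artifact from one stage's
# repo into the next stage's repo, refusing anything that never passed
# through the earlier stage. Repo names (ci-npm, test-npm) are made up.
set -eu

PKG="$1"        # artifact path within the repo, passed by the caller
FROM="ci-npm"   # stage the artifact must already have passed through
TO="test-npm"   # stage being promoted to

# Refuse to promote anything that was never in the previous stage.
jf rt search "$FROM/$PKG" | grep -q "$PKG" || {
  echo "refusing: $PKG was never in $FROM" >&2
  exit 1
}

# Server-side copy, so the next stage's repo only ever contains
# artifacts that already existed in the previous stage's repo.
jf rt cp "$FROM/$PKG" "$TO/$PKG"
```

With each stage's package manager pointed only at its own repo, a pull can never reach past the previous stage, let alone the internet.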
Something similar is easy with docker. Build the image when releasing to your first stage env, then deploy that very same image to each subsequent stage until it reaches production. Nothing can change in between, and you get enough time to test.
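One common way to do the build-once flow, sketched below (registry name, tag scheme, and stage names are made up; the point is that later stages only re-tag, never rebuild):

```shell
#!/bin/sh
# Build-once, promote-the-same-image flow. The image is built exactly
# once, at the first stage; every later stage re-tags and deploys the
# identical bytes, so nothing can drift between stages.
set -eu

REG="registry.example.com/myapp"   # made-up registry/image name
SHA="$(git rev-parse --short HEAD)"

# Built once, when releasing to the first stage env.
docker build -t "$REG:$SHA" .
docker push "$REG:$SHA"
docker tag "$REG:$SHA" "$REG:staging"
docker push "$REG:staging"

# Later, promotion to production is just moving a tag -- no rebuild.
docker pull "$REG:$SHA"
docker tag "$REG:$SHA" "$REG:prod"
docker push "$REG:prod"
```

Pinning deployments to the immutable digest (`docker inspect --format '{{index .RepoDigests 0}}'`) rather than a mutable tag makes the guarantee even stronger, but the tag-moving version above is the minimal shape of it.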
Reminds me of my favorite setup for a React project. We had rotating deployed builds (three or more of them) for pull requests; each build's UI pointed at a specific backend so you always had consistent data. Devs could test the UI immediately, and QA could follow right after.
>> But the end effect was production never pulled from the internet
Having production ever pull from the interwebs just seems bonkers to me. Even if you (for some reason?) want to stay up-to-date on every single dependency release, you should be pulling to your own repo with some sort of gated workflow. If you're doing continuous deployment you definitely want extra controls around your external dependencies, and releasing your product quickly after they change is probably the rare exception.