The winning strategy for any CI environment is a build setup that works identically on your machine, your CI's machine, and your test/UAT/production environments, with as few differences between them as your project's requirements allow.
I start with a Makefile. The Makefile drives everything: Docker (compose), CI build steps, linting, and more. Sometimes a project outgrows it; other times it doesn't.
But it starts with one unified tool for triggering work.
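A minimal sketch of what I mean (the `app` compose service and the ruff/pytest tooling are illustrative assumptions, not from any specific project):

    # Makefile: the single entry point. Local devs and CI both call
    # these targets, so the two environments can't silently diverge.
    .PHONY: build lint test up

    build:          # build the application image
    	docker compose build

    lint: build     # run the linter inside the same image CI uses
    	docker compose run --rm app ruff check .

    test: build     # run the test suite inside that image
    	docker compose run --rm app pytest

    up:             # bring up the full stack locally
    	docker compose up -d

The CI config then shrinks to something like `make lint test`, and anything CI does you can reproduce locally with the same command.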
This line of thinking inspired me to write mkincl [0], which makes Makefiles composable and reusable across projects. We're a couple of years into adoption at work and it has proven to be both intuitive and flexible.
I agree, but this is kind of an unachievable dream in medium-to-large projects.
I fought this fight for years at my current job, and early on I kept nagging about the path we were heading down by not letting developers run the full pipeline (or most of it) on their local machines. The project decided otherwise, and now we spend a lot of time and resources on a behemoth of a CI infrastructure, because each MR takes about 10 trial-and-error builds in the pipeline before it's properly tested.
Make is incredibly cursed. My favorite example is its built-in rules (oversimplified: extra Makefile code that is implicitly treated as part of every Makefile) that will extract files from a version control system. https://www.gnu.org/software/make/manual/html_node/Catalogue...
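For the curious, the RCS/SCCS rules from that catalogue look roughly like this (paraphrased from the GNU Make manual; `CO` defaults to `co` and `GET` to `get`):

    # Terminal match-anything rules built into GNU Make (simplified).
    # A bare `make foo` with only foo,v on disk will check foo out of RCS:
    %:: %,v
    	+$(CO) $(COFLAGS) $< $@
    %:: RCS/%,v
    	+$(CO) $(COFLAGS) $< $@
    # And s.foo or SCCS/s.foo triggers an SCCS `get`:
    %:: s.%
    	$(GET) $(GFLAGS) $<
    %:: SCCS/s.%
    	$(GET) $(GFLAGS) $<

You can dump every rule Make silently assumes with `make -p -f /dev/null`, and disable all built-in rules with `make -r`.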
What you're saying is essentially "Just Write Bash Scripts", but with an extra layer of insanity on top. I hate it when I encounter a project like this.