1. Don't use bash; use a scripting language that is more CI-friendly. I strongly prefer pwsh.
2. Don't put logic in your workflows. Workflows should be dumb and simple (KISS) and should just call your scripts (see the sketch after this list).
3. Standalone scripts let you develop, modify, and test locally without getting caught in a push-and-wait loop of hell.
4. Design your entire CI pipeline for easier debugging: put that print statement in, echo out the version of whatever. You don't need it _now_, but your future self will thank you when you do need it.
5. Consider using third-party runners that have better debugging capabilities.
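To make (2) and (4) concrete, here's a minimal GitHub Actions sketch where the workflow stays dumb, echoes tool versions for debugging, and delegates everything else to a script (the file names are made up):

```yaml
# .github/workflows/ci.yml — the workflow only prints versions and calls a script
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Print tool versions (debugging aid)
        run: |
          pwsh --version
          git --version
      - name: Build and test
        run: pwsh ./ci/build.ps1
```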
I don't agree with (1), but agree with (2). I recommend just putting a Makefile in the repo, giving it CI targets, and calling those from CI via a simple `make ci-test` or similar. And don't make the Makefiles overcomplicated.
Of course, if you use something else as a task runner, that works as well.
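Something like this, kept deliberately thin (the script paths are placeholders; recipes must be tab-indented):

```makefile
# Thin CI entry points; CI runs `make ci-test`, and so can you locally.
.PHONY: build test ci-test

build:
	pwsh ./ci/build.ps1

test: build
	pwsh ./ci/test.ps1

ci-test: test
```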
I was once hired to manage a build farm. All of the build jobs were huge pipelines of Jenkins plugins that did various things in various orders. It was a freaking nightmare. Never again. Since then, every CI setup I’ve touched is a wrapper around “make build” or similar, with all the smarts living in Git next to the code it was building. I’ll die on this hill.
#2 is not a slam dunk because the CI system loses insight into your build process if you just use one big script.
Does anyone have a way to mark script sections as separate build steps with defined artifacts? Would be nice to just have scripts with something like:
BeginStep("Step Name")
...
EndStep("Step Name", artifacts)
They could no-op on local runs but be reflected in GitHub/GitLab as separate steps/stages, and allow resumes, retries, and such. As it stands, there's no way to really have CI/CD run the exact same scripts locally and still get all the insights and functionality. I haven't seen anything like that, but it would be nice to know.
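The closest thing I know of is log grouping: GitHub Actions has `::group::`/`::endgroup::` workflow commands (GitLab has similar collapsible section markers), which fold log output but don't give you real retryable steps or artifact boundaries. A rough Deno TypeScript sketch that no-ops locally (the helper names are made up):

```ts
// beginStep/endStep helpers: emit GitHub Actions log-grouping commands in CI,
// no-op locally. This only folds log output; it does not create real
// retryable steps or artifacts.
const inCI = Deno.env.get("GITHUB_ACTIONS") === "true";

function beginStep(name: string): void {
  if (inCI) console.log(`::group::${name}`);
}

function endStep(_name: string): void {
  if (inCI) console.log("::endgroup::");
}

beginStep("Build");
// ... build commands ...
endStep("Build");
```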
Do you (or does anyone) see possible value in a CI tool that just launches your script directly?
It seems like if you
> 2. Don't have logic in your workflows. Workflows should be dumb and simple (KISS) and they should call your scripts.
then you’re basically working against or despite the CI tool, and at that point maybe someone should build a better or more suitable CI tool.
Build a CLI in Python (or whatever) that does the same thing as CI; every CI stage should just call its subcommands.
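For example, a minimal sketch of that shape (in Deno-style TypeScript rather than Python, to match other comments in this thread; task names and bodies are placeholders):

```ts
// Single CI entry point with subcommands; each CI stage runs
// e.g. `deno run -A ci.ts test`.
const tasks: Record<string, () => Promise<void>> = {
  async build() { /* compile, bundle, etc. */ },
  async test() { /* run the test suite */ },
  async lint() { /* run linters */ },
};

const task = tasks[Deno.args[0] ?? ""];
if (!task) {
  console.error(`usage: ci.ts <${Object.keys(tasks).join("|")}>`);
  Deno.exit(1);
}
await task();
```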
How do you handle persistent state in your actions?
For my actions, the part that takes the longest to run is installing all the dependencies from scratch. I'd like to speed that up but I could never figure it out. All the options I could find for caching deps sounded so complicated.
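For example, the actions/cache route looks roughly like this (path and key shown for npm; adjust for your package manager), and even this felt like a lot:

```yaml
# Restore/save a dependency cache keyed on the lockfile hash.
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-npm-
```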
Minor variant on #1: I've come to use Deno TypeScript scripts for anything more complex than what can be easily done in bash or powershell. While I recognize that pwsh can do a LOT in the box, I absolutely hate the ergonomics, and a lot of the interactions are awkward for people not used to it, while IMO more developers will be more closely aligned with TypeScript/JavaScript.
Not to mention, Deno can run TS directly and can reference repository/HTTP modules directly without a separate install step, which is useful for shell scripting beyond what pwsh can do, e.g. pulling a DBMS client and interacting with it directly for testing, setup, or configuration.
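For example, a rough sketch with deno-postgres (module version and connection details are illustrative; check deno.land/x/postgres for the current API):

```ts
// Pull a Postgres client straight from a URL import — no install step.
import { Client } from "https://deno.land/x/postgres@v0.17.0/mod.ts";

const client = new Client({
  user: "ci",
  database: "test",
  hostname: "localhost",
  port: 5432,
});
await client.connect();
const result = await client.queryObject("SELECT 1 AS ok");
console.log(result.rows);
await client.end();
```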
For the above reasons, I'll also use Deno for e2e testing over other languages that may be used for the actual project/library/app.
> Don't use bash
What? Bash is the best scripting language available for CI flows.
Re: 1. Just no. Unless you are some sort of Windows shop.
Step 0: Stop using CI services that purposefully waste your time, and use CI services that have "Rebuild with SSH" or similar. From previous discussions (https://news.ycombinator.com/item?id=46592643), it seems Semaphore CI still offers that.
I would disagree with 1: if you need anything more than shell, that starts to become a smell to me. The build/testing process etc. should be simple enough to not need anything more.