I don't agree with (1), but agree with (2). I recommend just putting a Makefile in the repo and giving it CI targets, which you can then call from CI via a simple `make ci-test` or similar. And don't make the Makefiles overcomplicated.
Of course, if you use something else as a task runner, that works as well.
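Something like this, say (target names and tool choices are just placeholders, not a prescription):

```make
# Sketch only: swap the commands for whatever your project uses.
.PHONY: ci-test ci-lint

ci-test:
	pytest tests/

ci-lint:
	ruff check .
```

Then the workflow step is literally just `run: make ci-test`, and you can run the exact same thing locally.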
Makefile or scripts/do_thing, either way this is correct. Each step in a CI workflow should do exactly one thing, and that one thing should be a single command. What that command does is up to you in the Makefile or scripts. This keeps workflows/actions readable and mostly reusable.
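E.g., a rough sketch of what that looks like as a GitHub Actions workflow (step names made up, targets assumed to exist in the Makefile):

```yaml
# One command per step; all the actual logic lives in the Makefile.
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Lint
        run: make ci-lint
      - name: Test
        run: make ci-test
```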
>I don't agree with (1)
Neither do most people, probably, but it's kinda neat how their suggested fix for GitHub Actions' ploy to maintain vendor lock-in is to swap it out for a language invented by that very same vendor.
makefile commands are the way
For certain things, makefiles are great options. For others, though, they are a nightmare. From a security perspective, especially if you are trying to reach SLSA level 2+, you want all of the build execution to be isolated and executed in a trusted, attestable, and disposable environment, following predefined steps. Having makefiles (or scripts) with logical steps inside them makes it much, much harder to produce properly attested outputs.
Using makefiles mixes execution contexts between the CI pipeline and the code within the repository (which ends up containing the logic for the build), instead of using centrally stored, external workflows that contain all the business logic for the build steps (e.g., compiler options, docker build steps, etc.).
For example, how can you attest in CI that your code was tested if the workflow only contains "make test"? You would need to double-check at runtime what the makefile actually did, but the makefile might have been modified by that time, so you need to build a chain of trust, and so on. Instead, in a standardized workflow, you just need to establish the ground truth (e.g., the tools are installed and are at this path), and the execution cannot be modified by in-repo resources.
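Concretely, with GitHub Actions this maps to reusable workflows (a sketch; "my-org/ci-workflows" and the Go toolchain are hypothetical examples, not anyone's actual setup). The calling repo only references a pinned, centrally stored workflow:

```yaml
# In the repo being tested: no build logic here at all.
on: push
jobs:
  test:
    uses: my-org/ci-workflows/.github/workflows/test.yml@v1
```

And the central workflow spells out every step itself:

```yaml
# In my-org/ci-workflows: the exact commands are fixed centrally,
# so in-repo files cannot change what runs, which is what makes
# attestation tractable.
on:
  workflow_call: {}
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: "1.22"
      - run: go test ./...
```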