Some thoughts on the patterns here:
- Removing requirements.txt makes it harder to track the high-level deps your code requires (and their install options/flags). Typically requirements.txt should contain the high-level requirements, and you pass them to another process that produces pinned versions. You regenerate the pinned versions/deps from requirements.txt, so you always have a way to reset the whole dependency tree as your core deps gain or lose nested dependencies (see the pip-compile sketch after this list).
- +COPY --from=ghcr.io/astral-sh/uv:0.7.13 /uv /uvx /usr/local/bin/ seems useful, but the upstream Docker tag could be repinned to a different hash, causing conflicts. Pin to the digest, or use a different way to stage your dependencies and copy them into the image (see the digest-pinning sketch after this list). Whenever possible, confirm your artifacts match known hashes.
- Installing into the container's /home/project/.local may preserve the uv pattern, but it makes for a container that's harder to debug. Production containers (if not all containers) should install files into normal global paths so that it's easy to find them, reason about them, and use standard tools to troubleshoot (see the global-install sketch after this list). This lets non-uv users diagnose the running application, and removes extra abstraction layers that create unneeded complexity.
- +RUN chmod 0755 bin/ && bin/uv-install* - using scripts makes things easier to edit, but it makes it harder to understand what's going on in a container, because you have to run around the file tree reading files and building a mental map of execution. Whenever possible, just shove all the commands into RUN lines in the Dockerfile (see the inlined RUN sketch after this list). This lets a user view the Dockerfile alone and know the entire execution without extra effort. It also removes some complexity around checking out files, building the Docker context, etc.
- Try to avoid docker compose and other platform-constrained tools for running your tests, freezing versions, etc. Your SDLC should first be composed of your build tools/steps using just native tools/environments. Then the CI tools go on top of that. This separation of "dev/test environment" from CI lets you take your "dev/test environment" and run it on any CI platform - Docker Compose, GitHub Actions, CircleCI, GitLab CI, Jenkins, etc - without modifying the "dev/test environment" tools or workflow. Personally I have a dev.sh that sets up the dev environment, build.sh to run any build steps, test.sh to run all the test stuff, ci.sh to run CI/CD-specific stuff (it just calls the CI/CD system's API and waits for status), and release.sh to cut new releases (sketched after this list).
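To make the requirements.txt point concrete, here's roughly what that regenerate-the-pins workflow looks like with pip-tools (the lock file name and install flags are just my convention, not anything from the article):

    # requirements.txt stays hand-edited and high-level (e.g. "flask>=2.3", "requests").
    # Everything pinned, including nested deps, lands in a generated lock file:
    pip-compile requirements.txt --output-file requirements.lock.txt

    # Install strictly from the lock file; rerun pip-compile whenever
    # requirements.txt changes to reset the whole dependency tree.
    pip install --no-deps -r requirements.lock.txt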
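For the tag-repinning issue, pinning by digest is a one-line change; the sha256 below is a placeholder you'd replace with the real digest (e.g. from docker buildx imagetools inspect ghcr.io/astral-sh/uv:0.7.13):

    # The tag stays for readability, but the digest is what Docker actually verifies,
    # so a retagged upstream image can no longer change what you build against.
    COPY --from=ghcr.io/astral-sh/uv:0.7.13@sha256:<real-digest-here> /uv /uvx /usr/local/bin/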
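For the global-paths point, uv can target the base interpreter directly instead of a per-user prefix; a minimal sketch, assuming a pinned requirements.txt is already in the build context:

    COPY requirements.txt .
    # --system installs into the normal site-packages, so pip, python -m site, and
    # anyone poking around the image can find the packages without knowing uv exists.
    RUN uv pip install --system --no-cache -r requirements.txt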
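And for the bin/uv-install* point, the inlined equivalent is just a RUN block; the exact commands here are illustrative (I don't know what the article's script actually does), but the shape is the point:

    # One RUN block = the whole install story, readable without leaving the
    # Dockerfile or chasing scripts through the build context.
    RUN set -eux; \
        uv pip install --system -r requirements.txt; \
        rm -rf /root/.cache/uv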
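The script layout in the last point ends up looking something like this (hypothetical names and contents, just the convention described above):

    #!/bin/sh
    # test.sh -- everything needed to run the test suite locally, with zero
    # knowledge of any CI platform baked in. A CI job (GitHub Actions, GitLab CI,
    # Jenkins, ...) reduces to "./dev.sh && ./build.sh && ./test.sh".
    set -eu
    ./dev.sh            # create/refresh the environment, install pinned deps
    python -m pytest    # run the actual tests with plain native tooling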
> Removing requirements.txt makes it harder to track the high-level deps your code requires
The very first section of the article talks about replacing requirements.txt with pyproject.toml, which contains a similar high-level list of deps
Agreed, custom scripts are great for the person who wrote them and/or uses them all the time but I much prefer to add as little veneer over upstream tools as possible lest I get messages like “hey how do I actually restart this process/get these logs/upgrade this package?”
Only thing I’m not sure about: why does it matter whether your list of requirements lives in requirements.txt vs pyproject.toml? Isn’t it just one file vs another?
I find it's clearer to store all pinned dependencies in requirements.txt using pip-compile (or pip freeze). There's no finagling to figure out which file contains the dependency snapshot for installing the application. High-level dependencies can be defined in requirements.in or pyproject.toml (sketched below).
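A minimal sketch of that split, using uv's pip-compile-compatible interface (file names follow the pip-tools convention; the deps are placeholders):

    # requirements.in -- the hand-maintained, high-level deps:
    #   flask>=2.3
    #   requests

    # Regenerate the fully pinned snapshot that actually gets installed:
    uv pip compile requirements.in -o requirements.txt

    # Anyone installing the app only ever needs requirements.txt:
    pip install -r requirements.txt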