I have a good number of years of actual experience. Maybe not as many as you, but a solid 12 in the industry (including 3 at Google, which doesn't use Docker; it probably wouldn't be effective enough) and a lot more as a hobbyist.
I just don't agree. I don't find Docker too complicated to get started with at all. A lot of my projects have very simple Dockerfiles. For example, here is a Dockerfile I have for a project with a Node.js frontend and a Go backend:
# Stage 1: build the frontend assets
FROM node:alpine AS npmbuild
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: build the Go server, embedding the built frontend
FROM golang:1.25-alpine AS gobuilder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
COPY --from=npmbuild /app/dist /app/dist
# static binary, so it can run on scratch
RUN CGO_ENABLED=0 go build -o /server ./cmd/server

# Stage 3: final image contains nothing but the binary
FROM scratch
COPY --from=gobuilder /server /server
ENTRYPOINT ["/server"]
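Once it's built, shipping and running it is a couple of commands (the registry path and the env var below are placeholders for this comment, not from the real project):

docker build -t registry.example.com/myproject:latest .
docker push registry.example.com/myproject:latest
docker run -p 8080:8080 -e DATABASE_URL=postgres://... registry.example.com/myproject:latest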
The Dockerfile itself is a glorified shell script that produces an OCI image with just a single binary. There's a bit of boilerplate, but it's nothing out of the ordinary in my opinion. It gives you something you can push to an OCI registry and deploy basically anywhere that can run Docker or Podman, whether that's a Kubernetes cluster in GCP, a bare metal machine with systemd and podman, a NAS running Synology DSM or TrueNAS or similar, or even a Raspberry Pi if you build for aarch64. All of the configuration can be passed via environment variables or, if you want, additional command line arguments, since starting a container is very much just like starting a process (because it is).

But of course, for development you want to be able to iterate rapidly, and you don't want to be dealing with a bunch of Docker build BS for that. I agree with this. However, the utility of Docker doesn't really stop at building for production, either. Thanks to the portability of OCI images, it's also pretty good for setting up dev environment boilerplate. Here's a docker-compose file for the same project:
services:
  ui:
    image: node:alpine
    ports: ["5173:5173"]
    working_dir: /app
    volumes: [".:/app:ro", "node_modules:/app/node_modules"]
    command: ["/bin/sh", "-c", "npm ci && npx vite --host 0.0.0.0 --port 5173"]
  server:
    image: cosmtrek/air:v1.60.0
    ports: ["8080:8080"]
    working_dir: /app
    volumes: [".:/app:ro", "air_tmp:/app/tmp"]  # writable scratch for air's build output; the bind mount is read-only
    depends_on: ["postgres"]
  postgres:
    image: postgres:16-alpine
    environment: ["POSTGRES_PASSWORD=dev"]  # dev-only; the image refuses to start without a superuser password
    ports: ["5432:5432"]
    volumes: ["postgres_data:/var/lib/postgresql/data"]
volumes:
  node_modules:
  air_tmp:
  postgres_data:
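With that in place, the day-to-day loop is just:

docker compose up              # ui, server, and postgres, all auto-reloading
docker compose down --volumes  # tear it all down and wipe node_modules and the database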
And if your application is built from the ground up to handle these environments well, which doesn't take a whole lot (basically, it just needs to be able to take configuration from the environment, and to make things a little neater it can have defaults that work well for development), this provides a one-command, auto-reloading development environment whose only dependency is having Docker or Podman installed. `docker compose up` gives you a full local development environment.

I'm omitting some more advanced topics, but these are lightly modified real Docker manifests, mainly just reformatted to fewer lines for HN.
I adopted Kubernetes pretty early on. I felt like it was a much better abstraction to use for scheduling compute resources than cloud VMs, and it was how I introduced infrastructure-as-code to one of the first places I ever worked.
I'm less than thrilled about how complex Kubernetes can get once you start digging into tools like Helm and ArgoCD and beyond, but in general it's an incredible asset that can take a lot of grunt work out of deployment while providing quite a bit of utility on top.
About 25 years here and 10 years embedded / EE before that.
The problem is that containers are built from images, and both images and Kubernetes are incredibly stateful. They need to be stored. They need to be reachable. They need maintenance. And the control responsibility is inverted. You end up with a few problems which I think are untenable.
Firstly, the state. Neither Docker itself nor etcd behind Kubernetes is particularly good at maintaining state consistently. Anyone who runs a large Kubernetes cluster will know that once it's full of state, rebuilding it consistently in a DR scenario is HORRIBLE. It is not just a case of rolling in all your services. There's a lot of state, like storage classes, roles, secrets, etc., without which nothing works. Unless you have a second cluster you can tear down and rebuild regularly, you have no idea if you can survive a control plane failure (we have had one of those as well).
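Even the baseline precaution is running something like this on a schedule, and the snapshot still captures none of the state living outside etcd, like PV contents or cloud resources (endpoints and cert paths here are placeholders):

ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/client.crt --key=/etc/etcd/client.key \
  snapshot save /backups/etcd-$(date +%F).db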
Secondly, reachability. The container engine and Kubernetes require the ability to reach out and get images. This is such a fucking awful idea from a security and reliability perspective it's unreal. I don't know how people even accept this. Typically your Kubernetes cluster or container engine has the ability to just pull any old shit off Docker Hub. That also couples you to that service being up, available, and not subject to the whims of whatever vendor figures they don't want to do their job any more (Broadcom, for example). To get around this you end up having to cache images, which means more infrastructure to maintain. There is of course a whole secondary market for that...
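Concretely, "caching images" means standing up and babysitting yet another service, e.g. the Distribution registry in pull-through mode, then pointing every node at it (hostnames are placeholders):

docker run -d -p 5000:5000 -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io registry:2

# /etc/docker/daemon.json on every node
{ "registry-mirrors": ["http://registry-cache.internal:5000"] }

So now the cache itself has to be stored, reachable and maintained.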
Thirdly, maintenance. We have about 220 separate services. When there's a CVE, you have to rebuild, test and deploy ALL those containers. We can't just update an OS package and bounce services, or push a new service binary out and roll it. It's a nightmare. It can take a month to get through this, and believe me, we have all the funky CD stuff.
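Every base image CVE turns into some variant of this, times 220, before you even get to testing and rollout (the repo layout and registry name are placeholders):

for svc in services/*/; do
  docker build --pull --no-cache -t "registry.internal/$(basename "$svc"):patched" "$svc"
  docker push "registry.internal/$(basename "$svc"):patched"
done

Compare that with `apt upgrade` plus a service bounce on a plain host.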
And as mentioned above, control is inverted. On that basis I think it's utterly stupid that your container engine or cluster pulls containers in. When you deploy, the relationship should be a push, because you can control that and mandate all of the above at once.
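A push model is perfectly doable with the exact same artifacts, with no registry in the loop at all (host and tag are placeholders):

docker save myapp:1.2.3 | ssh prod-host docker load
ssh prod-host docker run -d --restart=always myapp:1.2.3

The deployer decides what lands and when, rather than hoping the cluster can phone out for the right thing at the right moment.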
In the attempt to solve problems, we created worse ones. And no one is really happy.
Is there a book like Docker: The Good Parts that would build a thorough understanding of the basics before throwing dozens of ecosystem brand words at you? How does virtualisation not incur an overhead? How do CPU- and GPU-bound tasks work?