It's only simple for toy use cases, which is why nobody uses it. The article everyone in this thread seems to like goes only as far as 'I pushed to git so it must be OK', which is laughable, and I'm not even in DevOps.
What happens if it errors during deployment, or after? Do you want to write custom (bash? :D) hooks for that? What about upgrading your 'very vertically scalable' box? What if it doesn't come up after the upgrade? Your downtime is suddenly hours, oops.
The k8s denial is strong and now rivals frontend-framework denial. Never fails to amuse.
Most people/apps only need the toy cases... If you're writing an internal tool for a company that will only ever have a handful of users, then doing much more than Compose for deployments is a violation of KISS and YAGNI.
Are you really going to chase 4+ 9's of uptime for a small, one-off app? Do you really need a cloud distributed data store that only slows things down for no real gain in practice? Do you really think cloud services are never down, and are you willing to spend a f*ck-ton of money building a distributed app when, historically, an Access DB or VB6 app would have done the job?
I've moved applications deployed via Compose pretty easily: compose down -t 30, then literally sftp the application to a backup location and on to the new server, which only needs the Docker Engine community stack installed, then compose up -d... tada! In terms of deployment, you can use GitHub Actions runners if you want, or anything else... you can even do it by hand pretty easily.
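Roughly, the whole move is a handful of commands. A sketch with made-up paths, and scp standing in for whatever sftp workflow you prefer:

    # On the old server: stop the stack, giving containers 30s to exit cleanly
    cd /opt/myapp
    docker compose down -t 30

    # Archive the project dir (compose file + bind-mounted data) and copy it
    # to a backup location and then to the new server
    tar czf /tmp/myapp.tar.gz -C /opt myapp
    scp /tmp/myapp.tar.gz backup-host:/backups/
    scp /tmp/myapp.tar.gz new-server:/opt/

    # On the new server (only needs Docker Engine + the compose plugin)
    cd /opt && tar xzf myapp.tar.gz && cd myapp
    docker compose up -d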
The post-receive hook gives you real-time feedback because it runs in the terminal where you did the git push. If something breaks during the deployment you see it right there in the output. If it's running in CI, you'd see a CI failure and get notified through whatever mechanisms you already have in place for deployment pipeline failures.
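For anyone who hasn't used this pattern, the hook is a tiny script. A minimal sketch, assuming a bare repo on the server and a compose project at /srv/myapp (both paths made up):

    #!/bin/sh
    # hooks/post-receive in the server-side bare repo.
    # Anything echoed here streams back to the terminal that ran `git push`,
    # and `set -e` makes any failed step abort the hook with a visible error.
    set -e

    REPO_DIR=/srv/repos/myapp.git   # hypothetical bare repo
    APP_DIR=/srv/myapp              # hypothetical compose project dir

    # Check out the pushed code into the working directory
    git --git-dir="$REPO_DIR" --work-tree="$APP_DIR" checkout -f main

    cd "$APP_DIR"
    echo "Rebuilding and restarting containers..."
    docker compose up -d --build
    echo "Deploy finished"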
Zero-downtime server upgrades are easy. You can build a new server, verify it's working in private, and then adjust DNS or your floating IP address to point at the new server when you're happy. I've done this pattern hundreds of times over the years to do system upgrades safely and without interruption. The only requirement is that your servers are stateless, but that's a good pattern in general.
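The verification step doesn't need anything fancy either. A sketch with placeholder names and a documentation IP, before you touch DNS:

    # Bring the stack up on the new box
    ssh new-server 'cd /opt/myapp && docker compose up -d'

    # Hit the new server directly, bypassing DNS, until you're happy
    # (example.com, /healthz and the IP are placeholders)
    curl --fail --resolve example.com:443:203.0.113.10 https://example.com/healthz

    # Only then point DNS / the floating IP at the new server. Keep the old
    # box running until traffic has drained, so rolling back is just pointing
    # the record back at it.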
Fair points, and yes, failed deploys need to be handled explicitly.
In our case, the answer is not "hope and bash". We deploy versioned images, use health checks, monitor the result, and keep rollback simple: redeploy the previous known-good image/config. Host upgrades are also treated as maintenance events, with backups and a recovery path, not as something Compose magically solves.
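Roughly, that can look like pinning the image tag through a variable. A sketch, assuming the compose file references the image as registry.example.com/myapp:${APP_TAG} (an illustration of our kind of convention, not anything Compose mandates; the tags are made up):

    # Deploy a specific version
    APP_TAG=1.4.2 docker compose pull
    APP_TAG=1.4.2 docker compose up -d

    # Watch the health checks settle
    docker compose ps

    # Rollback: the same two commands with the last known-good tag
    APP_TAG=1.4.1 docker compose pull
    APP_TAG=1.4.1 docker compose up -d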
But I think there is an opposite mistake too: assuming every production system should be operated like a high-scale tech company.
Many production workloads are boring, predictable, and business-critical. They do not need aggressive autoscaling, multi-node orchestration, or constant traffic-spike handling. They need reliable deploys, backups, monitoring, health checks, and a clear rollback path.
That is where Compose can be a good fit: a simple operational model, well-understood failure modes, few moving parts.
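For what it's worth, that whole operational model often fits in a short file. A minimal sketch (the image, port, and health endpoint are made up):

    services:
      app:
        image: registry.example.com/myapp:1.4.2   # pinned, versioned image
        restart: unless-stopped                   # come back up after reboots
        ports:
          - "8080:8080"
        healthcheck:
          # assumes curl exists inside the image
          test: ["CMD", "curl", "-f", "http://localhost:8080/healthz"]
          interval: 30s
          timeout: 5s
          retries: 3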
Kubernetes becomes much more compelling when you actually need automated failover, rolling deploys, autoscaling, multi-node scheduling, and stronger deployment primitives.
Not needing Kubernetes is not necessarily denial, it is just choosing the complexity budget that matches the problem.
Define production. Docker Compose is fine for running a small internal service in production for dozens of users (i.e. not for developing said service, but for using it). I would assume it isn't fine for running a hyperscaler (but I wouldn't know). Those are extremes, and there are going to be a ton of situations in between.
I can't personally speak to where the limit of Docker Compose is, as I have only worked on the lower end of this: self-hosting for personal use and small internal services serving maybe 20 users.
I think people are using different meanings of “production environment.”
I agree with gear54us and upvoted their comment, but I also understand what the author of the root comment is saying.
I have also delivered systems using Docker Compose that are actually running in production. The point I want to make is that people may define “production” differently depending on the number of active users, operational requirements, and risk level.
To me, this debate feels similar to the broader monolith vs. microservices debate.