For those with even simpler needs (side projects, or one-developer projects), I found plain docker and git to be plenty.

Basically, you create a bare git repository on your server (`git init --bare`), and put a `hooks/post-receive` script in it that checks the sources out into a temporary directory, builds the docker image, and rotates containers. That way you can `git push` to build and deploy, and it's easy to migrate servers.
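To sketch the idea (a minimal version; `myapp`, the branch name, and the port are placeholders, and it assumes a Dockerfile at the repo root):

```
#!/bin/sh
# hooks/post-receive -- runs after every `git push` to this bare repo.
set -e
APP=myapp
TMP=$(mktemp -d)

# Check out the pushed sources into a temp dir (a bare repo has no worktree).
# GIT_DIR is already set inside the hook; "main" is a placeholder branch.
git --work-tree="$TMP" checkout -f main

# Build the image, then rotate containers: remove the old one, start the new.
docker build -t "$APP" "$TMP"
docker rm -f "$APP" 2>/dev/null || true
docker run -d --name "$APP" -p 8080:8080 "$APP"

rm -rf "$TMP"
```

Note the `docker rm -f` / `docker run` rotation leaves a brief downtime window, which is exactly the gap mentioned below.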

The added bonus is that you now have a central git repo that can act as a backup, so you don't need GitHub or GitLab.

The main pain point, which is where I find dokku interesting (and I assume CapRover too), is zero-downtime deployment. But then, if that's critical, you probably need something more extensive.

I actually developed a system similar to this, but used docker compose as an alternative to Procfiles and nginx + Let's Encrypt to handle dynamic virtual hosting. It's a golang app that automatically provisions git repos with the necessary hooks and also lets you exec into a container directly over SSH. I had the thought of using docker stack to achieve zero downtime but haven't had a chance to try that out. Happy to open source it if anyone is interested in using it.

The problem with nginx-based setups is that one wrong container option (label, env var, etc.) can cause a syntax error in the generated nginx configuration, and then nginx won't start, so all services go down. I loved Apache, I loved nginx, but Traefik has been the only HTTP server on my tech blog for the last few years...

Nginx is made to load a configuration: you don't get the auto-configuration that comes with service discovery. Service discovery is doable with a standard HTTP API through /var/run/docker.sock (or /var/run/podman/podman.sock in more advanced setups).
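To illustrate what that API looks like (just raw queries against the daemon, not tied to any particular proxy):

```
# List running containers with their names and labels via the Docker
# daemon's HTTP API -- the same data a service-discovering proxy reads.
curl -s --unix-socket /var/run/docker.sock http://localhost/containers/json

# Or stream container lifecycle events, which is how a proxy can
# reconfigure itself the moment a container starts or stops.
curl -s --unix-socket /var/run/docker.sock http://localhost/events
```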

As such, service-discovering HTTP servers are more reliable because they're built with service isolation from the ground up: if one service has a bad value, that service won't work, but it won't block the other services.

Nginx is too far behind now; it might have some service discovery module, but even then, the thing that happens when your configuration is auto-generated (as with snapshot testing) is that you still have to read the configuration it generates. Traefik offers a great dashboard for this, so it's even more pleasant than reading a configuration file you didn't even write ;)
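As an illustration of that per-service isolation (this assumes Traefik v2's docker provider is already running and watching the socket; `blog` and the hostname are placeholders):

```
# Each container carries its own routing rule as labels. A bad label
# only breaks this one router, not Traefik or the other services.
docker run -d --name blog \
  -l 'traefik.enable=true' \
  -l 'traefik.http.routers.blog.rule=Host(`blog.example.com`)' \
  my-blog-image
```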

For sure, I bet that patching something like CapRover (or your own solution) to swap nginx for Traefik would end up removing quite a lot of code ;)

I'm not really sure what you mean by "achieving ZDD". ZDD is complicated any time there's a data schema migration, not to mention that container deployment traditionally means "delete a container: KILL a process" and "create another one, like cattle". uWSGI, for example, can gracefully renew every worker process on SIGHUP, but re-creating the uWSGI process in another container defeats that. Maybe you have some kind of blue-green deployment, maybe even canary; in that case I wonder if basing a container platform on configuration files such as nginx's would really get you to ZDD. Would love to read more about your setup.

These are all extremely valid points! I guess I should've clarified how things work a bit (I'm currently in the process of documenting how it works and how to deploy it).

Ingress management is done with the very useful nginx-proxy[0] service, which loads virtual host definitions directly from the docker daemon and sets up virtual hosts based on env vars set on each container. Configuration changes are applied with an nginx reload, so even if there were an error in the generated configuration (which I've personally never run into, though it's likely possible), it wouldn't take effect. Let's Encrypt is then handled by the nginx-proxy letsencrypt companion[1]. My goal was to abstract away reverse proxy + cert management, and I think any solution (Traefik, Caddy, etc.) would work here; I'm more than happy to change it. I more or less went with nginx because it was easy and I didn't have to do any configuration beyond adding it to a docker-compose file.
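For reference, the core of that setup boils down to something like this (image name per the linked README; the hostname is a placeholder, and the LE companion container with its shared cert volumes is omitted for brevity):

```
# nginx-proxy watches the Docker socket and regenerates vhosts on the fly.
docker run -d --name nginx-proxy -p 80:80 -p 443:443 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  nginxproxy/nginx-proxy

# Any container that sets VIRTUAL_HOST gets a vhost; LETSENCRYPT_HOST is
# picked up by the companion container for certificate issuance.
docker run -d \
  -e VIRTUAL_HOST=whoami.example.com \
  -e LETSENCRYPT_HOST=whoami.example.com \
  my-app-image
```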

I guess my goal wasn't to handle ZDD for stateful applications. As you mentioned, a plethora of issues arise there that make ZDD much more difficult for that type of application. I tend to write a lot of stateless web apps for simple use cases and like having an easy way to deploy them. In the primitive sense, creating a new container, waiting for it to be ready, and then swapping the reverse proxy's upstream over to the new container would be ideal, but that isn't supported directly by docker-compose (as mentioned).
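A primitive version of that swap, relying on nginx-proxy pooling all containers that share a VIRTUAL_HOST into one upstream (my assumption of how you'd approximate it with this stack, not something docker-compose provides; the health check, names, and port are placeholders):

```
# Poor man's blue-green for a stateless app: the new container joins
# the generated upstream as soon as it starts with the same VIRTUAL_HOST.
docker run -d --name app-green -e VIRTUAL_HOST=app.example.com app:new

# Wait until the new container answers before removing the old one.
# (Assumes wget exists in the image; point this at your real health endpoint.)
until docker exec app-green wget -q -O /dev/null http://localhost:8080/; do
  sleep 1
done

# The old container drops out of the generated upstream once it's gone.
docker rm -f app-blue
```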

Happy to talk this through also, especially if you'd be interested in contributing!

[0] https://github.com/nginx-proxy/nginx-proxy

[1] https://github.com/nginx-proxy/docker-letsencrypt-nginx-prox...