Over the years I kept tweaking my setup and have now settled on running everything as Docker containers. The orchestrator is docker-compose instead of systemd, and the proxy is Caddy instead of nginx. But same as the author, I write a deploy script for each project I need to run. Overall I think it's quite similar.
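For reference, the skeleton of that setup looks roughly like this (a sketch; the service names and image are placeholders, not my actual config):

    # docker-compose.yml (sketch, placeholder names)
    services:
      caddy:
        image: caddy:2
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - ./Caddyfile:/etc/caddy/Caddyfile
      app:
        image: myorg/app:latest  # placeholder image
        expose:
          - "8089"

The per-project deploy script is then often not much more than `docker compose pull && docker compose up -d`.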

One of the many benefits of using Docker is that I can use the same setup to run third-party software. I've been using this setup for a few years now and it's awesome. It's robust, like the author mentioned, but when you need flexibility you can still do whatever you want.

The only pain point I have right now is rolling deployments. As my software scales, a few seconds of downtime on every deployment is becoming an issue. I don't have a simple solution yet, but perhaps Docker Swarm is the way to go.
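If Swarm does turn out to be the answer, the rolling update itself is mostly compose configuration; a minimal sketch (service name, image, and health endpoint are assumptions):

    # compose file for `docker stack deploy` (sketch, placeholder names)
    services:
      api:
        image: myorg/api:latest  # placeholder
        healthcheck:
          # assumes the image ships curl and serves /health
          test: ["CMD", "curl", "-f", "http://localhost:8089/health"]
          interval: 5s
          retries: 3
        deploy:
          replicas: 2
          update_config:
            order: start-first  # start the new task before stopping the old one
            parallelism: 1
            delay: 5s

With `order: start-first`, Swarm waits for the new container's healthcheck to pass before stopping the old one, so the few seconds of downtime go away.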

I do the same as you, using Caddy.

To avoid downtime, try using:

    health_uri /health
    lb_try_duration 30s
Full example:

    api.xxx.se {
      encode gzip
      reverse_proxy api:8089 {
        health_uri /health
        lb_try_duration 30s
      }
    }
This way, Caddy will hold the request and keep retrying the upstream for up to 30 seconds, which gives your new service time to come online when you're deploying a new version.

Ideally, during a deployment the new version should come up and pass health checks before Caddy starts routing traffic to it (and the old container is killed). I've looked at https://github.com/Wowu/docker-rollout and https://github.com/lucaslorentz/caddy-docker-proxy but haven't had time to prioritize it yet.
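One way to get that ordering with plain Caddy is two alternating upstreams plus `lb_policy first`; a sketch, assuming two containers named api_blue and api_green on the same network:

    api.xxx.se {
      encode gzip
      reverse_proxy api_blue:8089 api_green:8089 {
        lb_policy first
        health_uri /health
        health_interval 2s
        lb_try_duration 30s
      }
    }

With `lb_policy first`, Caddy prefers api_blue whenever it is healthy and fails over to api_green otherwise, so restarting the two containers one at a time keeps requests flowing.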