What does HackerNews think of nginx-proxy?

Automated nginx proxy for Docker containers using docker-gen

Language: Python

#24 in Docker
If Traefik is not your thing, I'm happily using https://github.com/nginx-proxy/nginx-proxy and sslip.io for local docker compose development.

And then even plain nginx under that to proxy to non-Docker services...

(And IPv6 for really short URLs: example.com.--1.sslip.io, etc.)
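Roughly what that looks like in a compose file (an untested sketch; the demo image and sslip.io hostname are just placeholders):

  # nginx-proxy watches the docker socket and routes by the VIRTUAL_HOST env var
  services:
    proxy:
      image: nginxproxy/nginx-proxy
      ports:
        - "80:80"
      volumes:
        - /var/run/docker.sock:/tmp/docker.sock:ro   # read-only socket so the proxy can see container events

    whoami:
      image: traefik/whoami                          # placeholder demo app
      environment:
        - VIRTUAL_HOST=whoami.127.0.0.1.sslip.io     # sslip.io resolves this to 127.0.0.1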

I don't want to open my home network to just anybody, so I have a "jumpbox" that is the lowest-end shared VM at Hetzner. It runs nginx, dnsmasq, and Wireguard; my home servers connect to it, I add other peers as I need to, and dnsmasq resolves the hostnames to Wireguard IPs for the home network.

I have 3 sets of DNS entries for the home lab servers:

1. "internal"/home network addresses (e.g. your 192.168.x.x) 2. Wireguard addresses (e.g. 10.0.x.x) 3. public DNS entries that all resolve to the jumpbox

The purpose of #3 is to support simple Letsencrypt setup: nginx on jumpbox forwards Letsencrypt requests to the internal servers over the Wireguard connection.

Internally, I use a https://github.com/nginx-proxy/nginx-proxy setup, so that any time I want a new service running inside the home lab I just have to:

1. Pick a hostname and add it to public DNS
2. Configure its Docker container to add the environment variables that nginx-proxy looks for (see the sketch below)
3. Add the hostname to the jumpbox /etc/hosts
4. Add the hostname to internal LAN DNS

It's a little much but I like how it works. It's not so bad to get set up.
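For step 2, the service definition ends up looking roughly like this (a sketch rather than my exact file; the image name and domain are placeholders, and VIRTUAL_PORT is only needed when the container exposes more than one port):

  services:
    some-service:
      image: example/some-service            # placeholder image
      environment:
        - VIRTUAL_HOST=service.example.com   # must match the hostname from steps 1, 3 and 4
        - VIRTUAL_PORT=8080                  # which container port nginx-proxy should proxy to
      networks:
        - proxy

  networks:
    proxy:
      external: true                         # the docker network nginx-proxy itself is attached to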

There is also nginx-proxy, which is based on nginx.

https://github.com/nginx-proxy/nginx-proxy

A note on both projects: if you care about security, you should split the generator/controller container out of the main webserver container, so that the Docker socket is not mounted in the container that is directly exposed.
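A rough, untested sketch of that split using docker-gen (container names and the template path are assumptions; the nginx.tmpl file comes from the nginx-proxy repo):

  services:
    nginx:
      image: nginx
      container_name: nginx                  # referenced by docker-gen's -notify-sighup flag
      ports:
        - "80:80"
      volumes:
        - conf:/etc/nginx/conf.d             # receives the generated config; no docker socket here

    dockergen:
      image: nginxproxy/docker-gen
      # re-render the template on container start/stop events and send SIGHUP to nginx to reload it
      command: -notify-sighup nginx -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
      volumes:
        - conf:/etc/nginx/conf.d
        - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro   # template copied from the nginx-proxy repo
        - /var/run/docker.sock:/tmp/docker.sock:ro               # the socket stays in the non-exposed container

  volumes:
    conf: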

You can try nginx-proxy; it's similar to Traefik but based on nginx and a bit easier.

https://github.com/nginx-proxy/nginx-proxy

Try this, it's pretty easy, but it depends on the Docker socket. https://github.com/nginx-proxy/nginx-proxy
There is nginx-proxy for this, with auto TLS; it's pretty good! https://github.com/nginx-proxy/nginx-proxy
Not a secret now haha!

I used to use this: https://github.com/nginx-proxy/nginx-proxy (it used to be under jwilder's GitHub account), which was also a good tool; I ran it behind Cloudflare for free SSL certificates. But the Caddy container with automatic Let's Encrypt is fantastic also.

> From a server point of view, I think a raspberry pi build that scripted most of the setup, is an achievable goal. Not an easy one, but doable.

Yes! I'm thinking something like standardizing Docker deployments with nginx-proxy[0] (which takes care of automatic Let's Encrypt certificates, it Just Works™[1]). If everyone shipped a docker-compose.yml tailored to nginx-proxy this would make it very easy for anyone to deploy stuff on their home server.

Then you'd need a standardized interactive install script that asks for things like hostname, email details, whatever else needed in .env (or whatever config). Perhaps a good ol' Makefile?

  make setup      # interactive install
  make up         # run docker-compose up -d
  make down       # run docker-compose down
  make uninstall  # uninstall, plus clean up volumes and images
People would just need to learn how to git clone a repo (after installing Docker, which is trivial), run those commands, then go to the URL and finish the setup (if relevant).
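To make that concrete, a hypothetical app's docker-compose.yml tailored to nginx-proxy could be as small as this (a sketch; the image and variable names are made up, and `make setup` would just write the values into .env):

  services:
    app:
      image: example/selfhosted-app                # placeholder
      environment:
        - VIRTUAL_HOST=${VIRTUAL_HOST}             # hostname nginx-proxy should route
        - LETSENCRYPT_HOST=${VIRTUAL_HOST}         # tells the letsencrypt companion to request a cert
        - LETSENCRYPT_EMAIL=${LETSENCRYPT_EMAIL}
      networks:
        - proxy

  networks:
    proxy:
      external: true                               # created once, with nginx-proxy attached to it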

[0] https://github.com/nginx-proxy/nginx-proxy

[1] This is how I deploy all my Docker stuff; it usually takes 1-5 minutes to modify docker-compose.yml to fit

If you want to hack up a quick setup, you could use: https://github.com/nginx-proxy/nginx-proxy

I've used it for local development setups and deployed internal apps using it, so no FAANG-scale traffic, but it's quite simple to drop it in and then manage routing traffic using the `VIRTUAL_HOST` env variable on your containers. It supports custom configuration if you mount a file in the right location. It also has automatic Let's Encrypt with: https://github.com/nginx-proxy/docker-letsencrypt-nginx-prox....
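For the custom configuration part, this is roughly what I mean (a sketch from memory of the docs, so double-check the paths): proxy-wide settings go in a file mounted under /etc/nginx/conf.d/, and per-host overrides can go in /etc/nginx/vhost.d/<VIRTUAL_HOST>.

  services:
    proxy:
      image: nginxproxy/nginx-proxy
      ports:
        - "80:80"
      volumes:
        - /var/run/docker.sock:/tmp/docker.sock:ro
        - ./my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf:ro   # e.g. client_max_body_size 100m;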

I've recently moved my "personal infrastructure" from a docker-compose setup to a k3s setup, and ultimately I think k3s is better for most cases here.

FWIW, my docker-compose setup used https://github.com/nginx-proxy/nginx-proxy and its letsencrypt companion image, which "automagically" handles adding new apps, new domains, and all SSL cert renewals, which is awesome. It was also relatively easy to start up a brand new machine and re-deploy everything with a few commands.

I started down the route of using kubeadm, but then quickly switched to k3s and never looked back. It's now trivial to add more horsepower to my infrastructure without having to re-create everything (spin up a new EC2 machine, run one command to install k3s and attach to the cluster as a worker node). There's also some redundancy there: if any of my tiny EC2 boxes crashes, the apps will be moved to healthy boxes automatically. I'm also planning on digging out a few old Raspberry Pis to attach as nodes from home (over a VPN) just for funsies.

Ultimately k8s certainly has a well-earned reputation for having a steep learning curve, but once you get past that curve, managing a personal cluster using k3s is pretty trivial.

These are all extremely valid points! I guess I should've clarified how things work a bit (I'm currently in the process of documenting how it works and how to deploy it).

Ingress management is done with the very useful nginx-proxy[0] service, which loads virtual host definitions directly from the docker daemon and sets virtual hosts based on env vars set on the container. Configuration changes are loaded using an nginx reload, so even if there were an error in the configuration (which I personally have never run into, though it is likely possible), it wouldn't take effect. LE is then handled using the nginx-proxy-letsencrypt-companion[1]. My goal was to abstract away reverse proxy + cert management, and I think any solution (traefik, caddy, etc.) would work here; I'm more than happy to change it. More or less I just went with nginx since it was easy and I didn't have to do any configuration other than adding it to a docker-compose file.

I guess my goal wasn't to handle ZDD for stateful applications. As you mentioned, there's a plethora of issues that arise and make ZDD much more difficult for that type of application. I tend to write a lot of stateless web apps for simple use cases and like to have an easy way to deploy them. In the primitive sense, creating a new container, waiting for it to be ready, and then swapping the reverse proxy's upstream to point at it would be ideal, but that isn't supported directly with docker-compose (as mentioned).

Happy to talk this through also, especially if you'd be interested in contributing!

[0] https://github.com/nginx-proxy/nginx-proxy [1] https://github.com/nginx-proxy/docker-letsencrypt-nginx-prox...