Nginx Reverse Proxy with a minimum of Fuss
Last Updated: 2021-05-05
Setting Up Webservices Sucks
For a guy who seems to stand up a new service, website, or API every six months, I sure hate doing it. The process of manually obtaining certificates, mapping them into containers as parts of volume mounts, and juggling configuration was a real hassle. Fortunately for me, this is a thing a lot of people find annoying, and so when I was all the way fed up it was actually super easy to find an existing solution. There are a million and thirty other guides on setting this up, but I felt it was necessary to build a new one for two reasons:
- The piminder repo needs to be able to link to such a guide, so it might as well be mine, and;
- The guides are surface level and miss a few details that I think are helpful.
So What’s the Goal?
Instead of having a bunch of dockerized webservices running on a host somewhere in my lab, all talking to the internet willy-nilly, we’re going to stand up an NGINX-based reverse proxy in the way, to achieve three main goals:
- Isolate the processes/services behind something that’s developed and maintained by people who are smarter than me. Nginx is a very robust tool in ways that, for example, custom APIs I’ve stood up aren’t.
- Automate the process of routing once the forwarded traffic reaches the host, since all the requests to my various domains are ultimately hitting one machine, and automate the process of obtaining and renewing certificates for TLS.
- Add a quick and simple way to isolate certain hosts so that they are still serviced by the reverse proxy and its aforementioned features, but are reachable only from inside the local network.
And while we’re getting started - this is going to assume you’re using docker-compose and have roughly the same use case I do.
Assembling the Pieces
Okay, so we need a few tools. docker and docker-compose are givens since the whole purpose of this is to orchestrate a sort of interface layer between other docker-compose services and the wider world.
I’m going to go ahead and base this example on what I currently have committed for a docker-compose.yaml. Note that this might be slightly outdated, and that if you instead want to use the latest images you should entertain the idea of rewriting the compose file as discussed here. It looks like the changes are relatively mild, but YMMV.
This compose file creates three containers that you’re going to want to be sure to keep up:
- nginx-proxy, the actual proxy, which will accept inbound traffic on 80/tcp and 443/tcp.
- dockergen, which is going to listen to the docker socket used by nginx-proxy and adaptively change various configuration files within the proxy service as a result.
- letsencrypt, which is where a lot of the real magic happens. This container also shares volumes with the other two, but very critically it is going to handle interrogating Let's Encrypt and handling challenge-response authentication to get TLS certificates for new services as they come up.
This also creates a network called nginx-proxy
. The upshot of this is that any container you want to talk to the open world through the reverse proxy should be connected to this network. New pods coming up and down on that network will have certain values checked in their environment variables to allow certificates and proxy configuration to happen automagically. And we like automagical, here at Arcana.
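For reference, a compose file for this three-container arrangement looks roughly like the following sketch. This is an illustration rather than my committed file, and the container names, image tags, and volume names here are assumptions — check the linked repo for the authoritative version:

```yaml
version: "2"

services:
  nginx-proxy:
    image: nginx:alpine
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - conf:/etc/nginx/conf.d
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - certs:/etc/nginx/certs:ro
      - ./network_internal.conf:/etc/nginx/network_internal.conf:ro

  dockergen:
    image: jwilder/docker-gen
    container_name: nginx-proxy-gen
    # Regenerate nginx config from the template whenever containers change,
    # then signal nginx to reload.
    command: -notify-sighup nginx-proxy -watch
      /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    volumes_from:
      - nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-proxy-le
    environment:
      NGINX_PROXY_CONTAINER: nginx-proxy
      NGINX_DOCKER_GEN_CONTAINER: nginx-proxy-gen
    volumes_from:
      - nginx-proxy
    volumes:
      - certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro

volumes:
  conf:
  vhost:
  html:
  certs:

networks:
  default:
    external:
      name: nginx-proxy
```

Note that the nginx-proxy network is declared as external here, which is why it has to be created by hand before the stack comes up.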
Critically, you probably noticed three volume bindings that aren’t named volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro. This is a read-only binding which allows these containers to read from, but not write to, the docker socket. It’s how they know when other pods come and go, for example.
- ./nginx.tmpl, which is a template file (provided here). This can be modified to change the overall configuration as needed, but I haven’t found the need.
- ./network_internal.conf, which is a missing bit of configuration needed if you want to host internal-only networks, described below:
In cases where this configuration flag is available it is possible to set certain hosts as reachable only internally.
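The contents of that file are just nginx access rules allowing the private address space and refusing everyone else. A minimal sketch, assuming you want to admit the standard RFC 1918 ranges plus loopback (adjust to match your actual LAN):

```nginx
# network_internal.conf
# Only allow requests originating from private/local address space.
allow 127.0.0.0/8;
allow 10.0.0.0/8;
allow 172.16.0.0/12;
allow 192.168.0.0/16;
deny all;
```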
To get this running, simply issue the docker network create nginx-proxy command to create the network (handled externally so it does not go down if the service does), then bring the service you’ve just created up via docker-compose.
Okay, but how do I add new services?
This is easy. In the docker-compose file that controls your new service, for the hosts which need to be reachable through the proxy, you just need to set a few envvars and make sure the container in question is joined to the nginx-proxy network with port 80 exposed:
- VIRTUAL_HOST, the bare minimum needed to attach to the proxy service, which controls the hostname the container represents, for example www.sanityline.net.
- LETSENCRYPT_HOST, which is the CN value needed for the certificate (should match the value for VIRTUAL_HOST).
- LETSENCRYPT_EMAIL, which is the email to be used on the certificate and therefore should be valid.
If desired, setting NETWORK_ACCESS: internal will cause the nginx proxy to use the internal network ruleset described above for this host, making it accessible only internally. This is particularly useful if you’re using something in-house and minimally robust like Piminder.
So for a given example service, allow me to present the actual service behind the frontend for this very site, with some of its backend omitted: