Digital Ocean introduced Proxy Pass on their load balancers. Without it, every request reaching your web server or container carries the source IP of the load balancer itself, not the client's. With Proxy Pass enabled, you can access the client's real IP, which is a must-have for rate limiting, logging, or restricting resources by IP/CIDR range.
This post assumes you’ve already:
- Spun up a Digital Ocean Load Balancer.
- Connected your droplets and underlying containers.
- Verified things work with Proxy Pass disabled (the default).
- Enabled Proxy Pass, and now nothing works!
How we’ll solve this:
- Introduce a second Docker container running NGINX.
- Deploy it via a) `docker-compose` or b) a Kubernetes Pod with multiple containers.
1. Setting up our NGINX configuration
Create the `nginx.conf` file shown below, making sure to change the following values:

- `[LB_IP]`: the public IP address of your load balancer.
- `[SERVICE_OR_POD_NAME]`: the name of the web app service (`docker-compose`) or the container (Kubernetes). Name this whatever you want; just use the same name in the next step.
- `[PORT]`: the port your web app container exposes. You can omit this if you're using port 80.
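The original `nginx.conf` isn't reproduced here, so the following is a minimal sketch of what such a configuration might look like. It assumes the standard `ngx_http_realip_module` (bundled with the official NGINX image); `[LB_IP]`, `[SERVICE_OR_POD_NAME]`, and `[PORT]` are the placeholders described above:

```nginx
events {}

http {
  server {
    # Accept the Proxy Protocol header that the load balancer now sends
    listen 80 proxy_protocol;

    # Only trust Proxy Protocol information coming from your load balancer
    set_real_ip_from [LB_IP];
    real_ip_header proxy_protocol;

    location / {
      proxy_pass http://[SERVICE_OR_POD_NAME]:[PORT];

      # Pass the real client IP along to the web app
      proxy_set_header X-Real-IP $proxy_protocol_addr;
      proxy_set_header X-Forwarded-For $proxy_protocol_addr;
    }
  }
}
```

With this in place, NGINX terminates the Proxy Protocol and your app only has to read a normal `X-Real-IP`/`X-Forwarded-For` header.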
2a. Implementing for standalone Docker with docker-compose
If you’re using Kubernetes, briefly skim this section and then proceed to 2b for the actual Pod implementation you’ll use.
You could do this step without `docker-compose`, but a `docker-compose.yml` file makes managing two containers per droplet easier. The main takeaway is the bridged networking setup (called `backbone` in this example), which allows the `nginx` service to talk with the web app service.
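The original `docker-compose.yml` isn't shown here, so this is a sketch of the two-container setup under the assumptions above; `webapp` and `your-web-app-image` are placeholders for your own service name (matching `[SERVICE_OR_POD_NAME]` in `nginx.conf`) and image:

```yaml
version: "3"

services:
  nginx:
    image: nginx:stable
    volumes:
      # Mount the nginx.conf from step 1
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "80:80"
    networks:
      - backbone

  # Use the same name here that you put in nginx.conf ([SERVICE_OR_POD_NAME])
  webapp:
    image: your-web-app-image
    expose:
      - "[PORT]"
    networks:
      - backbone

networks:
  # The bridged network that lets nginx reach the web app by service name
  backbone:
    driver: bridge
```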
Once you've copied the `docker-compose.yml` file to your droplet, run `docker-compose up -d` on the droplet. If all goes well, your load balancer should be healthy and happy again within a few minutes!
2b. Implementing with Kubernetes
Very similar to the configuration above, except in Pod format for Kubernetes:
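The original Pod manifest isn't reproduced here, so the following is a hypothetical sketch of the same two-container idea in Pod form; the pod name, container names, and `your-web-app-image` are placeholders, and the `nginx.conf` from step 1 is assumed to be stored in a ConfigMap named `nginx-conf`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
    # NGINX sidecar that terminates the Proxy Protocol
    - name: nginx
      image: nginx:stable
      ports:
        - containerPort: 80
      volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
    # Your web app container; keep the name consistent with nginx.conf
    - name: webapp
      image: your-web-app-image
      ports:
        - containerPort: 8080
  volumes:
    - name: nginx-conf
      configMap:
        name: nginx-conf
```

Since both containers live in the same Pod, they share a network namespace, so the NGINX sidecar can reach the web app without any extra network configuration.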