mirror of https://github.com/thib8956/nginx-proxy synced 2025-07-03 15:25:45 +00:00

3 Commits

Author SHA1 Message Date
7470514e7b Merge pull request #1 from harvdogg/master (Allow complete override of location blocks) 2020-04-13 19:45:23 +02:00
593c3c29b0 Adjusted formatting on README 2018-10-17 19:22:14 -07:00
c984ed2b18 Added support for completely overriding the location blocks for proxied containers. 2018-10-17 19:21:02 -07:00
314 changed files with 2266 additions and 8879 deletions

View File

@ -1,9 +1,6 @@
 .git
-.github
-test
 .dockerignore
-.gitignore
-*.yml
-Dockerfile*
+circle.yml
 Makefile
 README.md
+test

View File

@ -1,35 +1,14 @@
-# ⚠️ PLEASE READ ⚠️
+# !!!PLEASE READ!!!
-## Questions or Features
+## Questions
-If you have a question or want to request a feature, please **DO NOT SUBMIT** a new issue.
-Instead please use the relevant Discussions section's category:
-- 🙏 [Ask a question](https://github.com/nginx-proxy/nginx-proxy/discussions/categories/q-a)
-- 💡 [Request a feature](https://github.com/nginx-proxy/nginx-proxy/discussions/categories/ideas)
+If you have a question, DO NOT SUBMIT a new issue. Please ask the question on the Q&A Group: https://groups.google.com/forum/#!forum/nginx-proxy
-## Bugs
+## Bugs or Features
-If you are logging a bug, please search the current open issues first to see if there is already a bug opened.
+If you are logging a bug or feature request, please search the current open issues to see if there is already a bug or feature opened.
-For bugs, the easier you make it to reproduce the issue you see and the more initial information you provide, the easier and faster the bug can be identified and can get fixed.
-Please at least provide:
-- the exact nginx-proxy version you're using (if using `latest` please make sure it is up to date and provide the version number printed at container startup).
-- complete configuration (compose file, command line, etc) of both your nginx-proxy container(s) and proxied containers. You should redact sensitive info if needed but please provide **full** configurations.
-- generated nginx configuration obtained with `docker exec nameofyournginxproxycontainer nginx -T`
-If you can provide a script or docker-compose file that reproduces the problems, that is very helpful.
-## General advice about `latest`
-Do not use the `latest` tag for production setups.
-`latest` is nothing more than a convenient default used by Docker if no specific tag is provided, there isn't any strict convention on what goes into this tag over different projects, and it does not carry any promise of stability.
-Using `latest` will most certainly put you at risk of experiencing uncontrolled updates to non backward compatible versions (or versions with breaking changes) and makes it harder for maintainers to track which exact version of the container you are experiencing an issue with.
-This recommendation stands for pretty much every Docker image in existence, not just nginx-proxy's ones.
 Thanks,
-Nicolas
+Jason

View File

@ -1,32 +0,0 @@
version: 2
updates:
# Maintain dependencies for Docker
- package-ecosystem: "docker"
directory: "/"
schedule:
interval: "daily"
commit-message:
prefix: "build"
labels:
- "type/build"
- "scope/dockerfile"
# Maintain Python dependencies (test suite)
- package-ecosystem: "pip"
directory: "/test/requirements"
schedule:
interval: "weekly"
commit-message:
prefix: "ci"
labels:
- "type/ci"
# Maintain GitHub Actions
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
commit-message:
prefix: "ci"
labels:
- "type/ci"

View File

@ -1,85 +0,0 @@
name: Build and publish Docker images on demand
on:
workflow_dispatch:
inputs:
image_tag:
description: "Image tag"
type: string
required: true
jobs:
multiarch-build:
name: Build and publish ${{ matrix.base }} image with tag ${{ inputs.image_tag }}
strategy:
matrix:
base: [alpine, debian]
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Retrieve nginx-proxy version
id: nginx-proxy_version
run: echo "VERSION=$(git describe --tags)" >> "$GITHUB_OUTPUT"
- name: Retrieve docker-gen version
id: docker-gen_version
run: sed -n -e 's;^FROM nginxproxy/docker-gen:\([0-9.]*\).*;VERSION=\1;p' Dockerfile.${{ matrix.base }} >> "$GITHUB_OUTPUT"
- name: Get Docker tags
id: docker_meta
uses: docker/metadata-action@v5
with:
images: |
nginxproxy/nginx-proxy
tags: |
type=raw,value=${{ inputs.image_tag }},enable=${{ matrix.base == 'debian' }}
type=raw,value=${{ inputs.image_tag }},suffix=-alpine,enable=${{ matrix.base == 'alpine' }}
labels: |
org.opencontainers.image.authors=Nicolas Duchon <nicolas.duchon@gmail.com> (@buchdag), Jason Wilder
org.opencontainers.image.version=${{ steps.nginx-proxy_version.outputs.VERSION }}
flavor: |
latest=false
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Log in to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push the image
id: docker_build
uses: docker/build-push-action@v6
with:
context: .
file: Dockerfile.${{ matrix.base }}
build-args: |
NGINX_PROXY_VERSION=${{ steps.nginx-proxy_version.outputs.VERSION }}
DOCKER_GEN_VERSION=${{ steps.docker-gen_version.outputs.VERSION }}
platforms: linux/amd64,linux/arm64,linux/s390x,linux/arm/v7
sbom: true
push: true
provenance: mode=max
tags: ${{ steps.docker_meta.outputs.tags }}
labels: ${{ steps.docker_meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Images digests
run: echo ${{ steps.docker_build.outputs.digest }}

View File

@ -1,101 +0,0 @@
name: Build and publish Docker images
on:
workflow_dispatch:
schedule:
- cron: "0 0 * * 1"
push:
branches:
- main
tags:
- "*.*.*"
paths-ignore:
- "test/*"
- ".gitignore"
- "docker-compose-separate-containers.yml"
- "docker-compose.yml"
- "LICENSE"
- "Makefile"
- "*.md"
jobs:
multiarch-build:
name: Build and publish image
strategy:
matrix:
base: [alpine, debian]
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Retrieve nginx-proxy version
id: nginx-proxy_version
run: echo "VERSION=$(git describe --tags)" >> "$GITHUB_OUTPUT"
- name: Retrieve docker-gen version
id: docker-gen_version
run: sed -n -e 's;^FROM nginxproxy/docker-gen:\([0-9.]*\).*;VERSION=\1;p' Dockerfile.${{ matrix.base }} >> "$GITHUB_OUTPUT"
- name: Get Docker tags
id: docker_meta
uses: docker/metadata-action@v5
with:
images: |
ghcr.io/nginx-proxy/nginx-proxy
nginxproxy/nginx-proxy
jwilder/nginx-proxy
tags: |
type=semver,pattern={{version}},enable=${{ matrix.base == 'debian' }}
type=semver,pattern={{major}}.{{minor}},enable=${{ matrix.base == 'debian' }}
type=semver,suffix=-alpine,pattern={{version}},enable=${{ matrix.base == 'alpine' }}
type=semver,suffix=-alpine,pattern={{major}}.{{minor}},enable=${{ matrix.base == 'alpine' }}
type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' && matrix.base == 'debian' }}
type=raw,value=alpine,enable=${{ github.ref == 'refs/heads/main' && matrix.base == 'alpine' }}
labels: |
org.opencontainers.image.authors=Nicolas Duchon <nicolas.duchon@gmail.com> (@buchdag), Jason Wilder
org.opencontainers.image.version=${{ steps.nginx-proxy_version.outputs.VERSION }}
flavor: |
latest=false
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Log in to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push the image
id: docker_build
uses: docker/build-push-action@v6
with:
context: .
file: Dockerfile.${{ matrix.base }}
build-args: |
NGINX_PROXY_VERSION=${{ steps.nginx-proxy_version.outputs.VERSION }}
DOCKER_GEN_VERSION=${{ steps.docker-gen_version.outputs.VERSION }}
platforms: linux/amd64,linux/arm64,linux/s390x,linux/arm/v7
sbom: true
push: true
provenance: mode=max
tags: ${{ steps.docker_meta.outputs.tags }}
labels: ${{ steps.docker_meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
- name: Images digests
run: echo ${{ steps.docker_build.outputs.digest }}

View File

@ -1,27 +0,0 @@
name: Update Docker Hub Description
on:
push:
branches:
- main
paths:
- README.md
- .github/workflows/dockerhub-description.yml
jobs:
dockerHubDescription:
name: Update Docker Hub Description
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Docker Hub Description
uses: peter-evans/dockerhub-description@v4
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN_RWD }}
repository: nginxproxy/nginx-proxy
short-description: ${{ github.event.repository.description }}
enable-url-completion: true

View File

@ -1,50 +0,0 @@
name: Tests
on:
workflow_dispatch:
push:
branches:
- main
paths-ignore:
- "LICENSE"
- "**.md"
pull_request:
paths-ignore:
- "LICENSE"
- "**.md"
jobs:
unit:
name: Unit Tests
runs-on: ubuntu-latest
strategy:
matrix:
base_docker_image: [alpine, debian]
steps:
- uses: actions/checkout@v4
- name: Set up Python 3.12
uses: actions/setup-python@v5
with:
python-version: 3.12
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r python-requirements.txt
working-directory: test/requirements
- name: Pull nginx:alpine image
run: docker pull nginx:alpine
- name: Build Docker web server image
run: make build-webserver
- name: Build Docker nginx proxy test image
run: make build-nginx-proxy-test-${{ matrix.base_docker_image }}
- name: Run tests
run: pytest
working-directory: test

.gitignore (vendored, 1 line changed)
View File

@ -1,4 +1,3 @@
 **/__pycache__/
 **/.cache/
 .idea/
-wip

.travis.yml (new file, 22 lines)
View File

@ -0,0 +1,22 @@
dist: trusty
sudo: required
env:
matrix:
- TEST_TARGET: test-debian
- TEST_TARGET: test-alpine
before_install:
- sudo apt-get -y remove docker docker-engine docker-ce
- sudo rm /etc/apt/sources.list.d/docker.list
- curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
- sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
- sudo apt-get update
- sudo apt-get -y install docker-ce
- docker version
- docker info
# prepare docker test requirements
- make update-dependencies
script:
- make $TEST_TARGET

Dockerfile (new file, 37 lines)
View File

@ -0,0 +1,37 @@
FROM nginx:1.17.8
LABEL maintainer="Jason Wilder mail@jasonwilder.com"
# Install wget and install/updates certificates
RUN apt-get update \
&& apt-get install -y -q --no-install-recommends \
ca-certificates \
wget \
&& apt-get clean \
&& rm -r /var/lib/apt/lists/*
# Configure Nginx and apply fix for very long server names
RUN echo "daemon off;" >> /etc/nginx/nginx.conf \
&& sed -i 's/worker_processes 1/worker_processes auto/' /etc/nginx/nginx.conf
# Install Forego
ADD https://github.com/jwilder/forego/releases/download/v0.16.1/forego /usr/local/bin/forego
RUN chmod u+x /usr/local/bin/forego
ENV DOCKER_GEN_VERSION 0.7.4
RUN wget https://github.com/jwilder/docker-gen/releases/download/$DOCKER_GEN_VERSION/docker-gen-linux-amd64-$DOCKER_GEN_VERSION.tar.gz \
&& tar -C /usr/local/bin -xvzf docker-gen-linux-amd64-$DOCKER_GEN_VERSION.tar.gz \
&& rm /docker-gen-linux-amd64-$DOCKER_GEN_VERSION.tar.gz
COPY network_internal.conf /etc/nginx/
COPY . /app/
WORKDIR /app/
ENV DOCKER_HOST unix:///tmp/docker.sock
VOLUME ["/etc/nginx/certs", "/etc/nginx/dhparam"]
ENTRYPOINT ["/app/docker-entrypoint.sh"]
CMD ["forego", "start", "-r"]

View File

@ -1,38 +1,34 @@
-FROM docker.io/nginxproxy/docker-gen:0.14.5 AS docker-gen
-FROM docker.io/nginxproxy/forego:0.18.2 AS forego
-# Build the final image
-FROM docker.io/library/nginx:1.27.3-alpine
-ARG NGINX_PROXY_VERSION
-# Add DOCKER_GEN_VERSION environment variable because
-# acme-companion rely on it (but the actual value is not important)
-ARG DOCKER_GEN_VERSION="unknown"
-ENV NGINX_PROXY_VERSION=${NGINX_PROXY_VERSION} \
-    DOCKER_GEN_VERSION=${DOCKER_GEN_VERSION} \
-    DOCKER_HOST=unix:///tmp/docker.sock
-# Install dependencies
-RUN apk add --no-cache --virtual .run-deps bash openssl
-# Configure Nginx
-RUN echo -e "\ninclude /etc/nginx/toplevel.conf.d/*.conf;" >> /etc/nginx/nginx.conf \
-    && sed -i 's/worker_connections.*;$/worker_connections 10240;/' /etc/nginx/nginx.conf \
-    && sed -i -e '/^\}$/{s//\}\nworker_rlimit_nofile 20480;/;:a' -e '$!N;$!ba' -e '}' /etc/nginx/nginx.conf \
-    && mkdir -p '/etc/nginx/toplevel.conf.d' \
-    && mkdir -p '/etc/nginx/dhparam' \
-    && mkdir -p '/etc/nginx/certs' \
-    && mkdir -p '/usr/share/nginx/html/errors'
-# Install Forego + docker-gen
-COPY --from=forego /usr/local/bin/forego /usr/local/bin/forego
-COPY --from=docker-gen /usr/local/bin/docker-gen /usr/local/bin/docker-gen
+FROM nginx:1.17.8-alpine
+LABEL maintainer="Jason Wilder mail@jasonwilder.com"
+# Install wget and install/updates certificates
+RUN apk add --no-cache --virtual .run-deps \
+    ca-certificates bash wget openssl \
+    && update-ca-certificates
+# Configure Nginx and apply fix for very long server names
+RUN echo "daemon off;" >> /etc/nginx/nginx.conf \
+ && sed -i 's/worker_processes 1/worker_processes auto/' /etc/nginx/nginx.conf
+# Install Forego
+ADD https://github.com/jwilder/forego/releases/download/v0.16.1/forego /usr/local/bin/forego
+RUN chmod u+x /usr/local/bin/forego
+ENV DOCKER_GEN_VERSION 0.7.4
+RUN wget --quiet https://github.com/jwilder/docker-gen/releases/download/$DOCKER_GEN_VERSION/docker-gen-alpine-linux-amd64-$DOCKER_GEN_VERSION.tar.gz \
+ && tar -C /usr/local/bin -xvzf docker-gen-alpine-linux-amd64-$DOCKER_GEN_VERSION.tar.gz \
+ && rm /docker-gen-alpine-linux-amd64-$DOCKER_GEN_VERSION.tar.gz
 COPY network_internal.conf /etc/nginx/
-COPY app nginx.tmpl LICENSE /app/
+COPY . /app/
 WORKDIR /app/
+ENV DOCKER_HOST unix:///tmp/docker.sock
+VOLUME ["/etc/nginx/certs", "/etc/nginx/dhparam"]
 ENTRYPOINT ["/app/docker-entrypoint.sh"]
 CMD ["forego", "start", "-r"]

View File

@ -1,35 +0,0 @@
FROM docker.io/nginxproxy/docker-gen:0.14.5-debian AS docker-gen
FROM docker.io/nginxproxy/forego:0.18.2-debian AS forego
# Build the final image
FROM docker.io/library/nginx:1.27.3
ARG NGINX_PROXY_VERSION
# Add DOCKER_GEN_VERSION environment variable because
# acme-companion rely on it (but the actual value is not important)
ARG DOCKER_GEN_VERSION="unknown"
ENV NGINX_PROXY_VERSION=${NGINX_PROXY_VERSION} \
DOCKER_GEN_VERSION=${DOCKER_GEN_VERSION} \
DOCKER_HOST=unix:///tmp/docker.sock
# Configure Nginx
RUN echo "\ninclude /etc/nginx/toplevel.conf.d/*.conf;" >> /etc/nginx/nginx.conf \
&& sed -i 's/worker_connections.*;$/worker_connections 10240;/' /etc/nginx/nginx.conf \
&& sed -i -e '/^\}$/{s//\}\nworker_rlimit_nofile 20480;/;:a' -e '$!N;$!ba' -e '}' /etc/nginx/nginx.conf \
&& mkdir -p '/etc/nginx/toplevel.conf.d' \
&& mkdir -p '/etc/nginx/dhparam' \
&& mkdir -p '/etc/nginx/certs' \
&& mkdir -p '/usr/share/nginx/html/errors'
# Install Forego + docker-gen
COPY --from=forego /usr/local/bin/forego /usr/local/bin/forego
COPY --from=docker-gen /usr/local/bin/docker-gen /usr/local/bin/docker-gen
COPY network_internal.conf /etc/nginx/
COPY app nginx.tmpl LICENSE /app/
WORKDIR /app/
ENTRYPOINT ["/app/docker-entrypoint.sh"]
CMD ["forego", "start", "-r"]

View File

@ -1,7 +1,6 @@
 The MIT License (MIT)
-Copyright (c) 2014-2020 Jason Wilder
-Copyright (c) 2021-2022 Nicolas Duchon
+Copyright (c) 2014 Jason Wilder
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

View File

@ -2,19 +2,15 @@
 .PHONY : test-debian test-alpine test
-build-webserver:
-    docker build --pull -t web test/requirements/web
-build-nginx-proxy-test-debian:
-    docker build --pull --build-arg NGINX_PROXY_VERSION="test" -f Dockerfile.debian -t nginxproxy/nginx-proxy:test .
-build-nginx-proxy-test-alpine:
-    docker build --pull --build-arg NGINX_PROXY_VERSION="test" -f Dockerfile.alpine -t nginxproxy/nginx-proxy:test .
-test-debian: build-webserver build-nginx-proxy-test-debian
+update-dependencies:
+    test/requirements/build.sh
+test-debian: update-dependencies
+    docker build -t jwilder/nginx-proxy:test .
     test/pytest.sh
-test-alpine: build-webserver build-nginx-proxy-test-alpine
+test-alpine: update-dependencies
+    docker build -f Dockerfile.alpine -t jwilder/nginx-proxy:test .
     test/pytest.sh
 test: test-debian test-alpine

View File

@ -1,2 +1,2 @@
 dockergen: docker-gen -watch -notify "nginx -s reload" /app/nginx.tmpl /etc/nginx/conf.d/default.conf
-nginx: nginx -g "daemon off;"
+nginx: nginx

README.md (478 lines changed)
View File

@ -1,77 +1,453 @@
-[![Test](https://github.com/nginx-proxy/nginx-proxy/actions/workflows/test.yml/badge.svg)](https://github.com/nginx-proxy/nginx-proxy/actions/workflows/test.yml)
-[![GitHub release](https://img.shields.io/github/v/release/nginx-proxy/nginx-proxy)](https://github.com/nginx-proxy/nginx-proxy/releases)
-[![nginx 1.27.3](https://img.shields.io/badge/nginx-1.27.3-brightgreen.svg?logo=nginx)](https://nginx.org/en/CHANGES)
-[![Docker Image Size](https://img.shields.io/docker/image-size/nginxproxy/nginx-proxy?sort=semver)](https://hub.docker.com/r/nginxproxy/nginx-proxy "Click to view the image on Docker Hub")
-[![Docker stars](https://img.shields.io/docker/stars/nginxproxy/nginx-proxy.svg)](https://hub.docker.com/r/nginxproxy/nginx-proxy "DockerHub")
-[![Docker pulls](https://img.shields.io/docker/pulls/nginxproxy/nginx-proxy.svg)](https://hub.docker.com/r/nginxproxy/nginx-proxy "DockerHub")
-nginx-proxy sets up a container running nginx and [docker-gen](https://github.com/nginx-proxy/docker-gen). docker-gen generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
-See [Automated Nginx Reverse Proxy for Docker](http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/) for why you might want to use this.
-### Usage
-To run it:
-```console
-docker run --detach \
-    --name nginx-proxy \
-    --publish 80:80 \
-    --volume /var/run/docker.sock:/tmp/docker.sock:ro \
-    nginxproxy/nginx-proxy:1.6
-```
-Then start any containers (here an nginx container) you want proxied with an env var `VIRTUAL_HOST=subdomain.yourdomain.com`
-```console
-docker run --detach \
-    --name your-proxied-app \
-    --env VIRTUAL_HOST=foo.bar.com \
-    nginx
-```
-Provided your DNS is setup to resolve `foo.bar.com` to the host running nginx-proxy, a request to `http://foo.bar.com` will then be routed to a container with the `VIRTUAL_HOST` env var set to `foo.bar.com` (in this case, the **your-proxied-app** container).
-The containers being proxied must :
-- [expose](https://docs.docker.com/engine/reference/run/#expose-incoming-ports) the port to be proxied, either by using the `EXPOSE` directive in their `Dockerfile` or by using the `--expose` flag to `docker run` or `docker create`.
-- share at least one Docker network with the nginx-proxy container: by default, if you don't pass the `--net` flag when your nginx-proxy container is created, it will only be attached to the default bridge network. This means that it will not be able to connect to containers on networks other than bridge.
-Note: providing a port number in `VIRTUAL_HOST` isn't suported, please see [virtual ports](https://github.com/nginx-proxy/nginx-proxy/tree/main/docs#virtual-ports) or [custom external HTTP/HTTPS ports](https://github.com/nginx-proxy/nginx-proxy/tree/main/docs#custom-external-httphttps-ports) depending on what you want to achieve.
-### Image variants
-The nginx-proxy images are available in two flavors.
-#### Debian based version
-This image is based on the nginx:mainline image, itself based on the debian slim image.
-```console
-docker pull nginxproxy/nginx-proxy:1.6
-```
-#### Alpine based version (`-alpine` suffix)
-This image is based on the nginx:alpine image.
-```console
-docker pull nginxproxy/nginx-proxy:1.6-alpine
-```
-> [!IMPORTANT]
->
-> #### A note on `latest` and `alpine`:
->
-> It is not recommended to use the `latest` (`nginxproxy/nginx-proxy`, `nginxproxy/nginx-proxy:latest`) or `alpine` (`nginxproxy/nginx-proxy:alpine`) tag for production setups.
->
-> [Those tags point](https://hub.docker.com/r/nginxproxy/nginx-proxy/tags) to the latest commit in the `main` branch. They do not carry any promise of stability, and using them will probably put your nginx-proxy setup at risk of experiencing uncontrolled updates to non backward compatible versions (or versions with breaking changes). You should always specify the version you want to use explicitly to ensure your setup doesn't break when the image is updated.
-### Additional documentation
-Please check the [docs section](https://github.com/nginx-proxy/nginx-proxy/tree/main/docs).
-### Powered by
-[![GoLand logo](https://resources.jetbrains.com/storage/products/company/brand/logos/GoLand_icon.svg)](https://www.jetbrains.com/go/)
-[![PyCharm logo](https://resources.jetbrains.com/storage/products/company/brand/logos/PyCharm_icon.svg)](https://www.jetbrains.com/pycharm/)
![latest 0.7.0](https://img.shields.io/badge/latest-0.7.0-green.svg?style=flat) ![nginx 1.17.8](https://img.shields.io/badge/nginx-1.17.8-brightgreen.svg) ![License MIT](https://img.shields.io/badge/license-MIT-blue.svg) [![Build Status](https://travis-ci.org/jwilder/nginx-proxy.svg?branch=master)](https://travis-ci.org/jwilder/nginx-proxy) [![](https://img.shields.io/docker/stars/jwilder/nginx-proxy.svg)](https://hub.docker.com/r/jwilder/nginx-proxy 'DockerHub') [![](https://img.shields.io/docker/pulls/jwilder/nginx-proxy.svg)](https://hub.docker.com/r/jwilder/nginx-proxy 'DockerHub')
nginx-proxy sets up a container running nginx and [docker-gen][1]. docker-gen generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
See [Automated Nginx Reverse Proxy for Docker][2] for why you might want to use this.
### Usage
To run it:
    $ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
Then start any containers you want proxied with an env var `VIRTUAL_HOST=subdomain.youdomain.com`
    $ docker run -e VIRTUAL_HOST=foo.bar.com ...
The containers being proxied must [expose](https://docs.docker.com/engine/reference/run/#expose-incoming-ports) the port to be proxied, either by using the `EXPOSE` directive in their `Dockerfile` or by using the `--expose` flag to `docker run` or `docker create` and be in the same network. By default, if you don't pass the --net flag when your nginx-proxy container is created, it will only be attached to the default bridge network. This means that it will not be able to connect to containers on networks other than bridge.
Provided your DNS is setup to forward foo.bar.com to the host running nginx-proxy, the request will be routed to a container with the VIRTUAL_HOST env var set.
### Image variants
The nginx-proxy images are available in two flavors.
#### jwilder/nginx-proxy:latest
This image uses the debian:jessie based nginx image.
    $ docker pull jwilder/nginx-proxy:latest
#### jwilder/nginx-proxy:alpine
This image is based on the nginx:alpine image. Use this image to fully support HTTP/2 (including ALPN required by recent Chrome versions). A valid certificate is required as well (see eg. below "SSL Support using letsencrypt" for more info).
    $ docker pull jwilder/nginx-proxy:alpine
### Docker Compose
```yaml
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  whoami:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=whoami.local
```
```shell
$ docker-compose up
$ curl -H "Host: whoami.local" localhost
I'm 5b129ab83266
```
### IPv6 support
You can activate the IPv6 support for the nginx-proxy container by passing the value `true` to the `ENABLE_IPV6` environment variable:
    $ docker run -d -p 80:80 -e ENABLE_IPV6=true -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
### Multiple Ports
If your container exposes multiple ports, nginx-proxy will default to the service running on port 80. If you need to specify a different port, you can set a VIRTUAL_PORT env var to select a different one. If your container only exposes one port and it has a VIRTUAL_HOST env var set, that port will be selected.
[1]: https://github.com/jwilder/docker-gen
[2]: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
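For example (an illustrative sketch; `my-app-image` and the port are placeholders), a container listening on port 8080 can be proxied by combining `VIRTUAL_PORT` with `VIRTUAL_HOST` as described in the Multiple Ports section above:
```console
$ docker run -d -e VIRTUAL_HOST=foo.bar.com -e VIRTUAL_PORT=8080 my-app-image
```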
### Multiple Hosts
If you need to support multiple virtual hosts for a container, you can separate each entry with commas. For example, `foo.bar.com,baz.bar.com,bar.com` and each host will be setup the same.
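For example (an illustrative sketch, `my-app-image` being a placeholder image):
```console
$ docker run -d -e VIRTUAL_HOST=foo.bar.com,baz.bar.com,bar.com my-app-image
```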
### Wildcard Hosts
You can also use wildcards at the beginning and the end of host name, like `*.bar.com` or `foo.bar.*`. Or even a regular expression, which can be very useful in conjunction with a wildcard DNS service like [xip.io](http://xip.io), using `~^foo\.bar\..*\.xip\.io` will match `foo.bar.127.0.0.1.xip.io`, `foo.bar.10.0.2.2.xip.io` and all other given IPs. More information about this topic can be found in the nginx documentation about [`server_names`](http://nginx.org/en/docs/http/server_names.html).
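A minimal sketch of a wildcard host, quoting the value so the shell does not expand the `*` (placeholder image and domain):
```console
$ docker run -d -e 'VIRTUAL_HOST=*.bar.com' my-app-image
```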
### Multiple Networks
With the addition of [overlay networking](https://docs.docker.com/engine/userguide/networking/get-started-overlay/) in Docker 1.9, your `nginx-proxy` container may need to connect to backend containers on multiple networks. By default, if you don't pass the `--net` flag when your `nginx-proxy` container is created, it will only be attached to the default `bridge` network. This means that it will not be able to connect to containers on networks other than `bridge`.
If you want your `nginx-proxy` container to be attached to a different network, you must pass the `--net=my-network` option in your `docker create` or `docker run` command. At the time of this writing, only a single network can be specified at container creation time. To attach to other networks, you can use the `docker network connect` command after your container is created:
```console
$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro \
--name my-nginx-proxy --net my-network jwilder/nginx-proxy
$ docker network connect my-other-network my-nginx-proxy
```
In this example, the `my-nginx-proxy` container will be connected to `my-network` and `my-other-network` and will be able to proxy to other containers attached to those networks.
### Internet vs. Local Network Access
If you allow traffic from the public internet to access your `nginx-proxy` container, you may want to restrict some containers to the internal network only, so they cannot be accessed from the public internet. On containers that should be restricted to the internal network, you should set the environment variable `NETWORK_ACCESS=internal`. By default, the *internal* network is defined as `127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16`. To change the list of networks considered internal, mount a file on the `nginx-proxy` at `/etc/nginx/network_internal.conf` with these contents, edited to suit your needs:
```
# These networks are considered "internal"
allow 127.0.0.0/8;
allow 10.0.0.0/8;
allow 192.168.0.0/16;
allow 172.16.0.0/12;
# Traffic from all other networks will be rejected
deny all;
```
When internal-only access is enabled, external clients will be denied with an `HTTP 403 Forbidden` response.
> If there is a load-balancer / reverse proxy in front of `nginx-proxy` that hides the client IP (example: AWS Application/Elastic Load Balancer), you will need to use the nginx `realip` module (already installed) to extract the client's IP from the HTTP request headers. Please see the [nginx realip module configuration](http://nginx.org/en/docs/http/ngx_http_realip_module.html) for more details. This configuration can be added to a new config file and mounted in `/etc/nginx/conf.d/`.
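For example (illustrative; `internal-app` is a placeholder image), a backend restricted to the internal networks listed above:
```console
$ docker run -d -e VIRTUAL_HOST=intranet.example.com -e NETWORK_ACCESS=internal internal-app
```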
### SSL Backends
If you would like the reverse proxy to connect to your backend using HTTPS instead of HTTP, set `VIRTUAL_PROTO=https` on the backend container.
> Note: If you use `VIRTUAL_PROTO=https` and your backend container exposes port 80 and 443, `nginx-proxy` will use HTTPS on port 80. This is almost certainly not what you want, so you should also include `VIRTUAL_PORT=443`.
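A sketch of an HTTPS backend that exposes both ports 80 and 443 (placeholder image), combining `VIRTUAL_PROTO` with `VIRTUAL_PORT` as recommended in the note above:
```console
$ docker run -d -e VIRTUAL_HOST=foo.bar.com -e VIRTUAL_PROTO=https -e VIRTUAL_PORT=443 my-tls-app
```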
### uWSGI Backends
If you would like to connect to a uWSGI backend, set `VIRTUAL_PROTO=uwsgi` on the
backend container. Your backend container should then listen on a port rather
than a socket and expose that port.
### FastCGI Backends
If you would like to connect to a FastCGI backend, set `VIRTUAL_PROTO=fastcgi` on the
backend container. Your backend container should then listen on a port rather
than a socket and expose that port.
### FastCGI File Root Directory
If you use FastCGI, you can set `VIRTUAL_ROOT=xxx` to specify your root directory.
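For example (an illustrative sketch; image name and path are placeholders), a FastCGI backend with a custom root directory:
```console
$ docker run -d -e VIRTUAL_HOST=foo.bar.com -e VIRTUAL_PROTO=fastcgi -e VIRTUAL_ROOT=/var/www/html my-fcgi-app
```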
### Default Host
To set the default host for nginx use the env var `DEFAULT_HOST=foo.bar.com` for example
$ docker run -d -p 80:80 -e DEFAULT_HOST=foo.bar.com -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
### Separate Containers
nginx-proxy can also be run as two separate containers using the [jwilder/docker-gen](https://index.docker.io/u/jwilder/docker-gen/)
image and the official [nginx](https://registry.hub.docker.com/_/nginx/) image.
You may want to do this to prevent having the docker socket bound to a publicly exposed container service.
You can demo this pattern with docker-compose:
```console
$ docker-compose --file docker-compose-separate-containers.yml up
$ curl -H "Host: whoami.local" localhost
I'm 5b129ab83266
```
To run nginx proxy as a separate container you'll need to have [nginx.tmpl](https://github.com/jwilder/nginx-proxy/blob/master/nginx.tmpl) on your host system.
First start nginx with a volume:
$ docker run -d -p 80:80 --name nginx -v /tmp/nginx:/etc/nginx/conf.d -t nginx
Then start the docker-gen container with the shared volume and template:
```
$ docker run --volumes-from nginx \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
-v $(pwd):/etc/docker-gen/templates \
-t jwilder/docker-gen -notify-sighup nginx -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
```
Finally, start your containers with `VIRTUAL_HOST` environment variables.
$ docker run -e VIRTUAL_HOST=foo.bar.com ...
### SSL Support using letsencrypt
[letsencrypt-nginx-proxy-companion](https://github.com/JrCs/docker-letsencrypt-nginx-proxy-companion) is a lightweight companion container for the nginx-proxy. It allows the creation/renewal of Let's Encrypt certificates automatically.
Set the `DHPARAM_GENERATION` environment variable to `false` to disable Diffie-Hellman parameters completely. This will also skip the auto-generation performed by `nginx-proxy`.
The default value is `true`.
$ docker run -e DHPARAM_GENERATION=false ....
### SSL Support
SSL is supported using single host, wildcard and SNI certificates using naming conventions for
certificates or optionally specifying a cert name (for SNI) as an environment variable.
To enable SSL:
$ docker run -d -p 80:80 -p 443:443 -v /path/to/certs:/etc/nginx/certs -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
The contents of `/path/to/certs` should contain the certificates and private keys for any virtual
hosts in use. The certificate and keys should be named after the virtual host with a `.crt` and
`.key` extension. For example, a container with `VIRTUAL_HOST=foo.bar.com` should have a
`foo.bar.com.crt` and `foo.bar.com.key` file in the certs directory.
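For example, with `VIRTUAL_HOST=foo.bar.com` the mounted certs directory could look like this (illustrative listing):
```console
$ ls /path/to/certs
foo.bar.com.crt  foo.bar.com.key
```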
If you are running the container in a virtualized environment (Hyper-V, VirtualBox, etc...),
/path/to/certs must exist in that environment or be made accessible to that environment.
By default, Docker is not able to mount directories on the host machine to containers running in a virtual machine.
#### Diffie-Hellman Groups
Diffie-Hellman groups are enabled by default, with a pregenerated key in `/etc/nginx/dhparam/dhparam.pem`.
You can mount a different `dhparam.pem` file at that location to override the default cert.
To use custom `dhparam.pem` files per-virtual-host, the files should be named after the virtual host with a
`dhparam` suffix and `.pem` extension. For example, a container with `VIRTUAL_HOST=foo.bar.com`
should have a `foo.bar.com.dhparam.pem` file in the `/etc/nginx/certs` directory.
> NOTE: If you don't mount a `dhparam.pem` file at `/etc/nginx/dhparam/dhparam.pem`, one will be generated
at startup. Since it can take minutes to generate a new `dhparam.pem`, it is done at low priority in the
background. Once generation is complete, the `dhparam.pem` is saved on a persistent volume and nginx
is reloaded. This generation process only occurs the first time you start `nginx-proxy`.
> COMPATIBILITY WARNING: The default generated `dhparam.pem` key is 2048 bits for A+ security. Some
> older clients (like Java 6 and 7) do not support DH keys with over 1024 bits. In order to support these
> clients, you must either provide your own `dhparam.pem`, or tell `nginx-proxy` to generate a 1024-bit
> key on startup by passing `-e DHPARAM_BITS=1024`.
In the separate container setup, no pregenerated key will be available and neither the
[jwilder/docker-gen](https://index.docker.io/u/jwilder/docker-gen/) image nor the official
[nginx](https://registry.hub.docker.com/_/nginx/) image will generate one. If you still want A+ security
in a separate container setup, you'll have to generate a 2048 bits DH key file manually and mount it on the
nginx container, at `/etc/nginx/dhparam/dhparam.pem`.
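A sketch of generating such a per-virtual-host file with openssl (paths are placeholders):
```console
$ openssl dhparam -out /path/to/certs/foo.bar.com.dhparam.pem 2048
```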
#### Wildcard Certificates
Wildcard certificates and keys should be named after the domain name with a `.crt` and `.key` extension.
For example `VIRTUAL_HOST=foo.bar.com` would use cert name `bar.com.crt` and `bar.com.key`.
#### SNI
If your certificate(s) supports multiple domain names, you can start a container with `CERT_NAME=<name>`
to identify the certificate to be used. For example, a certificate for `*.foo.com` and `*.bar.com`
could be named `shared.crt` and `shared.key`. A container running with `VIRTUAL_HOST=foo.bar.com`
and `CERT_NAME=shared` will then use this shared cert.
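For example (illustrative; `my-app-image` is a placeholder), using the shared certificate described above:
```console
$ docker run -d -e VIRTUAL_HOST=foo.bar.com -e CERT_NAME=shared my-app-image
```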
#### OCSP Stapling
To enable OCSP Stapling for a domain, `nginx-proxy` looks for a PEM certificate containing the trusted
CA certificate chain at `/etc/nginx/certs/<domain>.chain.pem`, where `<domain>` is the domain name in
the `VIRTUAL_HOST` directive. The format of this file is a concatenation of the public PEM CA
certificates starting with the intermediate CA most near the SSL certificate, down to the root CA. This is
often referred to as the "SSL Certificate Chain". If found, this filename is passed to the NGINX
[`ssl_trusted_certificate` directive](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_trusted_certificate)
and OCSP Stapling is enabled.
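A sketch of assembling such a chain file from hypothetical intermediate and root CA certificate files:
```console
$ cat intermediate-ca.pem root-ca.pem > /path/to/certs/foo.bar.com.chain.pem
```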
#### How SSL Support Works
The default SSL cipher configuration is based on the [Mozilla intermediate profile](https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29) version 5.0 which
should provide compatibility with clients back to Firefox 27, Android 4.4.2, Chrome 31, Edge, IE 11 on Windows 7,
Java 8u31, OpenSSL 1.0.1, Opera 20, and Safari 9. Note that the DES-based TLS ciphers were removed for security.
The configuration also enables HSTS, PFS, OCSP stapling and SSL session caches. Currently TLS 1.2 and 1.3
are supported.
If you don't require backward compatibility, you can use the [Mozilla modern profile](https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility)
profile instead by including the environment variable `SSL_POLICY=Mozilla-Modern` to the nginx-proxy container or to your container.
This profile is compatible with clients back to Firefox 63, Android 10.0, Chrome 70, Edge 75, Java 11,
OpenSSL 1.1.1, Opera 57, and Safari 12.1. Note that this profile is **not** compatible with any version of Internet Explorer.
Other policies available through the `SSL_POLICY` environment variable are [`Mozilla-Old`](https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility)
and the [AWS ELB Security Policies](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html)
`AWS-TLS-1-2-2017-01`, `AWS-TLS-1-1-2017-01`, `AWS-2016-08`, `AWS-2015-05`, `AWS-2015-03` and `AWS-2015-02`.
Note that the `Mozilla-Old` policy should use a 1024 bits DH key for compatibility but this container generates
a 2048 bits key. The [Diffie-Hellman Groups](#diffie-hellman-groups) section details different methods of bypassing
this, either globally or per virtual-host.
The default behavior for the proxy when port 80 and 443 are exposed is as follows:
* If a container has a usable cert, port 80 will redirect to 443 for that container so that HTTPS
is always preferred when available.
* If the container does not have a usable cert, a 503 will be returned.
Note that in the latter case, a browser may get a connection error as no certificate is available
to establish a connection. A self-signed or generic cert named `default.crt` and `default.key`
will allow a client browser to make an SSL connection (likely with a warning) and subsequently receive
a 500.
To serve traffic in both SSL and non-SSL modes without redirecting to SSL, you can include the
environment variable `HTTPS_METHOD=noredirect` (the default is `HTTPS_METHOD=redirect`). You can also
disable the non-SSL site entirely with `HTTPS_METHOD=nohttp`, or disable the HTTPS site with
`HTTPS_METHOD=nohttps`. `HTTPS_METHOD` must be specified on each container for which you want to
override the default behavior. If `HTTPS_METHOD=noredirect` is used, Strict Transport Security (HSTS)
is disabled to prevent HTTPS users from being redirected by the client. If you cannot get to the HTTP
site after changing this setting, your browser has probably cached the HSTS policy and is automatically
redirecting you back to HTTPS. You will need to clear your browser's HSTS cache or use an incognito
window / different browser.
By default, [HTTP Strict Transport Security (HSTS)](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security)
is enabled with `max-age=31536000` for HTTPS sites. You can disable HSTS with the environment variable
`HSTS=off` or use a custom HSTS configuration like `HSTS=max-age=31536000; includeSubDomains; preload`.
*WARNING*: HSTS will force your users to visit the HTTPS version of your site for the `max-age` time -
even if they type in `http://` manually. The only way to get to an HTTP site after receiving an HSTS
response is to clear your browser's HSTS cache.
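For example (illustrative; `my-app-image` is a placeholder), serving a container over both HTTP and HTTPS without the redirect and with HSTS disabled:
```console
$ docker run -d -e VIRTUAL_HOST=foo.bar.com -e HTTPS_METHOD=noredirect -e HSTS=off my-app-image
```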
### Basic Authentication Support
In order to secure your virtual host, you have to create a file named after its `VIRTUAL_HOST` value in the directory
/etc/nginx/htpasswd/$VIRTUAL_HOST
```
$ docker run -d -p 80:80 -p 443:443 \
-v /path/to/htpasswd:/etc/nginx/htpasswd \
-v /path/to/certs:/etc/nginx/certs \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
jwilder/nginx-proxy
```
You'll need apache2-utils on the machine where you plan to create the htpasswd file. Follow these [instructions](http://httpd.apache.org/docs/2.2/programs/htpasswd.html)
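For example, creating the htpasswd file for a host named `foo.bar.com` (the path and user name are placeholders):
```console
$ htpasswd -c /path/to/htpasswd/foo.bar.com username
```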
### Custom Nginx Configuration
If you need to configure Nginx beyond what is possible using environment variables, you can provide custom configuration files on either a proxy-wide or per-`VIRTUAL_HOST` basis.
#### Replacing default proxy settings
If you want to replace the default proxy settings for the nginx container, add a configuration file at `/etc/nginx/proxy.conf`. A file with the default settings would
look like this:
```Nginx
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
```
***NOTE***: If you provide this file it will replace the defaults; you may want to check the .tmpl file to make sure you have all of the needed options.
***NOTE***: The default configuration blocks the `Proxy` HTTP request header from being sent to downstream servers. This prevents attackers from using the so-called [httpoxy attack](http://httpoxy.org). There is no legitimate reason for a client to send this header, and there are many vulnerable languages / platforms (`CVE-2016-5385`, `CVE-2016-5386`, `CVE-2016-5387`, `CVE-2016-5388`, `CVE-2016-1000109`, `CVE-2016-1000110`, `CERT-VU#797896`).
#### Proxy-wide
To add settings on a proxy-wide basis, add your configuration file under `/etc/nginx/conf.d` using a name ending in `.conf`.
This can be done in a derived image by creating the file in a `RUN` command or by `COPY`ing the file into `conf.d`:
```Dockerfile
FROM jwilder/nginx-proxy
RUN { \
echo 'server_tokens off;'; \
echo 'client_max_body_size 100m;'; \
} > /etc/nginx/conf.d/my_proxy.conf
```
Or it can be done by mounting in your custom configuration in your `docker run` command:
$ docker run -d -p 80:80 -p 443:443 -v /path/to/my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf:ro -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
#### Per-VIRTUAL_HOST
To add settings on a per-`VIRTUAL_HOST` basis, add your configuration file under `/etc/nginx/vhost.d`. Unlike in the proxy-wide case, which allows multiple config files with any name ending in `.conf`, the per-`VIRTUAL_HOST` file must be named exactly after the `VIRTUAL_HOST`.
In order to allow virtual hosts to be dynamically configured as backends are added and removed, it makes the most sense to mount an external directory as `/etc/nginx/vhost.d` as opposed to using derived images or mounting individual configuration files.
For example, if you have a virtual host named `app.example.com`, you could provide a custom configuration for that host as follows:
$ docker run -d -p 80:80 -p 443:443 -v /path/to/vhost.d:/etc/nginx/vhost.d:ro -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
$ { echo 'server_tokens off;'; echo 'client_max_body_size 100m;'; } > /path/to/vhost.d/app.example.com
If you are using multiple hostnames for a single container (e.g. `VIRTUAL_HOST=example.com,www.example.com`), the virtual host configuration file must exist for each hostname. If you would like to use the same configuration for multiple virtual host names, you can use a symlink:
$ { echo 'server_tokens off;'; echo 'client_max_body_size 100m;'; } > /path/to/vhost.d/www.example.com
$ ln -s /path/to/vhost.d/www.example.com /path/to/vhost.d/example.com
#### Per-VIRTUAL_HOST default configuration
If you want most of your virtual hosts to use a default single configuration and then override on a few specific ones, add those settings to the `/etc/nginx/vhost.d/default` file. This file
will be used on any virtual host which does not have a `/etc/nginx/vhost.d/{VIRTUAL_HOST}` file associated with it.
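For example (illustrative), a default snippet applied to every virtual host that has no dedicated file:
```console
$ { echo 'server_tokens off;'; echo 'client_max_body_size 100m;'; } > /path/to/vhost.d/default
```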
#### Per-VIRTUAL_HOST location configuration
To add settings to the "location" block on a per-`VIRTUAL_HOST` basis, add your configuration file under `/etc/nginx/vhost.d`
just like the previous section except with the suffix `_location`.
For example, if you have a virtual host named `app.example.com` and you have configured a proxy_cache `my-cache` in another custom file, you could tell it to use a proxy cache as follows:
$ docker run -d -p 80:80 -p 443:443 -v /path/to/vhost.d:/etc/nginx/vhost.d:ro -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
$ { echo 'proxy_cache my-cache;'; echo 'proxy_cache_valid 200 302 60m;'; echo 'proxy_cache_valid 404 1m;'; } > /path/to/vhost.d/app.example.com_location
If you are using multiple hostnames for a single container (e.g. `VIRTUAL_HOST=example.com,www.example.com`), the virtual host configuration file must exist for each hostname. If you would like to use the same configuration for multiple virtual host names, you can use a symlink:
$ { echo 'proxy_cache my-cache;'; echo 'proxy_cache_valid 200 302 60m;'; echo 'proxy_cache_valid 404 1m;'; } > /path/to/vhost.d/www.example.com_location
$ ln -s /path/to/vhost.d/www.example.com_location /path/to/vhost.d/example.com_location
#### Per-VIRTUAL_HOST location default configuration
If you want most of your virtual hosts to use a default single `location` block configuration and then override on a few specific ones, add those settings to the `/etc/nginx/vhost.d/default_location` file. This file
will be used on any virtual host which does not have a `/etc/nginx/vhost.d/{VIRTUAL_HOST}_location` file associated with it.
#### Per-VIRTUAL_HOST custom location blocks
In some circumstances you may want to override nginx's default `/` location block behavior. Typically, this block acts as a catch-all in order to forward requests not already matched by a specific `location` block directly onto your container as-is.
To provide your own location blocks and bypass the automatic generation of them, simply add your location blocks to a configuration file under `/etc/nginx/vhost.d` like in the other Per-VIRTUAL_HOST sections except with the suffix `_locations`. Notice the 's' to make the filename plural.
The contents of this file will replace all auto-generated location blocks. Additionally, this file will take priority over the previously described location configuration.
When using location overrides, you are responsible for handling any requests that should be forwarded to your container. Passing a request to your container is done using the `proxy_pass` instruction within your defined location blocks. `proxy_pass` expects a qualified hostname in order
to forward a request. By default, nginx-proxy aliases containers to the defined `VIRTUAL_HOST` name. So if you launch your container with a `VIRTUAL_HOST` value of `app.example.com`, then forwarding a request to your container would look something like this:
```
location / {
proxy_pass http://app.example.com;
}
```
If you are using an SSL-enabled container, you would use `https://` in place of `http://`. You could include any number of other location blocks for nginx to consider and even forward requests to external hosts when they match certain conditions. You can also use any other rules and instructions
available to nginx location blocks.
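For example (illustrative), a minimal override for `app.example.com` could be written to a `_locations` file following the `proxy_pass` pattern shown above:
```console
$ { echo 'location / {'; echo '    proxy_pass http://app.example.com;'; echo '}'; } > /path/to/vhost.d/app.example.com_locations
```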
### Contributing
Before submitting pull requests or issues, please check github to make sure an existing issue or pull request is not already open.
#### Running Tests Locally
To run tests, you need to prepare the docker image to test which must be tagged `jwilder/nginx-proxy:test`:
docker build -t jwilder/nginx-proxy:test . # build the Debian variant image
and call the [test/pytest.sh](test/pytest.sh) script.
Then build the Alpine variant of the image:
docker build -f Dockerfile.alpine -t jwilder/nginx-proxy:test . # build the Alpine variant image
and call the [test/pytest.sh](test/pytest.sh) script again.
If your system has the `make` command, you can automate those tasks by calling:
make test
You can learn more about how the test suite works and how to write new tests in the [test/README.md](test/README.md) file.
### Need help?
If you have questions on how to use the image, please ask them on the [Q&A Group](https://groups.google.com/forum/#!forum/nginx-proxy)

View File

@ -1,8 +0,0 @@
-----BEGIN DH PARAMETERS-----
MIIBCAKCAQEA//////////+t+FRYortKmq/cViAnPTzx2LnFg84tNpWp4TZBFGQz
+8yTnc4kmz75fS/jY2MMddj2gbICrsRhetPfHtXV/WVhJDP1H18GbtCFY2VVPe0a
87VXE15/V8k1mE8McODmi3fipona8+/och3xWKE2rec1MKzKT0g6eXq8CrGCsyT7
YdEIqUuyyOP7uWrat2DX9GgdT0Kj3jlN9K5W7edjcrsZCwenyO4KbXCeAvzhzffi
7MA0BM0oNC9hkXL+nOmFg/+OTxIy7vKBg8P+OxtMb61zO7X8vC7CIAXFjvGDfRaD
ssbzSibBsu/6iGtCOGEoXJf//////////wIBAg==
-----END DH PARAMETERS-----

View File

@ -1,11 +0,0 @@
-----BEGIN DH PARAMETERS-----
MIIBiAKCAYEA//////////+t+FRYortKmq/cViAnPTzx2LnFg84tNpWp4TZBFGQz
+8yTnc4kmz75fS/jY2MMddj2gbICrsRhetPfHtXV/WVhJDP1H18GbtCFY2VVPe0a
87VXE15/V8k1mE8McODmi3fipona8+/och3xWKE2rec1MKzKT0g6eXq8CrGCsyT7
YdEIqUuyyOP7uWrat2DX9GgdT0Kj3jlN9K5W7edjcrsZCwenyO4KbXCeAvzhzffi
7MA0BM0oNC9hkXL+nOmFg/+OTxIy7vKBg8P+OxtMb61zO7X8vC7CIAXFjvGDfRaD
ssbzSibBsu/6iGtCOGEfz9zeNVs7ZRkDW7w09N75nAI4YbRvydbmyQd62R0mkff3
7lmMsPrBhtkcrv4TCYUTknC0EwyTvEN5RPT9RFLi103TZPLiHnH1S/9croKrnJ32
nuhtK8UiNjoNq8Uhl5sN6todv5pC1cRITgq80Gv6U93vPBsg7j/VnXwl5B0rZsYu
N///////////AgEC
-----END DH PARAMETERS-----

View File

@ -1,13 +0,0 @@
-----BEGIN DH PARAMETERS-----
MIICCAKCAgEA//////////+t+FRYortKmq/cViAnPTzx2LnFg84tNpWp4TZBFGQz
+8yTnc4kmz75fS/jY2MMddj2gbICrsRhetPfHtXV/WVhJDP1H18GbtCFY2VVPe0a
87VXE15/V8k1mE8McODmi3fipona8+/och3xWKE2rec1MKzKT0g6eXq8CrGCsyT7
YdEIqUuyyOP7uWrat2DX9GgdT0Kj3jlN9K5W7edjcrsZCwenyO4KbXCeAvzhzffi
7MA0BM0oNC9hkXL+nOmFg/+OTxIy7vKBg8P+OxtMb61zO7X8vC7CIAXFjvGDfRaD
ssbzSibBsu/6iGtCOGEfz9zeNVs7ZRkDW7w09N75nAI4YbRvydbmyQd62R0mkff3
7lmMsPrBhtkcrv4TCYUTknC0EwyTvEN5RPT9RFLi103TZPLiHnH1S/9croKrnJ32
nuhtK8UiNjoNq8Uhl5sN6todv5pC1cRITgq80Gv6U93vPBsg7j/VnXwl5B0rZp4e
8W5vUsMWTfT7eTDp5OWIV7asfV9C1p9tGHdjzx1VA0AEh/VbpX4xzHpxNciG77Qx
iu1qHgEtnmgyqQdgCpGBMMRtx3j5ca0AOAkpmaMzy4t6Gh25PXFAADwqTs6p+Y0K
zAqCkc3OyX3Pjsm1Wn+IpGtNtahR9EGC4caKAH5eZV9q//////////8CAQI=
-----END DH PARAMETERS-----

View File

@ -1,121 +0,0 @@
#!/bin/bash
set -e
function _parse_true() {
case "$1" in
true | True | TRUE | 1)
return 0
;;
*)
return 1
;;
esac
}
function _parse_false() {
case "$1" in
false | False | FALSE | 0)
return 0
;;
*)
return 1
;;
esac
}
function _print_version {
if [[ -n "${NGINX_PROXY_VERSION:-}" ]]; then
echo "Info: running nginx-proxy version ${NGINX_PROXY_VERSION}"
fi
}
function _check_unix_socket() {
# Warn if the DOCKER_HOST socket does not exist
if [[ ${DOCKER_HOST} == unix://* ]]; then
local SOCKET_FILE="${DOCKER_HOST#unix://}"
if [[ ! -S ${SOCKET_FILE} ]]; then
cat >&2 <<-EOT
ERROR: you need to share your Docker host socket with a volume at ${SOCKET_FILE}
Typically you should run your nginxproxy/nginx-proxy with: \`-v /var/run/docker.sock:${SOCKET_FILE}:ro\`
See the documentation at: https://github.com/nginx-proxy/nginx-proxy/#usage
EOT
exit 1
fi
fi
}
function _resolvers() {
# Compute the DNS resolvers for use in the templates - if the IP contains ":", it's IPv6 and must be enclosed in []
RESOLVERS=$(awk '$1 == "nameserver" {print ($2 ~ ":")? "["$2"]": $2}' ORS=' ' /etc/resolv.conf | sed 's/ *$//g'); export RESOLVERS
SCOPED_IPV6_REGEX='\[fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}\]'
if [[ -z ${RESOLVERS} ]]; then
echo 'Warning: unable to determine DNS resolvers for nginx' >&2
unset RESOLVERS
elif [[ ${RESOLVERS} =~ ${SCOPED_IPV6_REGEX} ]]; then
echo -n 'Warning: Scoped IPv6 addresses removed from resolvers: ' >&2
echo "${RESOLVERS}" | grep -Eo "$SCOPED_IPV6_REGEX" | paste -s -d ' ' >&2
RESOLVERS=$(echo "${RESOLVERS}" | sed -r "s/${SCOPED_IPV6_REGEX}//g" | xargs echo -n); export RESOLVERS
fi
}
function _setup_dhparam() {
# DH params will be supplied for nginx here:
local DHPARAM_FILE='/etc/nginx/dhparam/dhparam.pem'
# Should be 2048, 3072, or 4096 (default):
local FFDHE_GROUP="${DHPARAM_BITS:=4096}"
# DH params may be provided by the user (rarely necessary)
if [[ -f ${DHPARAM_FILE} ]]; then
echo 'Warning: A custom dhparam.pem file was provided. Best practice is to use standardized RFC7919 DHE groups instead.' >&2
return 0
elif _parse_true "${DHPARAM_SKIP:=false}"; then
echo 'Skipping Diffie-Hellman parameters setup.'
return 0
elif _parse_false "${DHPARAM_GENERATION:=true}"; then
echo 'Warning: The DHPARAM_GENERATION environment variable is deprecated, please consider using DHPARAM_SKIP set to true instead.' >&2
echo 'Skipping Diffie-Hellman parameters setup.'
return 0
elif [[ ! ${DHPARAM_BITS} =~ ^(2048|3072|4096)$ ]]; then
echo "ERROR: Unsupported DHPARAM_BITS size: ${DHPARAM_BITS}. Use: 2048, 3072, or 4096 (default)." >&2
exit 1
fi
echo 'Setting up DH Parameters..'
# Use an existing pre-generated DH group from RFC7919 (https://datatracker.ietf.org/doc/html/rfc7919#appendix-A):
local RFC7919_DHPARAM_FILE="/app/dhparam/ffdhe${FFDHE_GROUP}.pem"
# Provide the DH params file to nginx:
cp "${RFC7919_DHPARAM_FILE}" "${DHPARAM_FILE}"
}
# Run the init logic if the default CMD was provided
if [[ $* == 'forego start -r' ]]; then
_print_version
_check_unix_socket
_resolvers
_setup_dhparam
if [ -z "${TRUST_DOWNSTREAM_PROXY}" ]; then
cat >&2 <<-EOT
Warning: TRUST_DOWNSTREAM_PROXY is not set; defaulting to "true". For security, you should explicitly set TRUST_DOWNSTREAM_PROXY to "false" if there is not a trusted reverse proxy in front of this proxy.
Warning: The default value of TRUST_DOWNSTREAM_PROXY might change to "false" in a future version of nginx-proxy. If you require TRUST_DOWNSTREAM_PROXY to be enabled, explicitly set it to "true".
EOT
fi
fi
exec "$@"

dhparam.pem.default (new file, 8 lines)
View File

@ -0,0 +1,8 @@
-----BEGIN DH PARAMETERS-----
MIIBCAKCAQEAzB2nIGzpVq7afJnKBm1X0d64avwOlP2oneiKwxRHdDI/5+6TpH1P
F8ipodGuZBUMmupoB3D34pu2Qq5boNW983sm18ww9LMz2i/pxhSdB+mYAew+A6h6
ltQ5pNtyn4NaKw1SDFkqvde3GNPhaWoPDbZDJhpHGblR3w1b/ag+lTLZUvVwcD8L
jYS9f9YWAC6T7WxAxh4zvu1Z0I1EKde8KYBxrreZNheXpXHqMNyJYZCaY2Hb/4oI
EL65qZq1GCWezpWMjhk6pOnV5gbvqfhoazCv/4OdRv6RoWOIYBNs9BmGho4AtXqV
FYLdYDhOvN4aVs9Ir+G8ouwiRnix24+UewIBAg==
-----END DH PARAMETERS-----

View File

@ -1,6 +1,4 @@
-volumes:
-  nginx_conf:
+version: '2'
 services:
   nginx:
     image: nginx
@ -8,17 +6,18 @@ services:
     ports:
       - "80:80"
     volumes:
-      - nginx_conf:/etc/nginx/conf.d:ro
+      - /etc/nginx/conf.d
   dockergen:
-    image: nginxproxy/docker-gen
+    image: jwilder/docker-gen
     command: -notify-sighup nginx -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
+    volumes_from:
+      - nginx
     volumes:
       - /var/run/docker.sock:/tmp/docker.sock:ro
       - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl
-      - nginx_conf:/etc/nginx/conf.d
   whoami:
     image: jwilder/whoami
     environment:
-      - VIRTUAL_HOST=whoami.example
+      - VIRTUAL_HOST=whoami.local

View File

@ -1,16 +1,14 @@
+version: '2'
 services:
   nginx-proxy:
-    image: nginxproxy/nginx-proxy
+    image: jwilder/nginx-proxy
     container_name: nginx-proxy
     ports:
       - "80:80"
     volumes:
       - /var/run/docker.sock:/tmp/docker.sock:ro
-    # if you want to proxy based on host ports, you'll want to use the host network
-    # network_mode: "host"
   whoami:
     image: jwilder/whoami
     environment:
-      - VIRTUAL_HOST=whoami.example
+      - VIRTUAL_HOST=whoami.local

docker-entrypoint.sh (new executable file, 34 lines)
View File

@ -0,0 +1,34 @@
#!/bin/bash
set -e
# Warn if the DOCKER_HOST socket does not exist
if [[ $DOCKER_HOST = unix://* ]]; then
socket_file=${DOCKER_HOST#unix://}
if ! [ -S $socket_file ]; then
cat >&2 <<-EOT
ERROR: you need to share your Docker host socket with a volume at $socket_file
Typically you should run your jwilder/nginx-proxy with: \`-v /var/run/docker.sock:$socket_file:ro\`
See the documentation at http://git.io/vZaGJ
EOT
socketMissing=1
fi
fi
# Generate dhparam file if required
# Note: if $DHPARAM_BITS is not defined, generate-dhparam.sh will use 2048 as a default
# Note2: if $DHPARAM_GENERATION is set to false in environment variable, dh param generator will skip completely
/app/generate-dhparam.sh $DHPARAM_BITS $DHPARAM_GENERATION
# Compute the DNS resolvers for use in the templates - if the IP contains ":", it's IPv6 and must be enclosed in []
export RESOLVERS=$(awk '$1 == "nameserver" {print ($2 ~ ":")? "["$2"]": $2}' ORS=' ' /etc/resolv.conf | sed 's/ *$//g')
if [ "x$RESOLVERS" = "x" ]; then
echo "Warning: unable to determine DNS resolvers for nginx" >&2
unset RESOLVERS
fi
# If the user has run the default command and the socket doesn't exist, fail
if [ "$socketMissing" = 1 -a "$1" = forego -a "$2" = start -a "$3" = '-r' ]; then
exit 1
fi
exec "$@"
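The `RESOLVERS` export above packs a dense awk/sed one-liner. Purely as an illustration of the parsing rule it implements (take every `nameserver` entry from `/etc/resolv.conf`, wrap IPv6 addresses in brackets, join them with spaces), here is a rough Python equivalent; it is not part of the image:

```python
# Illustrative only: a rough Python equivalent of the RESOLVERS one-liner above.
# The real entrypoint uses awk/sed; this sketch just spells out the parsing rule.
def compute_resolvers(resolv_conf_text: str) -> str:
    resolvers = []
    for line in resolv_conf_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            address = fields[1]
            # IPv6 addresses contain ":" and must be wrapped in brackets for nginx's resolver directive
            resolvers.append(f"[{address}]" if ":" in address else address)
    return " ".join(resolvers)

print(compute_resolvers("nameserver 127.0.0.11\nnameserver fd00::1\n"))
# 127.0.0.11 [fd00::1]
```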

File diff suppressed because it is too large

52
generate-dhparam.sh Executable file
View File

@ -0,0 +1,52 @@
#!/bin/bash -e
# The first argument is the bit depth of the dhparam, or 2048 if unspecified
DHPARAM_BITS=${1:-2048}
GENERATE_DHPARAM=${2:-true}
# If a dhparam file is not available, use the pre-generated one and generate a new one in the background.
# Note that /etc/nginx/dhparam is a volume, so this dhparam will persist restarts.
PREGEN_DHPARAM_FILE="/app/dhparam.pem.default"
DHPARAM_FILE="/etc/nginx/dhparam/dhparam.pem"
GEN_LOCKFILE="/tmp/dhparam_generating.lock"
# The hash of the pregenerated dhparam file is used to check if the pregen dhparam is already in use
PREGEN_HASH=$(md5sum $PREGEN_DHPARAM_FILE | cut -d" " -f1)
if [[ -f $DHPARAM_FILE ]]; then
CURRENT_HASH=$(md5sum $DHPARAM_FILE | cut -d" " -f1)
if [[ $PREGEN_HASH != $CURRENT_HASH ]]; then
# There is already a dhparam, and it's not the default
echo "Custom dhparam.pem file found, generation skipped"
exit 0
fi
if [[ -f $GEN_LOCKFILE ]]; then
# Generation is already in progress
exit 0
fi
fi
if [[ $GENERATE_DHPARAM =~ ^[Ff][Aa][Ll][Ss][Ee]$ ]]; then
echo "Skipping Diffie-Hellman parameters generation and Ignoring pre-generated dhparam.pem"
exit 0
fi
cat >&2 <<-EOT
WARNING: $DHPARAM_FILE was not found. A pre-generated dhparam.pem will be used for now while a new one
is being generated in the background. Once the new dhparam.pem is in place, nginx will be reloaded.
EOT
# Put the default dhparam file in place so we can start immediately
cp $PREGEN_DHPARAM_FILE $DHPARAM_FILE
touch $GEN_LOCKFILE
# Generate a new dhparam in the background in a low priority and reload nginx when finished (grep removes the progress indicator).
(
(
nice -n +5 openssl dhparam -out $DHPARAM_FILE.tmp $DHPARAM_BITS 2>&1 \
&& mv $DHPARAM_FILE.tmp $DHPARAM_FILE \
&& echo "dhparam generation complete, reloading nginx" \
&& nginx -s reload
) | grep -vE '^[\.+]+'
rm $GEN_LOCKFILE
) &disown
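In short, a new dhparam is only generated when there is no `dhparam.pem` yet or when the one in place is still the pre-generated default, which the script detects by comparing MD5 hashes (a lock file additionally prevents concurrent generation). The following is an illustrative Python sketch of that decision only, not code shipped in the image; the paths simply mirror the script above:

```python
# Illustrative sketch of the decision made by generate-dhparam.sh above.
# Not shipped in the image; the paths mirror the script.
import hashlib
from pathlib import Path

PREGEN_DHPARAM_FILE = Path("/app/dhparam.pem.default")
DHPARAM_FILE = Path("/etc/nginx/dhparam/dhparam.pem")

def md5_of(path: Path) -> str:
    return hashlib.md5(path.read_bytes()).hexdigest()

def should_generate_dhparam() -> bool:
    if not DHPARAM_FILE.is_file():
        return True  # nothing in place yet: copy the default and generate a real one in the background
    # a custom dhparam (hash differs from the pre-generated default) is kept as-is
    return md5_of(DHPARAM_FILE) == md5_of(PREGEN_DHPARAM_FILE)
```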

View File

@ -3,5 +3,4 @@ allow 127.0.0.0/8;
allow 10.0.0.0/8; allow 10.0.0.0/8;
allow 192.168.0.0/16; allow 192.168.0.0/16;
allow 172.16.0.0/12; allow 172.16.0.0/12;
allow fc00::/7; # IPv6 local address range
deny all; deny all;

1263
nginx.tmpl

File diff suppressed because it is too large

View File

@ -4,18 +4,25 @@ Nginx proxy test suite
Install requirements Install requirements
-------------------- --------------------
You need [Docker Compose v2](https://docs.docker.com/compose/install/linux/), [python 3.9](https://www.python.org/) and [pip](https://pip.pypa.io/en/stable/installation/) installed. Then run the commands: You need [python 2.7](https://www.python.org/) and [pip](https://pip.pypa.io/en/stable/installing/) installed. Then run the commands:
requirements/build.sh
pip install -r requirements/python-requirements.txt pip install -r requirements/python-requirements.txt
If you can't install those requirements on your computer, you can alternatively use the _pytest.sh_ script, which runs the tests from a Docker container that already has them.
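For example, to run a single test module through that container (invoked from the `test` directory):

    ./pytest.sh test_nominal.py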
Prepare the nginx-proxy test image Prepare the nginx-proxy test image
---------------------------------- ----------------------------------
make build-nginx-proxy-test-debian docker build -t jwilder/nginx-proxy:test ..
or if you want to test the alpine flavor: or if you want to test the alpine flavor:
make build-nginx-proxy-test-alpine docker build -t jwilder/nginx-proxy:test -f Dockerfile.alpine ..
make sure to tag that test image exactly `jwilder/nginx-proxy:test` or the test suite won't work.
Run the test suite Run the test suite
------------------ ------------------
@ -26,30 +33,17 @@ need more verbosity ?
pytest -s pytest -s
Note: By default this test suite relies on Docker Compose v2 with the command `docker compose`. It still supports Docker Compose v1 via the `DOCKER_COMPOSE` environment variable:
DOCKER_COMPOSE=docker-compose pytest
Run one single test module Run one single test module
-------------------------- --------------------------
pytest test_nominal.py pytest test_nominal.py
Run the test suite from a Docker container
------------------------------------------
If you cannot (or don't want to) install pytest and its requirements on your computer, you can use the nginx-proxy-tester docker image to run the test suite from a Docker container.
make test-debian
or if you want to test the alpine flavor:
make test-alpine
Write a test module Write a test module
------------------- -------------------
This test suite uses [pytest](http://doc.pytest.org/en/latest/). The [conftest.py](conftest.py) file will be automatically loaded by pytest and will provide you with two useful pytest [fixtures](https://docs.pytest.org/en/latest/explanation/fixtures.html): This test suite uses [pytest](http://doc.pytest.org/en/latest/). The [conftest.py](conftest.py) file will be automatically loaded by pytest and will provide you with two useful pytest [fixtures](http://doc.pytest.org/en/latest/fixture.html#fixture):
- docker_compose - docker_compose
- nginxproxy - nginxproxy
@ -57,50 +51,21 @@ This test suite uses [pytest](http://doc.pytest.org/en/latest/). The [conftest.p
### docker_compose fixture ### docker_compose fixture
When using the `docker_compose` fixture in a test, pytest will try to start the [Docker Compose](https://docs.docker.com/compose/) services corresponding to the current test module, based on the test module filename. When using the `docker_compose` fixture in a test, pytest will try to find a yml file named after your test module filename. For instance, if your test module is `test_example.py`, then the `docker_compose` fixture will try to load a `test_example.yml` [docker compose file](https://docs.docker.com/compose/compose-file/).
By default, if your test module file is `test/test_subdir/test_example.py`, then the `docker_compose` fixture will try to load the following files, [merging them](https://docs.docker.com/reference/compose-file/merge/) in this order: Once the docker compose file is found, the fixture will remove all containers, run `docker-compose up`, and finally your test will be executed.
1. `test/compose.base.yml` The fixture will run the _docker-compose_ command with the `-f` option to load the given compose file. So you can test your docker compose file syntax by running it yourself with:
2. `test/test_subdir/compose.base.override.yml` (if it exists)
3. `test/test_subdir/test_example.yml`
The fixture will run the _docker compose_ command with the `-f` option to load the given compose files. So you can test your docker compose file syntax by running it yourself with: docker-compose -f test_example.yml up -d
docker compose -f test/compose.base.yml -f test/test_subdir/test_example.yml up -d
The first file contains the base configuration of the nginx-proxy container common to most tests:
```yaml
services:
nginx-proxy:
image: nginxproxy/nginx-proxy:test
container_name: nginx-proxy
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
ports:
- "80:80"
- "443:443"
```
The second optional file allows you to override this base configuration for all test modules in a subfolder.
The third file contains the services and overrides specific to a given test module.
This automatic merge can be bypassed by using a file named `test_example.base.yml` (instead of `test_example.yml`). When this file exists, it will be the only one used by the test and no merge with other compose files will occur.
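The `docker_compose_files` fixture itself can also be overridden from a test module to take full control of which compose files are used (its docstring notes this explicitly). A minimal sketch, with a made-up compose file name:

```python
# Hypothetical test module: override the docker_compose_files fixture so this module
# uses its own compose file instead of the default merge.
import pathlib
from typing import List

import pytest


@pytest.fixture
def docker_compose_files(request) -> List[str]:
    # "compose.special.yml" is an example name; point this at whatever your test needs
    return [pathlib.Path(request.module.__file__).parent.joinpath("compose.special.yml").as_posix()]


def test_something(docker_compose, nginxproxy):
    assert nginxproxy.get("http://web.nginx-proxy.example/").status_code == 200
```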
The `docker_compose` fixture also sets the `PYTEST_MODULE_PATH` environment variable to the absolute path of the current test module directory, so it can be used to mount files or directories relative to the current test.
In the case you are running pytest from within a docker container, the `docker_compose` fixture will make sure the container running pytest is attached to all docker networks. That way, your test will be able to reach any of them. In the case you are running pytest from within a docker container, the `docker_compose` fixture will make sure the container running pytest is attached to all docker networks. That way, your test will be able to reach any of them.
In your tests, you can use the `docker_compose` variable to query and command the docker daemon as it provides you with a [client from the docker python module](https://docker-py.readthedocs.io/en/4.4.4/client.html#client-reference). In your tests, you can use the `docker_compose` variable to query and command the docker daemon as it provides you with a [client from the docker python module](https://docker-py.readthedocs.io/en/2.0.2/client.html#client-reference).
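For example (the assertion on the container name relies on `container_name: nginx-proxy` from the base compose file), a test could check the state of the containers the fixture started:

```python
# Hypothetical example: docker_compose is a docker-py client, so a test can inspect
# the containers brought up from the compose files.
def test_proxy_container_is_running(docker_compose):
    running = docker_compose.containers.list(filters={"status": "running"})
    assert "nginx-proxy" in {container.name for container in running}
```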
Also this fixture alters the way the python interpreter resolves domain names to IP addresses in the following ways: Also this fixture alters the way the python interpreter resolves domain names to IP addresses in the following ways:
Any domain name containing the substring `nginx-proxy` will resolve to `127.0.0.1` if the tests are executed on a Darwin (macOS) system, otherwise the IP address of the container that was created from the `nginxproxy/nginx-proxy:test` image. Any domain name containing the substring `nginx-proxy` will resolve to the IP address of the container that was created from the `jwilder/nginx-proxy:test` image. So all the following domain names will resolve to the nginx-proxy container in tests:
So, in tests, all the following domain names will resolve to either localhost or the nginx-proxy container's IP:
- `nginx-proxy` - `nginx-proxy`
- `nginx-proxy.com` - `nginx-proxy.com`
- `www.nginx-proxy.com` - `www.nginx-proxy.com`
@ -109,16 +74,14 @@ So, in tests, all the following domain names will resolve to either localhost or
- `whatever.nginx-proxyooooooo` - `whatever.nginx-proxyooooooo`
- ... - ...
Any domain name ending with `XXX.container.docker` will resolve to `127.0.0.1` if the tests are executed on a Darwin (macOS) system, otherwise the IP address of the container named `XXX`. Any domain name ending with `XXX.container.docker` will resolve to the IP address of the XXX container.
So, on a non-Darwin system:
- `web1.container.docker` will resolve to the IP address of the `web1` container - `web1.container.docker` will resolve to the IP address of the `web1` container
- `f00.web1.container.docker` will resolve to the IP address of the `web1` container - `f00.web1.container.docker` will resolve to the IP address of the `web1` container
- `anything.whatever.web2.container.docker` will resolve to the IP address of the `web2` container - `anything.whatever.web2.container.docker` will resolve to the IP address of the `web2` container
Otherwise, domain names are resolved as usual using your system DNS resolver. Otherwise, domain names are resolved as usual using your system DNS resolver.
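As an illustration, a test can reach a backend container directly through such a name, bypassing nginx-proxy; the container name `web1` and port `81` are assumptions of this sketch (it uses the `nginxproxy` helper described in the next section):

```python
# Hypothetical sketch: *.container.docker names resolve to the container of that name,
# so a backend declared as "web1" in the compose file can be queried directly.
def test_web1_answers_directly(docker_compose, nginxproxy):
    r = nginxproxy.get("http://web1.container.docker:81/")
    assert "I'm " in r.text
```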
### nginxproxy fixture ### nginxproxy fixture
The `nginxproxy` fixture will provide you with a replacement for the python [requests](https://pypi.python.org/pypi/requests/) module. This replacement will simply retry a request up to 30 times if it receives the HTTP error 404 or 502. This error occurs when you try to send queries to nginx-proxy too early after the container creation. The `nginxproxy` fixture will provide you with a replacement for the python [requests](https://pypi.python.org/pypi/requests/) module. This replacement will simply retry a request up to 30 times if it receives the HTTP error 404 or 502. This error occurs when you try to send queries to nginx-proxy too early after the container creation.
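A minimal test using it could look like the following sketch; the virtual host name is an assumption, and `get_conf()` (provided by the same fixture) returns the generated nginx configuration as bytes:

```python
# Hypothetical sketch: nginxproxy behaves like the requests module, retrying on 404/502
# while nginx-proxy reloads; get_conf() exposes the generated nginx configuration.
def test_virtual_host_is_proxied(docker_compose, nginxproxy):
    r = nginxproxy.get("http://whoami.nginx-proxy.example/")
    assert r.status_code == 200
    assert "whoami.nginx-proxy.example" in nginxproxy.get_conf().decode()
```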
@ -136,7 +99,8 @@ Furthermore, the nginxproxy methods accept an additional keyword parameter: `ipv
### The web docker image ### The web docker image
When you run the `make build-webserver` command, you build a [`web`](requirements/README.md) docker image which is convenient for running a small web server in a container. This image can produce containers that listen on multiple ports at the same time. When you ran the `requirements/build.sh` script earlier, you built a [`web`](requirements/README.md) docker image which is convenient for running a small web server in a container. This image can produce containers that listen on multiple ports at the same time.
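As a sketch of how the image's `WEB_PORTS` contract could be exercised from a test through the docker SDK (in the real suite such containers are declared in the per-test compose files; the image name and port values here are assumptions):

```python
# Hypothetical sketch: start a throwaway "web" container listening on two ports via the
# docker-py client; the real test suite declares such containers in per-test compose files.
def test_web_image_accepts_multiple_ports(docker_compose):
    container = docker_compose.containers.run(
        "web",
        environment={"WEB_PORTS": "81 82", "VIRTUAL_HOST": "web.nginx-proxy.example"},
        detach=True,
    )
    try:
        container.reload()
        assert container.status == "running"
    finally:
        container.remove(force=True)
```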
### Testing TLS ### Testing TLS

View File

@ -24,7 +24,7 @@ fi
# Create a nginx container (which conveniently provides the `openssl` command) # Create a nginx container (which conveniently provides the `openssl` command)
############################################################################### ###############################################################################
CONTAINER=$(docker run -d -v $DIR:/work -w /work -e SAN="$ALTERNATE_DOMAINS" nginx:1.27.3) CONTAINER=$(docker run -d -v $DIR:/work -w /work -e SAN="$ALTERNATE_DOMAINS" nginx:1.14.1)
# Configure openssl # Configure openssl
docker exec $CONTAINER bash -c ' docker exec $CONTAINER bash -c '
mkdir -p /ca/{certs,crl,private,newcerts} 2>/dev/null mkdir -p /ca/{certs,crl,private,newcerts} 2>/dev/null

View File

@ -1,9 +0,0 @@
services:
nginx-proxy:
image: nginxproxy/nginx-proxy:test
container_name: nginx-proxy
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
ports:
- "80:80"
- "443:443"

View File

@ -1,45 +1,32 @@
from __future__ import print_function
import contextlib import contextlib
import logging import logging
import os import os
import pathlib
import platform
import re
import shlex import shlex
import socket import socket
import subprocess import subprocess
import time import time
from io import StringIO import re
from typing import Iterator, List, Optional
import backoff import backoff
import docker.errors import docker
import pytest import pytest
import requests import requests
from _pytest.fixtures import FixtureRequest from _pytest._code.code import ReprExceptionInfo
from docker import DockerClient from requests.packages.urllib3.util.connection import HAS_IPV6
from docker.models.containers import Container
from docker.models.networks import Network
from packaging.version import Version
from requests import Response
from urllib3.util.connection import HAS_IPV6
logging.basicConfig(level=logging.INFO) logging.basicConfig(level=logging.INFO)
logging.getLogger('backoff').setLevel(logging.INFO) logging.getLogger('backoff').setLevel(logging.INFO)
logging.getLogger('DNS').setLevel(logging.DEBUG) logging.getLogger('DNS').setLevel(logging.DEBUG)
logging.getLogger('requests.packages.urllib3.connectionpool').setLevel(logging.WARN) logging.getLogger('requests.packages.urllib3.connectionpool').setLevel(logging.WARN)
CA_ROOT_CERTIFICATE = pathlib.Path(__file__).parent.joinpath("certs/ca-root.crt") CA_ROOT_CERTIFICATE = os.path.join(os.path.dirname(__file__), 'certs/ca-root.crt')
PYTEST_RUNNING_IN_CONTAINER = os.environ.get('PYTEST_RUNNING_IN_CONTAINER') == "1" I_AM_RUNNING_INSIDE_A_DOCKER_CONTAINER = os.path.isfile("/.dockerenv")
FORCE_CONTAINER_IPV6 = False # ugly global state to consider containers' IPv6 address instead of IPv4 FORCE_CONTAINER_IPV6 = False # ugly global state to consider containers' IPv6 address instead of IPv4
DOCKER_COMPOSE = os.environ.get('DOCKER_COMPOSE', 'docker compose')
docker_client = docker.from_env() docker_client = docker.from_env()
# Name of pytest container to reference if it's being used for running tests
test_container = 'nginx-proxy-pytest'
############################################################################### ###############################################################################
# #
@ -47,17 +34,16 @@ test_container = 'nginx-proxy-pytest'
# #
############################################################################### ###############################################################################
@contextlib.contextmanager @contextlib.contextmanager
def ipv6(force_ipv6: bool = True): def ipv6(force_ipv6=True):
""" """
Meant to be used as a context manager to force IPv6 sockets: Meant to be used as a context manager to force IPv6 sockets:
with ipv6(): with ipv6():
nginxproxy.get("http://something.nginx-proxy.example") # force use of IPv6 nginxproxy.get("http://something.nginx-proxy.local") # force use of IPv6
with ipv6(False): with ipv6(False):
nginxproxy.get("http://something.nginx-proxy.example") # legacy behavior nginxproxy.get("http://something.nginx-proxy.local") # legacy behavior
""" """
@ -67,90 +53,75 @@ def ipv6(force_ipv6: bool = True):
FORCE_CONTAINER_IPV6 = False FORCE_CONTAINER_IPV6 = False
class RequestsForDocker: class requests_for_docker(object):
""" """
Proxy for calling methods of the requests module. Proxy for calling methods of the requests module.
When an HTTP response failed due to HTTP Error 404 or 502, retry a few times. When a HTTP response failed due to HTTP Error 404 or 502, retry a few times.
Provides method `get_conf` to extract the nginx-proxy configuration content. Provides method `get_conf` to extract the nginx-proxy configuration content.
""" """
def __init__(self): def __init__(self):
self.session = requests.Session() self.session = requests.Session()
if CA_ROOT_CERTIFICATE.is_file(): if os.path.isfile(CA_ROOT_CERTIFICATE):
self.session.verify = CA_ROOT_CERTIFICATE.as_posix() self.session.verify = CA_ROOT_CERTIFICATE
@staticmethod def get_conf(self):
def get_nginx_proxy_container() -> Container:
"""
Return list of containers
"""
nginx_proxy_containers = docker_client.containers.list(filters={"ancestor": "nginxproxy/nginx-proxy:test"})
if len(nginx_proxy_containers) > 1:
pytest.fail("Too many running nginxproxy/nginx-proxy:test containers", pytrace=False)
elif len(nginx_proxy_containers) == 0:
pytest.fail("No running nginxproxy/nginx-proxy:test container", pytrace=False)
return nginx_proxy_containers.pop()
def get_conf(self) -> bytes:
""" """
Return the nginx config file Return the nginx config file
""" """
nginx_proxy_container = self.get_nginx_proxy_container() nginx_proxy_containers = docker_client.containers.list(filters={"ancestor": "jwilder/nginx-proxy:test"})
return get_nginx_conf_from_container(nginx_proxy_container) if len(nginx_proxy_containers) > 1:
pytest.fail("Too many running jwilder/nginx-proxy:test containers", pytrace=False)
elif len(nginx_proxy_containers) == 0:
pytest.fail("No running jwilder/nginx-proxy:test container", pytrace=False)
return get_nginx_conf_from_container(nginx_proxy_containers[0])
def get_ip(self) -> str: def get(self, *args, **kwargs):
"""
Return the nginx container ip address
"""
nginx_proxy_container = self.get_nginx_proxy_container()
return container_ip(nginx_proxy_container)
def get(self, *args, **kwargs) -> Response:
with ipv6(kwargs.pop('ipv6', False)): with ipv6(kwargs.pop('ipv6', False)):
@backoff.on_predicate(backoff.constant, lambda r: r.status_code in (404, 502), interval=.3, max_tries=30, jitter=None) @backoff.on_predicate(backoff.constant, lambda r: r.status_code in (404, 502), interval=.3, max_tries=30, jitter=None)
def _get(*_args, **_kwargs): def _get(*args, **kwargs):
return self.session.get(*_args, **_kwargs) return self.session.get(*args, **kwargs)
return _get(*args, **kwargs) return _get(*args, **kwargs)
def post(self, *args, **kwargs) -> Response: def post(self, *args, **kwargs):
with ipv6(kwargs.pop('ipv6', False)): with ipv6(kwargs.pop('ipv6', False)):
@backoff.on_predicate(backoff.constant, lambda r: r.status_code in (404, 502), interval=.3, max_tries=30, jitter=None) @backoff.on_predicate(backoff.constant, lambda r: r.status_code in (404, 502), interval=.3, max_tries=30, jitter=None)
def _post(*_args, **_kwargs): def _post(*args, **kwargs):
return self.session.post(*_args, **_kwargs) return self.session.post(*args, **kwargs)
return _post(*args, **kwargs) return _post(*args, **kwargs)
def put(self, *args, **kwargs) -> Response: def put(self, *args, **kwargs):
with ipv6(kwargs.pop('ipv6', False)): with ipv6(kwargs.pop('ipv6', False)):
@backoff.on_predicate(backoff.constant, lambda r: r.status_code in (404, 502), interval=.3, max_tries=30, jitter=None) @backoff.on_predicate(backoff.constant, lambda r: r.status_code in (404, 502), interval=.3, max_tries=30, jitter=None)
def _put(*_args, **_kwargs): def _put(*args, **kwargs):
return self.session.put(*_args, **_kwargs) return self.session.put(*args, **kwargs)
return _put(*args, **kwargs) return _put(*args, **kwargs)
def head(self, *args, **kwargs) -> Response: def head(self, *args, **kwargs):
with ipv6(kwargs.pop('ipv6', False)): with ipv6(kwargs.pop('ipv6', False)):
@backoff.on_predicate(backoff.constant, lambda r: r.status_code in (404, 502), interval=.3, max_tries=30, jitter=None) @backoff.on_predicate(backoff.constant, lambda r: r.status_code in (404, 502), interval=.3, max_tries=30, jitter=None)
def _head(*_args, **_kwargs): def _head(*args, **kwargs):
return self.session.head(*_args, **_kwargs) return self.session.head(*args, **kwargs)
return _head(*args, **kwargs) return _head(*args, **kwargs)
def delete(self, *args, **kwargs) -> Response: def delete(self, *args, **kwargs):
with ipv6(kwargs.pop('ipv6', False)): with ipv6(kwargs.pop('ipv6', False)):
@backoff.on_predicate(backoff.constant, lambda r: r.status_code in (404, 502), interval=.3, max_tries=30, jitter=None) @backoff.on_predicate(backoff.constant, lambda r: r.status_code in (404, 502), interval=.3, max_tries=30, jitter=None)
def _delete(*_args, **_kwargs): def _delete(*args, **kwargs):
return self.session.delete(*_args, **_kwargs) return self.session.delete(*args, **kwargs)
return _delete(*args, **kwargs) return _delete(*args, **kwargs)
def options(self, *args, **kwargs) -> Response: def options(self, *args, **kwargs):
with ipv6(kwargs.pop('ipv6', False)): with ipv6(kwargs.pop('ipv6', False)):
@backoff.on_predicate(backoff.constant, lambda r: r.status_code in (404, 502), interval=.3, max_tries=30, jitter=None) @backoff.on_predicate(backoff.constant, lambda r: r.status_code in (404, 502), interval=.3, max_tries=30, jitter=None)
def _options(*_args, **_kwargs): def _options(*args, **kwargs):
return self.session.options(*_args, **_kwargs) return self.session.options(*args, **kwargs)
return _options(*args, **kwargs) return _options(*args, **kwargs)
def __getattr__(self, name): def __getattr__(self, name):
return getattr(requests, name) return getattr(requests, name)
def container_ip(container: Container) -> str: def container_ip(container):
""" """
return the IP address of a container. return the IP address of a container.
@ -162,7 +133,7 @@ def container_ip(container: Container) -> str:
pytest.skip("This system does not support IPv6") pytest.skip("This system does not support IPv6")
ip = container_ipv6(container) ip = container_ipv6(container)
if ip == '': if ip == '':
pytest.skip(f"Container {container.name} has no IPv6 address") pytest.skip("Container %s has no IPv6 address" % container.name)
else: else:
return ip return ip
else: else:
@ -170,16 +141,12 @@ def container_ip(container: Container) -> str:
if "bridge" in net_info: if "bridge" in net_info:
return net_info["bridge"]["IPAddress"] return net_info["bridge"]["IPAddress"]
# container is running in host network mode
if "host" in net_info:
return "127.0.0.1"
# not default bridge network, fallback on first network defined # not default bridge network, fallback on first network defined
network_name = list(net_info.keys())[0] network_name = net_info.keys()[0]
return net_info[network_name]["IPAddress"] return net_info[network_name]["IPAddress"]
def container_ipv6(container: Container) -> str: def container_ipv6(container):
""" """
return the IPv6 address of a container. return the IPv6 address of a container.
""" """
@ -187,64 +154,56 @@ def container_ipv6(container: Container) -> str:
if "bridge" in net_info: if "bridge" in net_info:
return net_info["bridge"]["GlobalIPv6Address"] return net_info["bridge"]["GlobalIPv6Address"]
# container is running in host network mode
if "host" in net_info:
return "::1"
# not default bridge network, fallback on first network defined # not default bridge network, fallback on first network defined
network_name = list(net_info.keys())[0] network_name = net_info.keys()[0]
return net_info[network_name]["GlobalIPv6Address"] return net_info[network_name]["GlobalIPv6Address"]
def nginx_proxy_dns_resolver(domain_name: str) -> Optional[str]: def nginx_proxy_dns_resolver(domain_name):
""" """
if "nginx-proxy" is found in host, return the ip address of the docker container if "nginx-proxy" is found in host, return the ip address of the docker container
issued from the docker image nginxproxy/nginx-proxy:test. issued from the docker image jwilder/nginx-proxy:test.
:return: IP or None :return: IP or None
""" """
log = logging.getLogger('DNS') log = logging.getLogger('DNS')
log.debug(f"nginx_proxy_dns_resolver({domain_name!r})") log.debug("nginx_proxy_dns_resolver(%r)" % domain_name)
if 'nginx-proxy' in domain_name: if 'nginx-proxy' in domain_name:
nginxproxy_containers = docker_client.containers.list(filters={"status": "running", "ancestor": "nginxproxy/nginx-proxy:test"}) nginxproxy_containers = docker_client.containers.list(filters={"status": "running", "ancestor": "jwilder/nginx-proxy:test"})
if len(nginxproxy_containers) == 0: if len(nginxproxy_containers) == 0:
log.warning(f"no container found from image nginxproxy/nginx-proxy:test while resolving {domain_name!r}") log.warn("no container found from image jwilder/nginx-proxy:test while resolving %r", domain_name)
exited_nginxproxy_containers = docker_client.containers.list(filters={"status": "exited", "ancestor": "nginxproxy/nginx-proxy:test"}) return
if len(exited_nginxproxy_containers) > 0:
exited_nginxproxy_container_logs = exited_nginxproxy_containers[0].logs()
log.warning(f"nginxproxy/nginx-proxy:test container might have exited unexpectedly. Container logs: " + "\n" + exited_nginxproxy_container_logs.decode())
return None
nginxproxy_container = nginxproxy_containers[0] nginxproxy_container = nginxproxy_containers[0]
ip = container_ip(nginxproxy_container) ip = container_ip(nginxproxy_container)
log.info(f"resolving domain name {domain_name!r} as IP address {ip} of nginx-proxy container {nginxproxy_container.name}") log.info("resolving domain name %r as IP address %s of nginx-proxy container %s" % (domain_name, ip, nginxproxy_container.name))
return ip return ip
def docker_container_dns_resolver(domain_name: str) -> Optional[str]: def docker_container_dns_resolver(domain_name):
""" """
if domain name is of the form "XXX.container.docker" or "anything.XXX.container.docker", if domain name is of the form "XXX.container.docker" or "anything.XXX.container.docker", return the ip address of the docker container
return the ip address of the docker container named XXX. named XXX.
:return: IP or None :return: IP or None
""" """
log = logging.getLogger('DNS') log = logging.getLogger('DNS')
log.debug(f"docker_container_dns_resolver({domain_name!r})") log.debug("docker_container_dns_resolver(%r)" % domain_name)
match = re.search(r'(^|.+\.)(?P<container>[^.]+)\.container\.docker$', domain_name) match = re.search('(^|.+\.)(?P<container>[^.]+)\.container\.docker$', domain_name)
if not match: if not match:
log.debug(f"{domain_name!r} does not match") log.debug("%r does not match" % domain_name)
return None return
container_name = match.group('container') container_name = match.group('container')
log.debug(f"looking for container {container_name!r}") log.debug("looking for container %r" % container_name)
try: try:
container = docker_client.containers.get(container_name) container = docker_client.containers.get(container_name)
except docker.errors.NotFound: except docker.errors.NotFound:
log.warning(f"container named {container_name!r} not found while resolving {domain_name!r}") log.warn("container named %r not found while resolving %r" % (container_name, domain_name))
return None return
log.debug(f"container {container.name!r} found ({container.short_id})") log.debug("container %r found (%s)" % (container.name, container.short_id))
ip = container_ip(container) ip = container_ip(container)
log.info(f"resolving domain name {domain_name!r} as IP address {ip} of container {container.name}") log.info("resolving domain name %r as IP address %s of container %s" % (domain_name, ip, container.name))
return ip return ip
@ -252,28 +211,15 @@ def monkey_patch_urllib_dns_resolver():
""" """
Alter the behavior of the urllib DNS resolver so that any domain name Alter the behavior of the urllib DNS resolver so that any domain name
containing substring 'nginx-proxy' will resolve to the IP address containing substring 'nginx-proxy' will resolve to the IP address
of the container created from image 'nginxproxy/nginx-proxy:test', of the container created from image 'jwilder/nginx-proxy:test'.
or to 127.0.0.1 on Darwin.
see https://docs.docker.com/desktop/features/networking/#i-want-to-connect-to-a-container-from-the-host
""" """
prv_getaddrinfo = socket.getaddrinfo prv_getaddrinfo = socket.getaddrinfo
dns_cache = {} dns_cache = {}
def new_getaddrinfo(*args): def new_getaddrinfo(*args):
logging.getLogger('DNS').debug(f"resolving domain name {repr(args)}") logging.getLogger('DNS').debug("resolving domain name %s" % repr(args))
_args = list(args) _args = list(args)
# Fail early when querying IP directly, and it is forced ipv6 when not supported,
# Otherwise a pytest container not using the host network fails to pass `test_raw-ip-vhost`.
if FORCE_CONTAINER_IPV6 and not HAS_IPV6:
pytest.skip("This system does not support IPv6")
# custom DNS resolvers # custom DNS resolvers
ip = None
# Docker Desktop can't route traffic directly to Linux containers.
if platform.system() == "Darwin":
ip = "127.0.0.1"
if ip is None:
ip = nginx_proxy_dns_resolver(args[0]) ip = nginx_proxy_dns_resolver(args[0])
if ip is None: if ip is None:
ip = docker_container_dns_resolver(args[0]) ip = docker_container_dns_resolver(args[0])
@ -290,275 +236,188 @@ def monkey_patch_urllib_dns_resolver():
socket.getaddrinfo = new_getaddrinfo socket.getaddrinfo = new_getaddrinfo
return prv_getaddrinfo return prv_getaddrinfo
def restore_urllib_dns_resolver(getaddrinfo_func): def restore_urllib_dns_resolver(getaddrinfo_func):
socket.getaddrinfo = getaddrinfo_func socket.getaddrinfo = getaddrinfo_func
def get_nginx_conf_from_container(container: Container) -> bytes: def remove_all_containers():
for container in docker_client.containers.list(all=True):
if I_AM_RUNNING_INSIDE_A_DOCKER_CONTAINER and container.id.startswith(socket.gethostname()):
continue # pytest is running within a Docker container, so we do not want to remove that particular container
logging.info("removing container %s" % container.name)
container.remove(v=True, force=True)
def get_nginx_conf_from_container(container):
""" """
return the nginx /etc/nginx/conf.d/default.conf file content from a container return the nginx /etc/nginx/conf.d/default.conf file content from a container
""" """
import tarfile import tarfile
from io import BytesIO from cStringIO import StringIO
strm, stat = container.get_archive('/etc/nginx/conf.d/default.conf')
strm_generator, stat = container.get_archive('/etc/nginx/conf.d/default.conf') with tarfile.open(fileobj=StringIO(strm.read())) as tf:
strm_fileobj = BytesIO(b"".join(strm_generator))
with tarfile.open(fileobj=strm_fileobj) as tf:
conffile = tf.extractfile('default.conf') conffile = tf.extractfile('default.conf')
return conffile.read() return conffile.read()
def __prepare_and_execute_compose_cmd(compose_files: List[str], project_name: str, cmd: str): def docker_compose_up(compose_file='docker-compose.yml'):
""" logging.info('docker-compose -f %s up -d' % compose_file)
Prepare and execute the Docker Compose command with the provided compose files and project name.
"""
compose_cmd = StringIO()
compose_cmd.write(DOCKER_COMPOSE)
compose_cmd.write(f" --project-name {project_name}")
for compose_file in compose_files:
compose_cmd.write(f" --file {compose_file}")
compose_cmd.write(f" {cmd}")
logging.info(compose_cmd.getvalue())
try: try:
subprocess.check_output(shlex.split(compose_cmd.getvalue()), stderr=subprocess.STDOUT) subprocess.check_output(shlex.split('docker-compose -f %s up -d' % compose_file), stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e: except subprocess.CalledProcessError, e:
pytest.fail(f"Error while running '{compose_cmd.getvalue()}':\n{e.output}", pytrace=False) pytest.fail("Error while runninng 'docker-compose -f %s up -d':\n%s" % (compose_file, e.output), pytrace=False)
def docker_compose_up(compose_files: List[str], project_name: str): def docker_compose_down(compose_file='docker-compose.yml'):
""" logging.info('docker-compose -f %s down' % compose_file)
Execute compose up --detach with the provided compose files and project name. try:
""" subprocess.check_output(shlex.split('docker-compose -f %s down' % compose_file), stderr=subprocess.STDOUT)
if compose_files is None or len(compose_files) == 0: except subprocess.CalledProcessError, e:
pytest.fail(f"No compose file passed to docker_compose_up", pytrace=False) pytest.fail("Error while runninng 'docker-compose -f %s down':\n%s" % (compose_file, e.output), pytrace=False)
__prepare_and_execute_compose_cmd(compose_files, project_name, cmd="up --detach")
def docker_compose_down(compose_files: List[str], project_name: str):
"""
Execute compose down --volumes with the provided compose files and project name.
"""
if compose_files is None or len(compose_files) == 0:
pytest.fail(f"No compose file passed to docker_compose_up", pytrace=False)
__prepare_and_execute_compose_cmd(compose_files, project_name, cmd="down --volumes")
def wait_for_nginxproxy_to_be_ready(): def wait_for_nginxproxy_to_be_ready():
""" """
If one (and only one) container started from image nginxproxy/nginx-proxy:test is found, If one (and only one) container started from image jwilder/nginx-proxy:test is found,
wait for its log to contain substring "Watching docker events" wait for its log to contain substring "Watching docker events"
""" """
containers = docker_client.containers.list(filters={"ancestor": "nginxproxy/nginx-proxy:test"}) containers = docker_client.containers.list(filters={"ancestor": "jwilder/nginx-proxy:test"})
if len(containers) != 1: if len(containers) != 1:
return return
container = containers[0] container = containers[0]
for line in container.logs(stream=True): for line in container.logs(stream=True):
if b"Watching docker events" in line: if "Watching docker events" in line:
logging.debug("nginx-proxy ready") logging.debug("nginx-proxy ready")
break break
def find_docker_compose_file(request):
@pytest.fixture
def docker_compose_files(request: FixtureRequest) -> List[str]:
"""Fixture returning the docker compose files to consider:
If a YAML file exists with the same name as the test module (with the `.py` extension
replaced with `.base.yml`, ie `test_foo.py`-> `test_foo.base.yml`) and in the same
directory as the test module, use only that file.
Otherwise, merge the following files in this order:
- the `compose.base.yml` file in the parent `test` directory.
- if present in the same directory as the test module, the `compose.base.override.yml` file.
- the YAML file named after the current test module (ie `test_foo.py`-> `test_foo.yml`)
Tests can override this fixture to specify a custom location.
""" """
compose_files: List[str] = [] helper for fixture functions to figure out the name of the docker-compose file to consider.
test_module_path = pathlib.Path(request.module.__file__).parent
module_base_file = test_module_path.joinpath(f"{request.module.__name__}.base.yml") - if the test module provides a `docker_compose_file` variable, take that
if module_base_file.is_file(): - else, if a yaml file exists with the same name as the test module (but for the `.yml` extension), use that
return [module_base_file.as_posix()] - otherwise use `docker-compose.yml`.
"""
test_module_dir = os.path.dirname(request.module.__file__)
yml_file = os.path.join(test_module_dir, request.module.__name__ + '.yml')
yaml_file = os.path.join(test_module_dir, request.module.__name__ + '.yaml')
default_file = os.path.join(test_module_dir, 'docker-compose.yml')
global_base_file = test_module_path.parent.joinpath("compose.base.yml") docker_compose_file_module_variable = getattr(request.module, "docker_compose_file", None)
if global_base_file.is_file(): if docker_compose_file_module_variable is not None:
compose_files.append(global_base_file.as_posix()) docker_compose_file = os.path.join( test_module_dir, docker_compose_file_module_variable)
if not os.path.isfile(docker_compose_file):
raise ValueError("docker compose file %r could not be found. Check your test module `docker_compose_file` variable value." % docker_compose_file)
else:
if os.path.isfile(yml_file):
docker_compose_file = yml_file
elif os.path.isfile(yaml_file):
docker_compose_file = yaml_file
else:
docker_compose_file = default_file
module_base_override_file = test_module_path.joinpath("compose.base.override.yml") if not os.path.isfile(docker_compose_file):
if module_base_override_file.is_file(): logging.error("Could not find any docker-compose file named either '{0}.yml', '{0}.yaml' or 'docker-compose.yml'".format(request.module.__name__))
compose_files.append(module_base_override_file.as_posix())
module_compose_file = test_module_path.joinpath(f"{request.module.__name__}.yml") logging.debug("using docker compose file %s" % docker_compose_file)
if module_compose_file.is_file(): return docker_compose_file
compose_files.append(module_compose_file.as_posix())
if not module_base_file.is_file() and not module_compose_file.is_file():
logging.error(
f"Could not find any docker compose file named '{module_base_file.name}' or '{module_compose_file.name}'"
)
logging.debug(f"using docker compose files {compose_files}")
return compose_files
def connect_to_network(network: Network) -> Optional[Network]: def connect_to_network(network):
""" """
If we are running from a container, connect our container to the given network If we are running from a container, connect our container to the given network
:return: the name of the network we were connected to, or None :return: the name of the network we were connected to, or None
""" """
if PYTEST_RUNNING_IN_CONTAINER: if I_AM_RUNNING_INSIDE_A_DOCKER_CONTAINER:
try: try:
my_container = docker_client.containers.get(test_container) my_container = docker_client.containers.get(socket.gethostname())
except docker.errors.NotFound: except docker.errors.NotFound:
logging.warning(f"container {test_container} not found") logging.warn("container %r not found" % socket.gethostname())
return None return
# figure out our container networks # figure out our container networks
my_networks = list(my_container.attrs["NetworkSettings"]["Networks"].keys()) my_networks = my_container.attrs["NetworkSettings"]["Networks"].keys()
# If the pytest container is using host networking, it cannot connect to container networks (not required with host network) # make sure our container is connected to the nginx-proxy's network
if 'host' in my_networks: if network not in my_networks:
return None logging.info("Connecting to docker network: %s" % network.name)
# Make sure our container is connected to the nginx-proxy's network,
# but avoid connecting to `none` network (not valid) with `test_server-down` tests
if network.name not in my_networks and network.name != 'none':
logging.info(f"Connecting to docker network: {network.name}")
network.connect(my_container) network.connect(my_container)
return network return network
def disconnect_from_network(network: Network = None): def disconnect_from_network(network=None):
""" """
If we are running from a container, disconnect our container from the given network. If we are running from a container, disconnect our container from the given network.
:param network: name of a docker network to disconnect from :param network: name of a docker network to disconnect from
""" """
if PYTEST_RUNNING_IN_CONTAINER and network is not None: if I_AM_RUNNING_INSIDE_A_DOCKER_CONTAINER and network is not None:
try: try:
my_container = docker_client.containers.get(test_container) my_container = docker_client.containers.get(socket.gethostname())
except docker.errors.NotFound: except docker.errors.NotFound:
logging.warning(f"container {test_container} not found") logging.warn("container %r not found" % socket.gethostname())
return return
# figure out our container networks # figure out our container networks
my_networks_names = list(my_container.attrs["NetworkSettings"]["Networks"].keys()) my_networks_names = my_container.attrs["NetworkSettings"]["Networks"].keys()
# disconnect our container from the given network # disconnect our container from the given network
if network.name in my_networks_names: if network.name in my_networks_names:
logging.info(f"Disconnecting from network {network.name}") logging.info("Disconnecting from network %s" % network.name)
network.disconnect(my_container) network.disconnect(my_container)
def connect_to_all_networks() -> List[Network]: def connect_to_all_networks():
""" """
If we are running from a container, connect our container to all current docker networks. If we are running from a container, connect our container to all current docker networks.
:return: a list of networks we connected to :return: a list of networks we connected to
""" """
if not PYTEST_RUNNING_IN_CONTAINER: if not I_AM_RUNNING_INSIDE_A_DOCKER_CONTAINER:
return [] return []
else: else:
# find the list of docker networks # find the list of docker networks
networks = [network for network in docker_client.networks.list(greedy=True) if len(network.containers) > 0 and network.name != 'bridge'] networks = filter(lambda network: len(network.containers) > 0 and network.name != 'bridge', docker_client.networks.list())
return [connect_to_network(network) for network in networks] return [connect_to_network(network) for network in networks]
class DockerComposer(contextlib.AbstractContextManager):
def __init__(self):
self._networks = None
self._docker_compose_files = None
self._project_name = None
def __exit__(self, *exc_info):
self._down()
def _down(self):
if self._docker_compose_files is None:
return
for network in self._networks:
disconnect_from_network(network)
docker_compose_down(self._docker_compose_files, self._project_name)
self._docker_compose_file = None
self._project_name = None
def compose(self, docker_compose_files: List[str], project_name: str):
if docker_compose_files == self._docker_compose_files and project_name == self._project_name:
return
self._down()
if docker_compose_files is None or project_name is None:
return
docker_compose_up(docker_compose_files, project_name)
self._networks = connect_to_all_networks()
wait_for_nginxproxy_to_be_ready()
time.sleep(3) # give time to containers to be ready
self._docker_compose_files = docker_compose_files
self._project_name = project_name
############################################################################### ###############################################################################
# #
# Py.test fixtures # Py.test fixtures
# #
############################################################################### ###############################################################################
@pytest.yield_fixture(scope="module")
def docker_compose(request):
"""
pytest fixture providing containers described in a docker compose file. After the tests, remove the created containers
@pytest.fixture(scope="module") A custom docker compose file name can be defined in a variable named `docker_compose_file`.
def docker_composer() -> Iterator[DockerComposer]:
with DockerComposer() as d:
yield d
Also, in the case where pytest is running from a docker container, this fixture makes sure
@pytest.fixture our container will be attached to all the docker networks.
def ca_root_certificate() -> str: """
return CA_ROOT_CERTIFICATE.as_posix() docker_compose_file = find_docker_compose_file(request)
@pytest.fixture
def monkey_patched_dns():
original_dns_resolver = monkey_patch_urllib_dns_resolver() original_dns_resolver = monkey_patch_urllib_dns_resolver()
yield remove_all_containers()
docker_compose_up(docker_compose_file)
networks = connect_to_all_networks()
wait_for_nginxproxy_to_be_ready()
time.sleep(3) # give time to containers to be ready
yield docker_client
for network in networks:
disconnect_from_network(network)
docker_compose_down(docker_compose_file)
restore_urllib_dns_resolver(original_dns_resolver) restore_urllib_dns_resolver(original_dns_resolver)
@pytest.fixture @pytest.yield_fixture()
def docker_compose( def nginxproxy():
request: FixtureRequest,
monkeypatch,
monkey_patched_dns,
docker_composer,
docker_compose_files
) -> Iterator[DockerClient]:
"""
Ensures containers necessary for the test module are started in a compose project,
and set the environment variable `PYTEST_MODULE_PATH` to the test module's parent folder.
A list of custom docker compose files path can be specified by overriding
the `docker_compose_file` fixture.
Also, in the case where pytest is running from a docker container, this fixture
makes sure our container will be attached to all the docker networks.
"""
pytest_module_path = pathlib.Path(request.module.__file__).parent
monkeypatch.setenv("PYTEST_MODULE_PATH", pytest_module_path.as_posix())
project_name = request.module.__name__
docker_composer.compose(docker_compose_files, project_name)
yield docker_client
@pytest.fixture
def nginxproxy() -> Iterator[RequestsForDocker]:
""" """
Provides the `nginxproxy` object that can be used in the same way the requests module is: Provides the `nginxproxy` object that can be used in the same way the requests module is:
r = nginxproxy.get("https://foo.com") r = nginxproxy.get("http://foo.com")
The difference is that in case an HTTP requests has status code 404 or 502 (which mostly The difference is that in case an HTTP requests has status code 404 or 502 (which mostly
indicates that nginx has just reloaded), we retry up to 30 times the query. indicates that nginx has just reloaded), we retry up to 30 times the query.
@ -567,29 +426,23 @@ def nginxproxy() -> Iterator[RequestsForDocker]:
made against containers to use the containers IPv6 address when set to `True`. If IPv6 is not made against containers to use the containers IPv6 address when set to `True`. If IPv6 is not
supported by the system or docker, that particular test will be skipped. supported by the system or docker, that particular test will be skipped.
""" """
yield RequestsForDocker() yield requests_for_docker()
@pytest.fixture
def acme_challenge_path() -> str:
"""
Provides fake Let's Encrypt ACME challenge path used in certain tests
"""
return ".well-known/acme-challenge/test-filename"
############################################################################### ###############################################################################
# #
# Py.test hooks # Py.test hooks
# #
############################################################################### ###############################################################################
# pytest hook to display additional stuff in test report # pytest hook to display additionnal stuff in test report
def pytest_runtest_logreport(report): def pytest_runtest_logreport(report):
if report.failed: if report.failed:
test_containers = docker_client.containers.list(all=True, filters={"ancestor": "nginxproxy/nginx-proxy:test"}) if isinstance(report.longrepr, ReprExceptionInfo):
test_containers = docker_client.containers.list(all=True, filters={"ancestor": "jwilder/nginx-proxy:test"})
for container in test_containers: for container in test_containers:
report.longrepr.addsection('nginx-proxy logs', container.logs().decode()) report.longrepr.addsection('nginx-proxy logs', container.logs())
report.longrepr.addsection('nginx-proxy conf', get_nginx_conf_from_container(container).decode()) report.longrepr.addsection('nginx-proxy conf', get_nginx_conf_from_container(container))
# Py.test `incremental` marker, see http://stackoverflow.com/a/12579625/107049 # Py.test `incremental` marker, see http://stackoverflow.com/a/12579625/107049
@ -603,7 +456,7 @@ def pytest_runtest_makereport(item, call):
def pytest_runtest_setup(item): def pytest_runtest_setup(item):
previousfailed = getattr(item.parent, "_previousfailed", None) previousfailed = getattr(item.parent, "_previousfailed", None)
if previousfailed is not None: if previousfailed is not None:
pytest.xfail(f"previous test failed ({previousfailed.name})") pytest.xfail("previous test failed (%s)" % previousfailed.name)
############################################################################### ###############################################################################
# #
@ -612,9 +465,9 @@ def pytest_runtest_setup(item):
############################################################################### ###############################################################################
try: try:
docker_client.images.get('nginxproxy/nginx-proxy:test') docker_client.images.get('jwilder/nginx-proxy:test')
except docker.errors.ImageNotFound: except docker.errors.ImageNotFound:
pytest.exit("The docker image 'nginxproxy/nginx-proxy:test' is missing") pytest.exit("The docker image 'jwilder/nginx-proxy:test' is missing")
if Version(docker.__version__) < Version("7.0.0"): if docker.__version__ != "2.1.0":
pytest.exit("This test suite is meant to work with the python docker module v7.0.0 or later") pytest.exit("This test suite is meant to work with the python docker module v2.1.0")

8
test/lib/ssl/dhparam.pem Normal file
View File

@ -0,0 +1,8 @@
-----BEGIN DH PARAMETERS-----
MIIBCAKCAQEA1cae6HqPSgicEuAuSCf6Ii3d6qMX9Ta8lnwoX0JQ0CWK7mzaiiIi
dY7oHmc4cq0S3SH+g0tdLP9yqygFS9hdUGINwS2VV6poj2/vdL/dUshegyxpEH58
nofCPnFDeKkcPDMYAlGS8zjp60TsBkRJKcrxxwnjod1Q5mWuMN5KH3sxs842udKH
0nHFE9kKW/NfXb+EGsjpocGpf786cGuCO2d00THsoItOEcM9/aI8DX1QcyxAHR6D
HaYTFJnyyx8Q44u27M15idI4pbNoKORlotiuOwCTGYCfbN14aOV+Ict7aSF8FWpP
48j9SMNuIu2DlF9pNLo6fsrOjYY3c9X12wIBAg==
-----END DH PARAMETERS-----

View File

@ -1,5 +1,3 @@
[pytest] [pytest]
# disable the creation of the `.cache` folders # disable the creation of the `.cache` folders
addopts = -p no:cacheprovider --ignore=requirements --ignore=certs --color=yes -v addopts = -p no:cacheprovider --ignore=requirements --ignore=certs -r s -v
markers =
incremental: mark a test as incremental.

View File

@ -1,28 +1,24 @@
#!/bin/sh #!/bin/bash
############################################################################### ###############################################################################
# # # #
# This script is meant to run the test suite from a Docker container. # # This script is meant to run the test suite from a Docker container. #
# # # #
# This is useful when you want to run the test suite from Mac or # # This is usefull when you want to run the test suite from Mac or #
# Docker Toolbox. # # Docker Toolbox. #
# # # #
############################################################################### ###############################################################################
# Returns the absolute directory path to this script DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
TESTDIR=$(cd "${0%/*}" && pwd) || exit 1 ARGS="$@"
DIR=$(cd "${TESTDIR}/.." && pwd) || exit 1
# check requirements # check requirements
echo "> Building nginx-proxy-tester image..." echo "> Building nginx-proxy-tester image..."
docker build --pull -t nginx-proxy-tester \ docker build -t nginx-proxy-tester -f $DIR/requirements/Dockerfile-nginx-proxy-tester $DIR/requirements
-f "${TESTDIR}/requirements/Dockerfile-nginx-proxy-tester" \
"${TESTDIR}/requirements" \
|| exit 1
# run the nginx-proxy-tester container setting the correct value for the working dir # run the nginx-proxy-tester container setting the correct value for the working dir in order for
# in order for docker compose to work properly when run from within that container. # docker-compose to work properly when run from within that container.
exec docker run --rm -it --name "nginx-proxy-pytest" \ exec docker run --rm -it \
--volume "/var/run/docker.sock:/var/run/docker.sock" \ -v ${DIR}:/${DIR} \
--volume "${DIR}:${DIR}" \ -w ${DIR} \
--workdir "${TESTDIR}" \ -v /var/run/docker.sock:/var/run/docker.sock \
nginx-proxy-tester "$@" nginx-proxy-tester ${ARGS}

View File

@ -1,35 +1,10 @@
FROM python:3.12 FROM python:2.7-alpine
ENV PYTEST_RUNNING_IN_CONTAINER=1 # Note: we're using alpine because it has openssl 1.0.2, which we need for testing
RUN apk add --update bash openssl curl && rm -rf /var/cache/apk/*
COPY python-requirements.txt /requirements.txt COPY python-requirements.txt /requirements.txt
RUN pip install -r /requirements.txt RUN pip install -r /requirements.txt
# Add Docker's official GPG key
RUN apt-get update \
&& apt-get install -y \
ca-certificates \
curl \
&& install -m 0755 -d /etc/apt/keyrings \
&& curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc \
&& chmod a+r /etc/apt/keyrings/docker.asc
# Add the Docker repository to Apt sources
RUN echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install docker-ce-cli and docker-compose-plugin requirements for Pytest docker_compose fixture
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
docker-ce-cli \
docker-compose-plugin \
&& apt-get clean \
&& rm -r /var/lib/apt/lists/*
# Check if docker compose is available
RUN docker compose version
WORKDIR /test WORKDIR /test
ENTRYPOINT ["pytest"] ENTRYPOINT ["pytest"]

View File

@ -2,7 +2,7 @@ This directory contains resources to build Docker images tests depend on
# Build images # Build images
make build-webserver ./build.sh
# python-requirements.txt # python-requirements.txt

6
test/requirements/build.sh Executable file
View File

@ -0,0 +1,6 @@
#!/bin/bash
set -e
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
docker build -t web $DIR/web

View File

@ -1,6 +1,5 @@
backoff==2.2.1 backoff==1.3.2
docker==7.1.0 docker-compose==1.11.2
packaging==24.2 docker==2.1.0
pytest==8.3.4 pytest==3.0.5
requests==2.32.3 requests==2.11.1
urllib3==2.3.0

View File

@ -1,7 +1,6 @@
# Docker Image running one (or multiple) webservers listening on all given ports from WEB_PORTS environment variable # Docker Image running one (or multiple) webservers listening on all given ports from WEB_PORTS environment variable
FROM python:3-alpine FROM python:3
RUN apk add --no-cache bash
COPY ./webserver.py / COPY ./webserver.py /
COPY ./entrypoint.sh / COPY ./entrypoint.sh /
WORKDIR /opt WORKDIR /opt

View File

@ -5,11 +5,11 @@ trap '[ ${#PIDS[@]} -gt 0 ] && kill -TERM ${PIDS[@]}' TERM
declare -a PIDS declare -a PIDS
for port in $WEB_PORTS; do for port in $WEB_PORTS; do
echo starting a web server listening on port "$port"; echo starting a web server listening on port $port;
/webserver.py "$port" & /webserver.py $port &
PIDS+=($!) PIDS+=($!)
done done
wait "${PIDS[@]}" wait ${PIDS[@]}
trap - TERM trap - TERM
wait "${PIDS[@]}" wait ${PIDS[@]}

View File

@ -13,13 +13,13 @@ class Handler(http.server.SimpleHTTPRequestHandler):
if self.path == "/headers": if self.path == "/headers":
response_body += self.headers.as_string() response_body += self.headers.as_string()
elif self.path == "/port": elif self.path == "/port":
response_body += f"answer from port {PORT}\n" response_body += "answer from port %s\n" % PORT
elif re.match(r"/status/(\d+)", self.path): elif re.match("/status/(\d+)", self.path):
result = re.match(r"/status/(\d+)", self.path) result = re.match("/status/(\d+)", self.path)
response_code = int(result.group(1)) response_code = int(result.group(1))
response_body += f"answer with response code {response_code}\n" response_body += "answer with response code %s\n" % response_code
elif self.path == "/": elif self.path == "/":
response_body += f"I'm {os.environ['HOSTNAME']}\n" response_body += "I'm %s\n" % os.environ['HOSTNAME']
else: else:
response_body += "No route for this path!\n" response_body += "No route for this path!\n"
response_code = 404 response_code = 404
@ -28,7 +28,7 @@ class Handler(http.server.SimpleHTTPRequestHandler):
self.send_header("Content-Type", "text/plain") self.send_header("Content-Type", "text/plain")
self.end_headers() self.end_headers()
if len(response_body): if (len(response_body)):
self.wfile.write(response_body.encode()) self.wfile.write(response_body.encode())
if __name__ == '__main__': if __name__ == '__main__':
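The routes shown above can be exercised directly; the snippet below is purely illustrative and assumes it runs inside the test suite, where `web1.container.docker` resolves through the monkey-patched resolver and the container listens on port 81:

```python
# Illustrative only: exercising the test webserver's routes from within the test suite.
import requests

base = "http://web1.container.docker:81"
assert "answer from port 81" in requests.get(f"{base}/port").text
assert requests.get(f"{base}/status/404").status_code == 404
assert "Host:" in requests.get(f"{base}/headers").text
```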

View File

@ -0,0 +1 @@
This directory contains tests that showcase scenarios known to break the expected behavior of nginx-proxy.

View File

@ -0,0 +1,5 @@
Test the behavior of nginx-proxy when restarted after deleting a certificate file it was using.
1. nginx-proxy is created with a virtual host having a certificate
1. while nginx-proxy is running, the certificate file is deleted
1. nginx-proxy is then restarted (without removing the container)
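A minimal shell sketch of the steps above, assuming the compose file and certificate fixtures from this test directory (the `reverseproxy` container name and the `tmp_certs` mount come from that compose file; exact paths are illustrative):

```bash
# 1. provide the certificate and start nginx-proxy (the compose file mounts ./tmp_certs as /etc/nginx/certs)
cp certs/web.nginx-proxy.crt certs/web.nginx-proxy.key tmp_certs/
docker-compose up -d

# 2. delete the certificate file while nginx-proxy is running
rm tmp_certs/web.nginx-proxy.crt

# 3. restart the container without removing it
docker-compose restart reverseproxy
```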


@ -0,0 +1,70 @@
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 4096 (0x1000)
Signature Algorithm: sha256WithRSAEncryption
Issuer: O=nginx-proxy test suite, CN=www.nginx-proxy.tld
Validity
Not Before: Feb 17 23:20:54 2017 GMT
Not After : Jul 5 23:20:54 2044 GMT
Subject: CN=web.nginx-proxy
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:b6:27:63:a5:c6:e8:f4:7a:94:0e:cc:a2:62:76:
6d:5d:33:6f:cf:19:fc:e7:e5:bb:0e:0e:d0:7c:4f:
73:4c:48:2b:17:d1:4d:d5:9f:42:08:73:84:54:8c:
86:d2:c5:da:59:01:3f:42:22:e0:36:f0:dc:ab:de:
0a:bd:26:2b:22:13:87:a6:1f:23:ef:0e:99:27:8b:
15:4a:1b:ef:93:c9:6b:91:de:a0:02:0c:62:bb:cc:
56:37:e8:25:92:c3:1f:f1:69:d8:7c:a8:33:e0:89:
ce:14:67:a0:39:77:88:91:e6:a3:07:97:90:22:88:
d0:79:18:63:fb:6f:7e:ee:2b:42:7e:23:f5:e7:da:
e9:ee:6a:fa:96:65:9f:e1:2b:15:49:c8:cd:2d:ce:
86:4f:2c:2a:67:79:bf:41:30:14:cc:f6:0f:14:74:
9e:b6:d3:d0:3b:f0:1b:b8:e8:19:2a:fd:d6:fd:dc:
4b:4e:65:7d:9b:bf:37:7e:2d:35:22:2e:74:90:ce:
41:35:3d:41:a0:99:db:97:1f:bf:3e:18:3c:48:fb:
da:df:c6:4e:4e:b9:67:b8:10:d5:a5:13:03:c4:b7:
65:e7:aa:f0:14:4b:d3:4d:ea:fe:8f:69:cf:50:21:
63:27:cf:9e:4c:67:15:7b:3f:3b:da:cb:17:80:61:
1e:25
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Alternative Name:
DNS:web.nginx-proxy
Signature Algorithm: sha256WithRSAEncryption
09:31:be:db:4e:b0:b6:68:da:ae:5b:16:51:29:fc:9f:61:b6:
5a:2f:3c:35:ef:67:76:97:b0:34:4e:3b:b4:d6:88:19:4f:84:
2e:73:d3:c0:3a:4c:41:54:6c:bb:67:89:67:ad:25:55:d7:d4:
80:fe:a7:3f:3d:9e:f1:34:96:d8:da:5a:78:51:c0:63:f1:52:
29:35:55:f4:7d:70:1c:d3:96:62:7f:64:86:81:52:27:c4:c6:
10:13:c6:73:56:4d:32:d0:b3:c3:c8:2c:25:83:e4:2b:1d:d4:
74:30:e5:85:af:2d:b6:a5:6b:fe:5d:d3:3c:00:58:94:f4:6a:
f5:a6:1d:cf:f9:ed:d5:27:ed:13:24:b2:4f:2b:f3:b8:e4:af:
0c:1d:fe:e0:6a:01:5e:a2:44:ff:3e:96:fa:6c:39:a3:51:37:
f3:72:55:d8:2d:29:6e:de:95:b9:d8:e3:1e:65:a5:9c:0d:79:
2d:39:ab:c7:ac:16:b6:a5:71:4b:35:a4:6c:72:47:1b:72:9c:
67:58:c1:fc:f6:7f:a7:73:50:7b:d6:27:57:74:a1:31:38:a7:
31:e3:b9:d4:c9:45:33:ec:ed:16:cf:c5:bd:d0:03:b1:45:3f:
68:0d:91:5c:26:4e:37:05:74:ed:3e:75:5e:ca:5e:ee:e2:51:
4b:da:08:99
-----BEGIN CERTIFICATE-----
MIIC8zCCAdugAwIBAgICEAAwDQYJKoZIhvcNAQELBQAwPzEfMB0GA1UECgwWbmdp
bngtcHJveHkgdGVzdCBzdWl0ZTEcMBoGA1UEAwwTd3d3Lm5naW54LXByb3h5LnRs
ZDAeFw0xNzAyMTcyMzIwNTRaFw00NDA3MDUyMzIwNTRaMBoxGDAWBgNVBAMMD3dl
Yi5uZ2lueC1wcm94eTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALYn
Y6XG6PR6lA7MomJ2bV0zb88Z/Ofluw4O0HxPc0xIKxfRTdWfQghzhFSMhtLF2lkB
P0Ii4Dbw3KveCr0mKyITh6YfI+8OmSeLFUob75PJa5HeoAIMYrvMVjfoJZLDH/Fp
2HyoM+CJzhRnoDl3iJHmoweXkCKI0HkYY/tvfu4rQn4j9efa6e5q+pZln+ErFUnI
zS3Ohk8sKmd5v0EwFMz2DxR0nrbT0DvwG7joGSr91v3cS05lfZu/N34tNSIudJDO
QTU9QaCZ25cfvz4YPEj72t/GTk65Z7gQ1aUTA8S3Zeeq8BRL003q/o9pz1AhYyfP
nkxnFXs/O9rLF4BhHiUCAwEAAaMeMBwwGgYDVR0RBBMwEYIPd2ViLm5naW54LXBy
b3h5MA0GCSqGSIb3DQEBCwUAA4IBAQAJMb7bTrC2aNquWxZRKfyfYbZaLzw172d2
l7A0Tju01ogZT4Quc9PAOkxBVGy7Z4lnrSVV19SA/qc/PZ7xNJbY2lp4UcBj8VIp
NVX0fXAc05Zif2SGgVInxMYQE8ZzVk0y0LPDyCwlg+QrHdR0MOWFry22pWv+XdM8
AFiU9Gr1ph3P+e3VJ+0TJLJPK/O45K8MHf7gagFeokT/Ppb6bDmjUTfzclXYLSlu
3pW52OMeZaWcDXktOavHrBa2pXFLNaRsckcbcpxnWMH89n+nc1B71idXdKExOKcx
47nUyUUz7O0Wz8W90AOxRT9oDZFcJk43BXTtPnVeyl7u4lFL2giZ
-----END CERTIFICATE-----


@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAtidjpcbo9HqUDsyiYnZtXTNvzxn85+W7Dg7QfE9zTEgrF9FN
1Z9CCHOEVIyG0sXaWQE/QiLgNvDcq94KvSYrIhOHph8j7w6ZJ4sVShvvk8lrkd6g
Agxiu8xWN+glksMf8WnYfKgz4InOFGegOXeIkeajB5eQIojQeRhj+29+7itCfiP1
59rp7mr6lmWf4SsVScjNLc6GTywqZ3m/QTAUzPYPFHSettPQO/AbuOgZKv3W/dxL
TmV9m783fi01Ii50kM5BNT1BoJnblx+/Phg8SPva38ZOTrlnuBDVpRMDxLdl56rw
FEvTTer+j2nPUCFjJ8+eTGcVez872ssXgGEeJQIDAQABAoIBAGQCMFW+ZfyEqHGP
rMA+oUEAkqy0agSwPwky3QjDXlxNa0uCYSeebtTRB6CcHxHuCzm+04puN4gyqhW6
rU64fAoTivCMPGBuNWxekmvD9r+/YM4P2u4E+th9EgFT9f0kII+dO30FpKXtQzY0
xuWGWXcxl+T9M+eiEkPKPmq4BoqgTDo5ty7qDv0ZqksGotKFmdYbtSvgBAueJdwu
VWJvenI9F42ExBRKOW1aldiRiaYBCLiCVPKJtOg9iuOP9RHUL1SE8xy5I5mm78g3
a13ji3BNq3yS+VhGjQ7zDy1V1jGupLoJw4I7OThu8hy+B8Vt8EN/iqakufOkjlTN
xTJ33CkCgYEA5Iymg0NTjWk6aEkFa9pERjfUWqdVp9sWSpFFZZgi55n7LOx6ohi3
vuLim3is/gYfK2kU/kHGZZLPnT0Rdx0MbOB4XK0CAUlqtUd0IyO4jMZ06g4/kn3N
e2jLdCCIBoEQuLk4ELxj2mHsLQhEvDrg7nzU2WpTHHhvJbIbDWOAxhsCgYEAzAgv
rKpanF+QDf4yeKHxAj2rrwRksTw4Pe7ZK/bog/i+HIVDA70vMapqftHbual/IRrB
JL7hxskoJ/h9c1w4xkWDjqkSKz8/Ihr4dyPfWyGINWbx/rarT/m5MU5SarScoK7o
Xgb25x+W+61rtI+2JhVRGO86+JiAeT4LkAX88L8CgYAwHHug/jdEeXZWJakCfzwI
HBCT1M3vO+uBXvtg25ndb0i0uENIhDOJ93EEkW65Osis9r34mBgPocwaqZRXosHO
2aH8wF6/rpjL+HK2QvrCh7Rs4Pr494qeA/1wQLjhxaGjgToQK9hJTHvPLwJpLWvU
SGr2Ka+9Oo0LPmb7dorRKQKBgQCLsNcjOodLJMp2KiHYIdfmlt6itzlRd09yZ8Nc
rHHJWVagJEUbnD1hnbHIHlp3pSqbObwfMmlWNoc9xo3tm6hrZ1CJLgx4e5b3/Ms8
ltznge/F0DPDFsH3wZwfu+YFlJ7gDKCfL9l/qEsxCS0CtJobPOEHV1NivNbJK8ey
1ca19QKBgDTdMOUsobAmDEkPQIpxfK1iqYAB7hpRLi79OOhLp23NKeyRNu8FH9fo
G3DZ4xUi6hP2bwiYugMXDyLKfvxbsXwQC84kGF8j+bGazKNhHqEC1OpYwmaTB3kg
qL9cHbjWySeRdIsRY/eWmiKjUwmiO54eAe1HWUdcsuz8yM3xf636
-----END RSA PRIVATE KEY-----


@ -0,0 +1,17 @@
web:
image: web
expose:
- "81"
environment:
WEB_PORTS: 81
VIRTUAL_HOST: web.nginx-proxy
reverseproxy:
image: jwilder/nginx-proxy:test
container_name: reverseproxy
environment:
DEBUG: "true"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./tmp_certs:/etc/nginx/certs:ro


@ -0,0 +1,73 @@
import logging
import os
from os.path import join, isfile
from shutil import copy
from time import sleep
import pytest
from requests import ConnectionError
script_dir = os.path.dirname(__file__)
pytestmark = pytest.mark.xfail() # TODO delete this marker once those issues are fixed
@pytest.yield_fixture(scope="module", autouse=True)
def certs():
"""
pytest fixture that copies the cert and key files into the tmp_certs directory
"""
file_names = ("web.nginx-proxy.crt", "web.nginx-proxy.key")
logging.info("copying server cert and key files into tmp_certs")
for f_name in file_names:
copy(join(script_dir, "certs", f_name), join(script_dir, "tmp_certs"))
yield
logging.info("cleaning up the tmp_cert directory")
for f_name in file_names:
if isfile(join(script_dir, "tmp_certs", f_name)):
os.remove(join(script_dir, "tmp_certs", f_name))
###############################################################################
def test_unknown_virtual_host_is_503(docker_compose, nginxproxy):
r = nginxproxy.get("http://foo.nginx-proxy/")
assert r.status_code == 503
def test_http_web_is_301(docker_compose, nginxproxy):
r = nginxproxy.get("http://web.nginx-proxy/port", allow_redirects=False)
assert r.status_code == 301
def test_https_web_is_200(docker_compose, nginxproxy):
r = nginxproxy.get("https://web.nginx-proxy/port")
assert r.status_code == 200
assert 'answer from port 81\n' in r.text
@pytest.mark.incremental
def test_delete_cert_and_restart_reverseproxy(docker_compose):
os.remove(join(script_dir, "tmp_certs", "web.nginx-proxy.crt"))
docker_compose.containers.get("reverseproxy").restart()
sleep(3) # give time for the container to initialize
assert "running" == docker_compose.containers.get("reverseproxy").status
@pytest.mark.incremental
def test_unknown_virtual_host_is_still_503(nginxproxy):
r = nginxproxy.get("http://foo.nginx-proxy/")
assert r.status_code == 503
@pytest.mark.incremental
def test_http_web_is_now_200(nginxproxy):
r = nginxproxy.get("http://web.nginx-proxy/port", allow_redirects=False)
assert r.status_code == 200
assert "answer from port 81\n" == r.text
@pytest.mark.incremental
def test_https_web_is_now_broken_since_there_is_no_cert(nginxproxy):
with pytest.raises(ConnectionError):
nginxproxy.get("https://web.nginx-proxy/port")


@ -0,0 +1,2 @@
*
!.gitignore


@ -6,7 +6,7 @@ Furthermore, if the nginx-proxy in such state is restarted, the nginx process wi
In the generated nginx config file, we can notice the presence of an empty `upstream {}` block.
This can be fixed by merging [PR-585](https://github.com/nginx-proxy/nginx-proxy/pull/585).
This can be fixed by merging [PR-585](https://github.com/jwilder/nginx-proxy/pull/585).
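One way to observe that state on the running container is to dump and check the generated configuration (the `reverseproxy` container name matches the compose file below; the commands are only an illustration):

```bash
# dump the full generated configuration and look for the empty upstream block
docker exec reverseproxy nginx -T | grep -A 2 upstream

# nginx's own configuration check should then complain that no servers are inside the upstream
docker exec reverseproxy nginx -t
```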
## How to reproduce


@ -1,12 +1,17 @@
version: "2"
networks: networks:
netA: netA:
netB: netB:
services: services:
nginx-proxy: reverseproxy:
container_name: reverseproxy container_name: reverseproxy
networks: networks:
- netA - netA
image: jwilder/nginx-proxy:test
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
webA: webA:
networks: networks:
@ -15,7 +20,7 @@ services:
expose:
- 81
environment:
WEB_PORTS: "81"
WEB_PORTS: 81
VIRTUAL_HOST: webA.nginx-proxy
webB:
@ -25,5 +30,6 @@ services:
expose:
- 82
environment:
WEB_PORTS: "82"
WEB_PORTS: 82
VIRTUAL_HOST: webB.nginx-proxy


@ -3,6 +3,8 @@ from time import sleep
import pytest
import requests
pytestmark = pytest.mark.xfail() # TODO delete this marker once #585 is merged
def test_default_nginx_welcome_page_should_not_be_served(docker_compose, nginxproxy):
r = nginxproxy.get("http://whatever.nginx-proxy/", allow_redirects=False)


@ -1,3 +1,5 @@
import pytest
def test_unknown_virtual_host(docker_compose, nginxproxy):
r = nginxproxy.get("http://nginx-proxy/port")
assert r.status_code == 503


@ -0,0 +1,24 @@
web1:
image: web
expose:
- "81"
environment:
WEB_PORTS: 81
VIRTUAL_HOST: web1.nginx-proxy.tld
web2:
image: web
expose:
- "82"
environment:
WEB_PORTS: 82
VIRTUAL_HOST: web2.nginx-proxy.tld
sut:
image: jwilder/nginx-proxy:test
volumes:
- /var/run/docker.sock:/f00.sock:ro
- ./lib/ssl/dhparam.pem:/etc/nginx/dhparam/dhparam.pem:ro
environment:
DOCKER_HOST: unix:///f00.sock


@ -1,70 +0,0 @@
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 4096 (0x1000)
Signature Algorithm: sha256WithRSAEncryption
Issuer: O=nginx-proxy test suite, CN=www.nginx-proxy.tld
Validity
Not Before: Jan 10 00:08:52 2017 GMT
Not After : May 28 00:08:52 2044 GMT
Subject: CN=*.nginx-proxy.tld
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:cb:45:f4:14:9b:fe:64:85:79:4a:36:8d:3d:d1:
27:d0:7c:36:28:30:e6:73:80:6f:7c:49:23:d0:6c:
17:e4:44:c0:77:4d:9a:c2:bc:24:84:e3:a5:4d:ba:
d2:da:51:7b:a1:2a:12:d4:c0:19:55:69:2c:22:27:
2d:1a:f6:fc:4b:7f:e9:cb:a8:3c:e8:69:b8:d2:4f:
de:4e:50:e2:d0:74:30:7c:42:5a:ae:aa:85:a5:b1:
71:4d:c9:7e:86:8b:62:8c:3e:0d:e3:3b:c3:f5:81:
0b:8c:68:79:fe:bf:10:fb:ae:ec:11:49:6d:64:5e:
1a:7d:b3:92:93:4e:96:19:3a:98:04:a7:66:b2:74:
61:2d:41:13:0c:a4:54:0d:2c:78:fd:b4:a3:e8:37:
78:9a:de:fa:bc:2e:a8:0f:67:14:58:ce:c3:87:d5:
14:0e:8b:29:7d:48:19:b2:a9:f5:b4:e8:af:32:21:
67:15:7e:43:52:8b:20:cf:9f:38:43:bf:fd:c8:24:
7f:52:a3:88:f2:f1:4a:14:91:2a:6e:91:6f:fb:7d:
6a:78:c6:6d:2e:dd:1e:4c:2b:63:bb:3a:43:9c:91:
f9:df:d3:08:13:63:86:7d:ce:e8:46:cf:f1:6c:1f:
ca:f7:4c:de:d8:4b:e0:da:bc:06:d9:87:0f:ff:96:
45:85
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Alternative Name:
DNS:*.nginx-proxy.tld
Signature Algorithm: sha256WithRSAEncryption
6e:a5:0e:e4:d3:cc:d5:b7:fc:34:75:89:4e:98:8c:e7:08:06:
a8:5b:ec:13:7d:83:99:a2:61:b8:d5:12:6e:c5:b4:53:4e:9a:
22:cd:ad:14:30:6a:7d:58:d7:23:d9:a4:2a:96:a0:40:9e:50:
9f:ce:f2:fe:8c:dd:9a:ac:99:39:5b:89:2d:ca:e5:3e:c3:bc:
03:04:1c:12:d9:6e:b8:9f:f0:3a:be:12:44:7e:a4:21:86:73:
af:d5:00:51:3f:2c:56:70:34:8f:26:b0:7f:b0:cf:cf:7f:f9:
40:6f:00:29:c4:cf:c3:b7:c2:49:3d:3f:b0:26:78:87:b9:c7:
6c:1b:aa:6a:1a:dd:c5:eb:f2:69:ba:6d:46:0b:92:49:b5:11:
3c:eb:48:c7:2f:fb:33:a6:6a:82:a2:ab:f8:1e:5f:7d:e3:b7:
f2:fd:f5:88:a5:09:4d:a0:bc:f4:3b:cd:d2:8b:d7:57:1f:86:
3b:d2:3e:a4:92:21:b0:02:0b:e9:e0:c4:1c:f1:78:e2:58:a7:
26:5f:4c:29:c8:23:f0:6e:12:3f:bd:ad:44:7b:0b:bd:db:ba:
63:8d:07:c6:9d:dc:46:cc:63:40:ba:5e:45:82:dd:9a:e5:50:
e8:e7:d7:27:88:fc:6f:1d:8a:e7:5c:49:28:aa:10:29:75:28:
c7:52:de:f9
-----BEGIN CERTIFICATE-----
MIIC9zCCAd+gAwIBAgICEAAwDQYJKoZIhvcNAQELBQAwPzEfMB0GA1UECgwWbmdp
bngtcHJveHkgdGVzdCBzdWl0ZTEcMBoGA1UEAwwTd3d3Lm5naW54LXByb3h5LnRs
ZDAeFw0xNzAxMTAwMDA4NTJaFw00NDA1MjgwMDA4NTJaMBwxGjAYBgNVBAMMESou
bmdpbngtcHJveHkudGxkMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
y0X0FJv+ZIV5SjaNPdEn0Hw2KDDmc4BvfEkj0GwX5ETAd02awrwkhOOlTbrS2lF7
oSoS1MAZVWksIictGvb8S3/py6g86Gm40k/eTlDi0HQwfEJarqqFpbFxTcl+hoti
jD4N4zvD9YELjGh5/r8Q+67sEUltZF4afbOSk06WGTqYBKdmsnRhLUETDKRUDSx4
/bSj6Dd4mt76vC6oD2cUWM7Dh9UUDospfUgZsqn1tOivMiFnFX5DUosgz584Q7/9
yCR/UqOI8vFKFJEqbpFv+31qeMZtLt0eTCtjuzpDnJH539MIE2OGfc7oRs/xbB/K
90ze2Evg2rwG2YcP/5ZFhQIDAQABoyAwHjAcBgNVHREEFTATghEqLm5naW54LXBy
b3h5LnRsZDANBgkqhkiG9w0BAQsFAAOCAQEAbqUO5NPM1bf8NHWJTpiM5wgGqFvs
E32DmaJhuNUSbsW0U06aIs2tFDBqfVjXI9mkKpagQJ5Qn87y/ozdmqyZOVuJLcrl
PsO8AwQcEtluuJ/wOr4SRH6kIYZzr9UAUT8sVnA0jyawf7DPz3/5QG8AKcTPw7fC
ST0/sCZ4h7nHbBuqahrdxevyabptRguSSbURPOtIxy/7M6ZqgqKr+B5ffeO38v31
iKUJTaC89DvN0ovXVx+GO9I+pJIhsAIL6eDEHPF44linJl9MKcgj8G4SP72tRHsL
vdu6Y40Hxp3cRsxjQLpeRYLdmuVQ6OfXJ4j8bx2K51xJKKoQKXUox1Le+Q==
-----END CERTIFICATE-----


@ -1,27 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAy0X0FJv+ZIV5SjaNPdEn0Hw2KDDmc4BvfEkj0GwX5ETAd02a
wrwkhOOlTbrS2lF7oSoS1MAZVWksIictGvb8S3/py6g86Gm40k/eTlDi0HQwfEJa
rqqFpbFxTcl+hotijD4N4zvD9YELjGh5/r8Q+67sEUltZF4afbOSk06WGTqYBKdm
snRhLUETDKRUDSx4/bSj6Dd4mt76vC6oD2cUWM7Dh9UUDospfUgZsqn1tOivMiFn
FX5DUosgz584Q7/9yCR/UqOI8vFKFJEqbpFv+31qeMZtLt0eTCtjuzpDnJH539MI
E2OGfc7oRs/xbB/K90ze2Evg2rwG2YcP/5ZFhQIDAQABAoIBAQCjAro2PNLJMfCO
fyjNRgmzu6iCmpR0U68T8GN0JPsT576g7e8J828l0pkhuIyW33lRSThIvLSUNf9a
dChL032H3lBTLduKVh4NKleQXnVFzaeEPoISSFVdButiAhAhPW4OIUVp0OfY3V+x
fac3j2nDLAfL5SKAtqZv363Py9m66EBYm5BmGTQqT/frQWeCEBvlErQef5RIaU8p
e2zMWgSNNojVai8U3nKNRvYHWeWXM6Ck7lCvkHhMF+RpbmCZuqhbEARVnehU/Jdn
QHJ3nxeA2OWpoWKXvAHtSnno49yxq1UIstiQvY+ng5C5i56UlB60UiU2NJ6doZkB
uQ7/1MaBAoGBAORdcFtgdgRALjXngFWhpCp0CseyUehn1KhxDCG+D1pJ142/ymcf
oJOzKJPMRNDdDUBMnR1GBfy7rmwvYevI/SMNy2Qs7ofcXPbdtwwvTCToZ1V9/54k
VfuPBFT+3QzWRvG1tjTV3E4L2VV3nrl2qNPhE5DlfIaU3nQq5Fl0HprJAoGBAOPf
MWOTGev61CdODO5KN3pLAoamiPs5lEUlz3kM3L1Q52YLITxNDjRj9hWBUATJZOS2
pLOoYRwmhD7vrnimMc41+NuuFX+4T7hWPc8uSuOxX0VijYtULyNRK57mncG1Fq9M
RMLbOJ7FD+8jdXNsSMqpQ+pxLJRX/A10O2fOQnbdAoGAL5hV4YWSM0KZHvz332EI
ER0MXiCJN7HkPZMKH0I4eu3m8hEmAyYxVndBnsQ1F37q0xrkqAQ/HTSUntGlS/og
4Bxw5pkCwegoq/77tpto+ExDtSrEitYx4XMmSPyxX4qNULU5m3tzJgUML+b1etwD
Rd2kMU/TC02dq4KBAy/TbRkCgYAl1xN5iJz+XenLGR/2liZ+TWR+/bqzlU006mF4
pZUmbv/uJxz+yYD5XDwqOA4UrWjuvhG9r9FoflDprp2XdWnB556KxG7XhcDfSJr9
A5/2DadXe1Ur9O/a+oi2228JEsxQkea9QPA3FVxfBtFjOHEiDlez39VaUP4PMeUH
iO3qlQKBgFQhdTb7HeYnApYIDHLmd1PvjRvp8XKR1CpEN0nkw8HpHcT1q1MUjQCr
iT6FQupULEvGmO3frQsgVeRIQDbEdZK3C5xCtn6qOw70sYATVf361BbTtidmU9yV
THFxwDSVLiVZgFryoY/NtAc27sVdJnGsPRjjaeVgALAsLbmZ1K/H
-----END RSA PRIVATE KEY-----


@ -1,6 +0,0 @@
services:
nginx-proxy:
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ${PYTEST_MODULE_PATH}/certs:/etc/nginx/certs:ro
- ${PYTEST_MODULE_PATH}/acme_root:/usr/share/nginx/html:ro


@ -1,27 +0,0 @@
def test_redirect_acme_challenge_location_disabled(docker_compose, nginxproxy, acme_challenge_path):
r = nginxproxy.get(
f"http://web1.nginx-proxy.tld/{acme_challenge_path}",
allow_redirects=False
)
assert r.status_code == 301
def test_redirect_acme_challenge_location_enabled(docker_compose, nginxproxy, acme_challenge_path):
r = nginxproxy.get(
f"http://web2.nginx-proxy.tld/{acme_challenge_path}",
allow_redirects=False
)
assert r.status_code == 200
def test_noredirect_acme_challenge_location_disabled(docker_compose, nginxproxy, acme_challenge_path):
r = nginxproxy.get(
f"http://web3.nginx-proxy.tld/{acme_challenge_path}",
allow_redirects=False
)
assert r.status_code == 404
def test_noredirect_acme_challenge_location_enabled(docker_compose, nginxproxy, acme_challenge_path):
r = nginxproxy.get(
f"http://web4.nginx-proxy.tld/{acme_challenge_path}",
allow_redirects=False
)
assert r.status_code == 200


@ -1,40 +0,0 @@
services:
nginx-proxy:
environment:
ACME_HTTP_CHALLENGE_LOCATION: "false"
web1:
image: web
expose:
- "81"
environment:
WEB_PORTS: "81"
VIRTUAL_HOST: "web1.nginx-proxy.tld"
web2:
image: web
expose:
- "82"
environment:
WEB_PORTS: "82"
VIRTUAL_HOST: "web2.nginx-proxy.tld"
ACME_HTTP_CHALLENGE_LOCATION: "true"
web3:
image: web
expose:
- "83"
environment:
WEB_PORTS: "83"
VIRTUAL_HOST: "web3.nginx-proxy.tld"
HTTPS_METHOD: noredirect
web4:
image: web
expose:
- "84"
environment:
WEB_PORTS: "84"
VIRTUAL_HOST: "web4.nginx-proxy.tld"
HTTPS_METHOD: noredirect
ACME_HTTP_CHALLENGE_LOCATION: "true"


@ -1,27 +0,0 @@
def test_redirect_acme_challenge_location_enabled(docker_compose, nginxproxy, acme_challenge_path):
r = nginxproxy.get(
f"http://web1.nginx-proxy.tld/{acme_challenge_path}",
allow_redirects=False
)
assert r.status_code == 200
def test_redirect_acme_challenge_location_disabled(docker_compose, nginxproxy, acme_challenge_path):
r = nginxproxy.get(
f"http://web2.nginx-proxy.tld/{acme_challenge_path}",
allow_redirects=False
)
assert r.status_code == 301
def test_noredirect_acme_challenge_location_enabled(docker_compose, nginxproxy, acme_challenge_path):
r = nginxproxy.get(
f"http://web3.nginx-proxy.tld/{acme_challenge_path}",
allow_redirects=False
)
assert r.status_code == 200
def test_noredirect_acme_challenge_location_disabled(docker_compose, nginxproxy, acme_challenge_path):
r = nginxproxy.get(
f"http://web4.nginx-proxy.tld/{acme_challenge_path}",
allow_redirects=False
)
assert r.status_code == 404


@ -1,36 +0,0 @@
services:
web1:
image: web
expose:
- "81"
environment:
WEB_PORTS: "81"
VIRTUAL_HOST: "web1.nginx-proxy.tld"
web2:
image: web
expose:
- "82"
environment:
WEB_PORTS: "82"
VIRTUAL_HOST: "web2.nginx-proxy.tld"
ACME_HTTP_CHALLENGE_LOCATION: "false"
web3:
image: web
expose:
- "83"
environment:
WEB_PORTS: "83"
VIRTUAL_HOST: "web3.nginx-proxy.tld"
HTTPS_METHOD: noredirect
web4:
image: web
expose:
- "84"
environment:
WEB_PORTS: "84"
VIRTUAL_HOST: "web4.nginx-proxy.tld"
HTTPS_METHOD: noredirect
ACME_HTTP_CHALLENGE_LOCATION: "false"


@ -1,13 +0,0 @@
def test_redirect_acme_challenge_location_legacy(docker_compose, nginxproxy, acme_challenge_path):
r = nginxproxy.get(
f"http://web1.nginx-proxy.tld/{acme_challenge_path}",
allow_redirects=False
)
assert r.status_code == 200
def test_noredirect_acme_challenge_location_legacy(docker_compose, nginxproxy, acme_challenge_path):
r = nginxproxy.get(
f"http://web2.nginx-proxy.tld/{acme_challenge_path}",
allow_redirects=False
)
assert r.status_code == 404


@ -1,21 +0,0 @@
services:
nginx-proxy:
environment:
ACME_HTTP_CHALLENGE_LOCATION: "legacy"
web1:
image: web
expose:
- "81"
environment:
WEB_PORTS: "81"
VIRTUAL_HOST: "web1.nginx-proxy.tld"
web2:
image: web
expose:
- "82"
environment:
WEB_PORTS: "82"
VIRTUAL_HOST: "web2.nginx-proxy.tld"
HTTPS_METHOD: noredirect


@ -1,66 +0,0 @@
"""
Test that nginx-proxy-tester can build successfully
"""
import pathlib
import re
import docker
import pytest
client = docker.from_env()
@pytest.fixture(scope = "session")
def docker_build(request):
# Define Dockerfile path
current_file_path = pathlib.Path(__file__)
dockerfile_path = current_file_path.parent.parent.joinpath("requirements")
dockerfile_name = "Dockerfile-nginx-proxy-tester"
# Build the Docker image
image, logs = client.images.build(
path = dockerfile_path.as_posix(),
dockerfile = dockerfile_name,
rm = True, # Remove intermediate containers
tag = "nginx-proxy-tester-ci", # Tag for the built image
)
# Check for build success
for log in logs:
if "stream" in log:
print(log["stream"].strip())
if "error" in log:
raise Exception(log["error"])
def teardown():
# Clean up after teardown
client.images.remove(image.id, force=True)
request.addfinalizer(teardown)
# Return the image name
return "nginx-proxy-tester-ci"
def test_build_nginx_proxy_tester(docker_build):
assert docker_build == "nginx-proxy-tester-ci"
def test_run_nginx_proxy_tester(docker_build):
# Run the container with 'pytest -v' command to output version info
container = client.containers.run("nginx-proxy-tester-ci",
command = "pytest -V",
detach = True,
)
# Wait for the container to finish and get the exit code
result = container.wait()
exit_code = result.get("StatusCode", 1) # Default to 1 (error) if not found
# Get the output logs from the container
output = container.logs().decode("utf-8").strip()
# Clean up: Remove the container
container.remove()
# Assertions
assert exit_code == 0, "Container exited with a non-zero exit code"
assert re.search(r"pytest\s\d+\.\d+\.\d+", output)

test/test_composev2.py Normal file

@ -0,0 +1,10 @@
import pytest
def test_unknown_virtual_host(docker_compose, nginxproxy):
r = nginxproxy.get("http://nginx-proxy/")
assert r.status_code == 503
def test_forwards_to_whoami(docker_compose, nginxproxy):
r = nginxproxy.get("http://web.nginx-proxy.local/port")
assert r.status_code == 200
assert r.text == "answer from port 81\n"

test/test_composev2.yml Normal file

@ -0,0 +1,15 @@
version: '2'
services:
nginx-proxy:
image: jwilder/nginx-proxy:test
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./lib/ssl/dhparam.pem:/etc/nginx/dhparam/dhparam.pem:ro
web:
image: web
expose:
- "81"
environment:
WEB_PORTS: 81
VIRTUAL_HOST: web.nginx-proxy.local


@ -1,23 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<title>Maintenance</title>
<style>
html {
color-scheme: light dark;
}
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Damn, there's some maintenance in progress.</h1>
<p>
Our apologies for this temporary inconvenience. Regular service
performance will be re-established shortly.
</p>
</body>
</html>


@ -1,7 +0,0 @@
import re
def test_custom_error_page(docker_compose, nginxproxy):
r = nginxproxy.get("http://unknown.nginx-proxy.tld")
assert r.status_code == 503
assert re.search(r"Damn, there's some maintenance in progress.", r.text)


@ -1,5 +0,0 @@
services:
nginx-proxy:
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ${PYTEST_MODULE_PATH}/50x.html:/usr/share/nginx/html/errors/50x.html:ro


@ -1,17 +1,19 @@
import pytest
def test_custom_default_conf_does_not_apply_to_unknown_vhost(docker_compose, nginxproxy):
r = nginxproxy.get("http://nginx-proxy/")
assert r.status_code == 503
assert "X-test" not in r.headers
def test_custom_default_conf_applies_to_web1(docker_compose, nginxproxy):
r = nginxproxy.get("http://web1.nginx-proxy.example/port")
r = nginxproxy.get("http://web1.nginx-proxy.local/port")
assert r.status_code == 200
assert r.text == "answer from port 81\n"
assert "X-test" in r.headers
assert "f00" == r.headers["X-test"]
def test_custom_default_conf_applies_to_web2(docker_compose, nginxproxy):
r = nginxproxy.get("http://web2.nginx-proxy.example/port")
r = nginxproxy.get("http://web2.nginx-proxy.local/port")
assert r.status_code == 200
assert r.text == "answer from port 82\n"
assert "X-test" in r.headers
@ -19,7 +21,7 @@ def test_custom_default_conf_applies_to_web2(docker_compose, nginxproxy):
def test_custom_default_conf_is_overriden_for_web3(docker_compose, nginxproxy):
r = nginxproxy.get("http://web3.nginx-proxy.example/port")
r = nginxproxy.get("http://web3.nginx-proxy.local/port")
assert r.status_code == 200
assert r.text == "answer from port 83\n"
assert "X-test" in r.headers


@ -1,30 +1,31 @@
services:
nginx-proxy:
nginx-proxy:
image: jwilder/nginx-proxy:test
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ${PYTEST_MODULE_PATH}/my_custom_proxy_settings_f00.conf:/etc/nginx/vhost.d/default_location:ro
- ../lib/ssl/dhparam.pem:/etc/nginx/dhparam/dhparam.pem:ro
- ${PYTEST_MODULE_PATH}/my_custom_proxy_settings_bar.conf:/etc/nginx/vhost.d/web3.nginx-proxy.example_location:ro
- ./my_custom_proxy_settings.conf:/etc/nginx/vhost.d/default_location:ro
- ./my_custom_proxy_settings_bar.conf:/etc/nginx/vhost.d/web3.nginx-proxy.local_location:ro
web1:
image: web
expose:
- "81"
environment:
WEB_PORTS: "81"
WEB_PORTS: 81
VIRTUAL_HOST: web1.nginx-proxy.example
VIRTUAL_HOST: web1.nginx-proxy.local
web2:
image: web
expose:
- "82"
environment:
WEB_PORTS: "82"
WEB_PORTS: 82
VIRTUAL_HOST: web2.nginx-proxy.example
VIRTUAL_HOST: web2.nginx-proxy.local
web3:
image: web
expose:
- "83"
environment:
WEB_PORTS: "83"
WEB_PORTS: 83
VIRTUAL_HOST: web3.nginx-proxy.example
VIRTUAL_HOST: web3.nginx-proxy.local


@ -1,17 +1,19 @@
import pytest
def test_custom_conf_does_not_apply_to_unknown_vhost(docker_compose, nginxproxy):
r = nginxproxy.get("http://nginx-proxy/")
assert r.status_code == 503
assert "X-test" not in r.headers
def test_custom_conf_applies_to_web1(docker_compose, nginxproxy):
r = nginxproxy.get("http://web1.nginx-proxy.example/port")
r = nginxproxy.get("http://web1.nginx-proxy.local/port")
assert r.status_code == 200
assert r.text == "answer from port 81\n"
assert "X-test" in r.headers
assert "f00" == r.headers["X-test"]
def test_custom_conf_applies_to_web2(docker_compose, nginxproxy):
r = nginxproxy.get("http://web2.nginx-proxy.example/port")
r = nginxproxy.get("http://web2.nginx-proxy.local/port")
assert r.status_code == 200
assert r.text == "answer from port 82\n"
assert "X-test" in r.headers


@ -1,21 +1,24 @@
version: '2'
services:
nginx-proxy:
image: jwilder/nginx-proxy:test
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ${PYTEST_MODULE_PATH}/my_custom_proxy_settings_f00.conf:/etc/nginx/proxy.conf:ro
- ../lib/ssl/dhparam.pem:/etc/nginx/dhparam/dhparam.pem:ro
- ./my_custom_proxy_settings.conf:/etc/nginx/proxy.conf:ro
web1:
image: web
expose:
- "81"
environment:
WEB_PORTS: "81"
WEB_PORTS: 81
VIRTUAL_HOST: web1.nginx-proxy.example
VIRTUAL_HOST: web1.nginx-proxy.local
web2:
image: web
expose:
- "82"
environment:
WEB_PORTS: "82"
WEB_PORTS: 82
VIRTUAL_HOST: web2.nginx-proxy.example
VIRTUAL_HOST: web2.nginx-proxy.local


@ -1,27 +1,22 @@
import pytest
def test_custom_conf_does_not_apply_to_unknown_vhost(docker_compose, nginxproxy):
r = nginxproxy.get("http://nginx-proxy/")
assert r.status_code == 503
assert "X-test" not in r.headers
def test_custom_conf_applies_to_web1(docker_compose, nginxproxy):
r = nginxproxy.get("http://web1.nginx-proxy.example/port")
r = nginxproxy.get("http://web1.nginx-proxy.local/port")
assert r.status_code == 200
assert r.text == "answer from port 81\n"
assert "X-test" in r.headers
assert "f00" == r.headers["X-test"]
def test_custom_conf_applies_to_regex(docker_compose, nginxproxy):
r = nginxproxy.get("http://regex.foo.nginx-proxy.example/port")
assert r.status_code == 200
assert r.text == "answer from port 83\n"
assert "X-test" in r.headers
assert "bar" == r.headers["X-test"]
def test_custom_conf_does_not_apply_to_web2(docker_compose, nginxproxy):
r = nginxproxy.get("http://web2.nginx-proxy.example/port")
r = nginxproxy.get("http://web2.nginx-proxy.local/port")
assert r.status_code == 200
assert r.text == "answer from port 82\n"
assert "X-test" not in r.headers
def test_custom_block_is_present_in_nginx_generated_conf(docker_compose, nginxproxy):
assert b"include /etc/nginx/vhost.d/web1.nginx-proxy.example_location;" in nginxproxy.get_conf()
assert "include /etc/nginx/vhost.d/web1.nginx-proxy.local_location;" in nginxproxy.get_conf()


@ -1,30 +1,24 @@
version: '2'
services:
nginx-proxy:
image: jwilder/nginx-proxy:test
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ${PYTEST_MODULE_PATH}/my_custom_proxy_settings_f00.conf:/etc/nginx/vhost.d/web1.nginx-proxy.example_location:ro
- ../lib/ssl/dhparam.pem:/etc/nginx/dhparam/dhparam.pem:ro
- ${PYTEST_MODULE_PATH}/my_custom_proxy_settings_bar.conf:/etc/nginx/vhost.d/561032515ede3ab3a015edfb244608b72409c430_location:ro
- ./my_custom_proxy_settings.conf:/etc/nginx/vhost.d/web1.nginx-proxy.local_location:ro
web1:
image: web
expose:
- "81"
environment:
WEB_PORTS: "81"
WEB_PORTS: 81
VIRTUAL_HOST: web1.nginx-proxy.example
VIRTUAL_HOST: web1.nginx-proxy.local
web2:
image: web
expose:
- "82"
environment:
WEB_PORTS: "82"
WEB_PORTS: 82
VIRTUAL_HOST: web2.nginx-proxy.example
VIRTUAL_HOST: web2.nginx-proxy.local
regex:
image: web
expose:
- "83"
environment:
WEB_PORTS: "83"
VIRTUAL_HOST: ~^regex.*\.nginx-proxy\.example$


@ -1,24 +1,19 @@
import pytest
def test_custom_conf_does_not_apply_to_unknown_vhost(docker_compose, nginxproxy):
r = nginxproxy.get("http://nginx-proxy/")
assert r.status_code == 503
assert "X-test" not in r.headers
def test_custom_conf_applies_to_web1(docker_compose, nginxproxy):
r = nginxproxy.get("http://web1.nginx-proxy.example/port")
r = nginxproxy.get("http://web1.nginx-proxy.local/port")
assert r.status_code == 200
assert r.text == "answer from port 81\n"
assert "X-test" in r.headers
assert "f00" == r.headers["X-test"]
def test_custom_conf_applies_to_regex(docker_compose, nginxproxy):
r = nginxproxy.get("http://regex.foo.nginx-proxy.example/port")
assert r.status_code == 200
assert r.text == "answer from port 83\n"
assert "X-test" in r.headers
assert "bar" == r.headers["X-test"]
def test_custom_conf_does_not_apply_to_web2(docker_compose, nginxproxy):
r = nginxproxy.get("http://web2.nginx-proxy.example/port")
r = nginxproxy.get("http://web2.nginx-proxy.local/port")
assert r.status_code == 200
assert r.text == "answer from port 82\n"
assert "X-test" not in r.headers


@ -1,30 +1,24 @@
version: '2'
services:
nginx-proxy:
image: jwilder/nginx-proxy:test
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ${PYTEST_MODULE_PATH}/my_custom_proxy_settings_f00.conf:/etc/nginx/vhost.d/web1.nginx-proxy.example:ro
- ../lib/ssl/dhparam.pem:/etc/nginx/dhparam/dhparam.pem:ro
- ${PYTEST_MODULE_PATH}/my_custom_proxy_settings_bar.conf:/etc/nginx/vhost.d/561032515ede3ab3a015edfb244608b72409c430:ro
- ./my_custom_proxy_settings.conf:/etc/nginx/vhost.d/web1.nginx-proxy.local:ro
web1:
image: web
expose:
- "81"
environment:
WEB_PORTS: "81"
WEB_PORTS: 81
VIRTUAL_HOST: web1.nginx-proxy.example
VIRTUAL_HOST: web1.nginx-proxy.local
web2:
image: web
expose:
- "82"
environment:
WEB_PORTS: "82"
WEB_PORTS: 82
VIRTUAL_HOST: web2.nginx-proxy.example
VIRTUAL_HOST: web2.nginx-proxy.local
regex:
image: web
expose:
- "83"
environment:
WEB_PORTS: "83"
VIRTUAL_HOST: ~^regex.*\.nginx-proxy\.example$


@ -1,17 +1,19 @@
import pytest
def test_custom_conf_does_not_apply_to_unknown_vhost(docker_compose, nginxproxy):
r = nginxproxy.get("http://nginx-proxy/")
assert r.status_code == 503
assert "X-test" not in r.headers
def test_custom_conf_applies_to_web1(docker_compose, nginxproxy):
r = nginxproxy.get("http://web1.nginx-proxy.example/port")
r = nginxproxy.get("http://web1.nginx-proxy.local/port")
assert r.status_code == 200
assert r.text == "answer from port 81\n"
assert "X-test" in r.headers
assert "f00" == r.headers["X-test"]
def test_custom_conf_applies_to_web2(docker_compose, nginxproxy):
r = nginxproxy.get("http://web2.nginx-proxy.example/port")
r = nginxproxy.get("http://web2.nginx-proxy.local/port")
assert r.status_code == 200
assert r.text == "answer from port 82\n"
assert "X-test" in r.headers


@ -1,21 +1,24 @@
version: '2'
services:
nginx-proxy:
image: jwilder/nginx-proxy:test
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ${PYTEST_MODULE_PATH}/my_custom_proxy_settings_f00.conf:/etc/nginx/conf.d/my_custom_proxy_settings_f00.conf:ro
- ../lib/ssl/dhparam.pem:/etc/nginx/dhparam/dhparam.pem:ro
- ./my_custom_proxy_settings.conf:/etc/nginx/conf.d/my_custom_proxy_settings.conf:ro
web1:
image: web
expose:
- "81"
environment:
WEB_PORTS: "81"
WEB_PORTS: 81
VIRTUAL_HOST: web1.nginx-proxy.example
VIRTUAL_HOST: web1.nginx-proxy.local
web2:
image: web
expose:
- "82"
environment:
WEB_PORTS: "82"
WEB_PORTS: 82
VIRTUAL_HOST: web2.nginx-proxy.example
VIRTUAL_HOST: web2.nginx-proxy.local


@ -1,48 +0,0 @@
import json
import pytest
def test_debug_endpoint_is_enabled_globally(docker_compose, nginxproxy):
r = nginxproxy.get("http://enabled.debug.nginx-proxy.example/nginx-proxy-debug")
assert r.status_code == 200
r = nginxproxy.get("http://stripped.debug.nginx-proxy.example/nginx-proxy-debug")
assert r.status_code == 200
def test_debug_endpoint_response_contains_expected_values(docker_compose, nginxproxy):
r = nginxproxy.get("http://enabled.debug.nginx-proxy.example/nginx-proxy-debug")
assert r.status_code == 200
try:
jsonResponse = json.loads(r.text)
except ValueError as err:
pytest.fail("Failed to parse debug endpoint response as JSON: %s" % err, pytrace=False)
assert jsonResponse["global"]["enable_debug_endpoint"] == "true"
assert jsonResponse["vhost"]["enable_debug_endpoint"] == True
def test_debug_endpoint_paths_stripped_if_response_too_long(docker_compose, nginxproxy):
r = nginxproxy.get("http://stripped.debug.nginx-proxy.example/nginx-proxy-debug")
assert r.status_code == 200
try:
jsonResponse = json.loads(r.text)
except ValueError as err:
pytest.fail("Failed to parse debug endpoint response as JSON: %s" % err, pytrace=False)
if "paths" in jsonResponse["vhost"]:
pytest.fail("Expected paths to be stripped from debug endpoint response", pytrace=False)
assert jsonResponse["warning"] == "Virtual paths configuration for this hostname is too large and has been stripped from response."
def test_debug_endpoint_hostname_replaced_by_warning_if_regexp(docker_compose, nginxproxy):
r = nginxproxy.get("http://regexp.foo.debug.nginx-proxy.example/nginx-proxy-debug")
assert r.status_code == 200
try:
jsonResponse = json.loads(r.text)
except ValueError as err:
pytest.fail("Failed to parse debug endpoint response as JSON: %s" % err, pytrace=False)
assert jsonResponse["vhost"]["hostname"] == "Hostname is a regexp and unsafe to include in the debug response."
def test_debug_endpoint_is_disabled_per_container(docker_compose, nginxproxy):
r = nginxproxy.get("http://disabled.debug.nginx-proxy.example/nginx-proxy-debug")
assert r.status_code == 404


@ -1,59 +0,0 @@
services:
nginx-proxy:
environment:
DEBUG_ENDPOINT: "true"
debug_enabled:
image: web
expose:
- "81"
environment:
WEB_PORTS: "81"
VIRTUAL_HOST: enabled.debug.nginx-proxy.example
debug_stripped:
image: web
expose:
- "82"
environment:
WEB_PORTS: "82"
VIRTUAL_HOST_MULTIPORTS: |-
stripped.debug.nginx-proxy.example:
"/1":
"/2":
"/3":
"/4":
"/5":
"/6":
"/7":
"/8":
"/9":
"/10":
"/11":
"/12":
"/13":
"/14":
"/15":
"/16":
"/17":
"/18":
"/19":
"/20":
debug_regexp:
image: web
expose:
- "84"
environment:
WEB_PORTS: "84"
VIRTUAL_HOST: ~^regexp.*\.debug.nginx-proxy.example
debug_disabled:
image: web
expose:
- "83"
environment:
WEB_PORTS: "83"
VIRTUAL_HOST: disabled.debug.nginx-proxy.example
labels:
com.github.nginx-proxy.nginx-proxy.debug-endpoint: "false"


@ -1,26 +0,0 @@
import json
import pytest
def test_debug_endpoint_is_disabled_globally(docker_compose, nginxproxy):
r = nginxproxy.get("http://disabled1.debug.nginx-proxy.example/nginx-proxy-debug")
assert r.status_code == 404
r = nginxproxy.get("http://disabled2.debug.nginx-proxy.example/nginx-proxy-debug")
assert r.status_code == 404
def test_debug_endpoint_is_enabled_per_container(docker_compose, nginxproxy):
r = nginxproxy.get("http://enabled.debug.nginx-proxy.example/nginx-proxy-debug")
assert r.status_code == 200
def test_debug_endpoint_response_contains_expected_values(docker_compose, nginxproxy):
r = nginxproxy.get("http://enabled.debug.nginx-proxy.example/nginx-proxy-debug")
assert r.status_code == 200
try:
jsonResponse = json.loads(r.text)
except ValueError as err:
pytest.fail("Failed to parse debug endpoint response as JSON:: %s" % err, pytrace=False)
assert jsonResponse["global"]["enable_debug_endpoint"] == "false"
assert jsonResponse["vhost"]["enable_debug_endpoint"] == True


@ -1,27 +0,0 @@
services:
debug_disabled1:
image: web
expose:
- "81"
environment:
WEB_PORTS: "81"
VIRTUAL_HOST: disabled1.debug.nginx-proxy.example
debug_disabled2:
image: web
expose:
- "82"
environment:
WEB_PORTS: "82"
VIRTUAL_HOST: disabled2.debug.nginx-proxy.example
debug_enabled:
image: web
expose:
- "83"
environment:
WEB_PORTS: "83"
VIRTUAL_HOST: enabled.debug.nginx-proxy.example
labels:
com.github.nginx-proxy.nginx-proxy.debug-endpoint: "true"


@ -1,3 +1,6 @@
import pytest
def test_fallback_on_default(docker_compose, nginxproxy):
r = nginxproxy.get("http://unknown.nginx-proxy.tld/port")
assert r.status_code == 200


@ -0,0 +1,18 @@
# GIVEN a webserver with VIRTUAL_HOST set to web1.tld
web1:
image: web
expose:
- "81"
environment:
WEB_PORTS: 81
VIRTUAL_HOST: web1.tld
# WHEN nginx-proxy runs with DEFAULT_HOST set to web1.tld
sut:
image: jwilder/nginx-proxy:test
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./lib/ssl/dhparam.pem:/etc/nginx/dhparam/dhparam.pem:ro
environment:
DEFAULT_HOST: web1.tld


@ -1,12 +0,0 @@
services:
nginx-proxy:
environment:
DEFAULT_HOST: web1.tld
web1:
image: web
expose:
- "81"
environment:
WEB_PORTS: "81"
VIRTUAL_HOST: web1.tld


@ -1,22 +0,0 @@
services:
nginx-proxy:
volumes:
- /var/run/docker.sock:/f00.sock:ro
environment:
DOCKER_HOST: unix:///f00.sock
web1:
image: web
expose:
- "81"
environment:
WEB_PORTS: "81"
VIRTUAL_HOST: web1.nginx-proxy.tld
web2:
image: web
expose:
- "82"
environment:
WEB_PORTS: "82"
VIRTUAL_HOST: web2.nginx-proxy.tld

test/test_dockergen/.gitignore vendored Normal file

@ -0,0 +1 @@
nginx.tmpl


@ -1,27 +0,0 @@
import docker
import pytest
from packaging.version import Version
raw_version = docker.from_env().version()["Version"]
pytestmark = pytest.mark.skipif(
Version(raw_version) < Version("1.13"),
reason="Docker compose syntax v3 requires docker engine v1.13 or later (got {raw_version})"
)
def test_unknown_virtual_host_is_503(docker_compose, nginxproxy):
r = nginxproxy.get("http://unknown.nginx.container.docker/")
assert r.status_code == 503
def test_forwards_to_whoami(docker_compose, nginxproxy):
r = nginxproxy.get("http://whoami.nginx.container.docker/")
assert r.status_code == 200
whoami_container = docker_compose.containers.get("whoami")
assert r.text == f"I'm {whoami_container.id[:12]}\n"
if __name__ == "__main__":
import doctest
doctest.testmod()


@ -0,0 +1,38 @@
import os
import docker
import logging
import pytest
@pytest.yield_fixture(scope="module")
def nginx_tmpl():
"""
pytest fixture which extracts the nginx config template from
the jwilder/nginx-proxy:test image
"""
script_dir = os.path.dirname(__file__)
logging.info("extracting nginx.tmpl from jwilder/nginx-proxy:test")
docker_client = docker.from_env()
print(docker_client.containers.run(
image='jwilder/nginx-proxy:test',
remove=True,
volumes=['{current_dir}:{current_dir}'.format(current_dir=script_dir)],
entrypoint='sh',
command='-xc "cp /app/nginx.tmpl {current_dir} && chmod 777 {current_dir}/nginx.tmpl"'.format(
current_dir=script_dir),
stderr=True))
yield
logging.info("removing nginx.tmpl")
os.remove(os.path.join(script_dir, "nginx.tmpl"))
def test_unknown_virtual_host_is_503(nginx_tmpl, docker_compose, nginxproxy):
r = nginxproxy.get("http://unknown.nginx.container.docker/")
assert r.status_code == 503
def test_forwards_to_whoami(nginx_tmpl, docker_compose, nginxproxy):
r = nginxproxy.get("http://whoami.nginx.container.docker/")
assert r.status_code == 200
whoami_container = docker_compose.containers.get("whoami")
assert r.text == "I'm %s\n" % whoami_container.id[:12]


@ -0,0 +1,27 @@
version: '2'
services:
nginx:
image: nginx
container_name: nginx
volumes:
- /etc/nginx/conf.d
- ../lib/ssl/dhparam.pem:/etc/nginx/dhparam/dhparam.pem:ro
dockergen:
image: jwilder/docker-gen
command: -notify-sighup nginx -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
volumes_from:
- nginx
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl
web:
image: web
container_name: whoami
expose:
- "80"
environment:
WEB_PORTS: 80
VIRTUAL_HOST: whoami.nginx.container.docker


@ -0,0 +1,66 @@
import os
import docker
import logging
import pytest
import re
def versiontuple(v):
"""
>>> versiontuple("1.12.3")
(1, 12, 3)
>>> versiontuple("1.13.0")
(1, 13, 0)
>>> versiontuple("17.03.0-ce")
(17, 3, 0)
>>> versiontuple("17.03.0-ce") < (1, 13)
False
"""
return tuple(map(int, (v.split('-')[0].split("."))))
raw_version = docker.from_env().version()['Version']
pytestmark = pytest.mark.skipif(
versiontuple(raw_version) < (1, 13),
reason="Docker compose syntax v3 requires docker engine v1.13 or later (got %s)" % raw_version)
@pytest.yield_fixture(scope="module")
def nginx_tmpl():
"""
pytest fixture which extracts the nginx config template from
the jwilder/nginx-proxy:test image
"""
script_dir = os.path.dirname(__file__)
logging.info("extracting nginx.tmpl from jwilder/nginx-proxy:test")
docker_client = docker.from_env()
print(docker_client.containers.run(
image='jwilder/nginx-proxy:test',
remove=True,
volumes=['{current_dir}:{current_dir}'.format(current_dir=script_dir)],
entrypoint='sh',
command='-xc "cp /app/nginx.tmpl {current_dir} && chmod 777 {current_dir}/nginx.tmpl"'.format(
current_dir=script_dir),
stderr=True))
yield
logging.info("removing nginx.tmpl")
os.remove(os.path.join(script_dir, "nginx.tmpl"))
def test_unknown_virtual_host_is_503(nginx_tmpl, docker_compose, nginxproxy):
r = nginxproxy.get("http://unknown.nginx.container.docker/")
assert r.status_code == 503
def test_forwards_to_whoami(nginx_tmpl, docker_compose, nginxproxy):
r = nginxproxy.get("http://whoami.nginx.container.docker/")
assert r.status_code == 200
whoami_container = docker_compose.containers.get("whoami")
assert r.text == "I'm %s\n" % whoami_container.id[:12]
if __name__ == '__main__':
import doctest
doctest.testmod()


@ -1,23 +1,18 @@
volumes:
version: '3'
nginx_conf:
services:
nginx-proxy-nginx:
nginx:
image: nginx
container_name: nginx
volumes:
- nginx_conf:/etc/nginx/conf.d:ro
- nginx_conf:/etc/nginx/conf.d
ports:
- ../lib/ssl/dhparam.pem:/etc/nginx/dhparam/dhparam.pem:ro
- "80:80"
- "443:443"
nginx-proxy-dockergen:
dockergen:
image: nginxproxy/docker-gen
image: jwilder/docker-gen
command: -notify-sighup nginx -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ../../nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl
- ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl
- nginx_conf:/etc/nginx/conf.d
web:
@ -26,5 +21,8 @@ services:
expose:
- "80"
environment:
WEB_PORTS: "80"
WEB_PORTS: 80
VIRTUAL_HOST: whoami.nginx.container.docker
volumes:
nginx_conf: {}


@ -1,15 +0,0 @@
def test_nohttp_missing_cert_disabled(docker_compose, nginxproxy):
r = nginxproxy.get("http://nohttp-missing-cert-disabled.nginx-proxy.tld/", allow_redirects=False)
assert r.status_code == 503
def test_nohttp_missing_cert_enabled(docker_compose, nginxproxy):
r = nginxproxy.get("http://nohttp-missing-cert-enabled.nginx-proxy.tld/", allow_redirects=False)
assert r.status_code == 200
def test_redirect_missing_cert_disabled(docker_compose, nginxproxy):
r = nginxproxy.get("http://redirect-missing-cert-disabled.nginx-proxy.tld/", allow_redirects=False)
assert r.status_code == 301
def test_redirect_missing_cert_enabled(docker_compose, nginxproxy):
r = nginxproxy.get("http://redirect-missing-cert-enabled.nginx-proxy.tld/", allow_redirects=False)
assert r.status_code == 200


@ -1,40 +0,0 @@
services:
nginx-proxy:
environment:
ENABLE_HTTP_ON_MISSING_CERT: "false"
nohttp-missing-cert-disabled:
image: web
expose:
- "81"
environment:
WEB_PORTS: "81"
VIRTUAL_HOST: nohttp-missing-cert-disabled.nginx-proxy.tld
HTTPS_METHOD: nohttp
nohttp-missing-cert-enabled:
image: web
expose:
- "82"
environment:
WEB_PORTS: "82"
VIRTUAL_HOST: nohttp-missing-cert-enabled.nginx-proxy.tld
HTTPS_METHOD: nohttp
ENABLE_HTTP_ON_MISSING_CERT: "true"
redirect-missing-cert-disabled:
image: web
expose:
- "83"
environment:
WEB_PORTS: "83"
VIRTUAL_HOST: redirect-missing-cert-disabled.nginx-proxy.tld
redirect-missing-cert-enabled:
image: web
expose:
- "84"
environment:
WEB_PORTS: "84"
VIRTUAL_HOST: redirect-missing-cert-enabled.nginx-proxy.tld
ENABLE_HTTP_ON_MISSING_CERT: "true"

test/test_events.py Normal file

@ -0,0 +1,46 @@
"""
Test that nginx-proxy detects new containers
"""
from time import sleep
import pytest
from docker.errors import NotFound
@pytest.yield_fixture()
def web1(docker_compose):
"""
pytest fixture creating a web container with `VIRTUAL_HOST=web1.nginx-proxy` listening on port 81.
"""
container = docker_compose.containers.run(
name="web1",
image="web",
detach=True,
environment={
"WEB_PORTS": "81",
"VIRTUAL_HOST": "web1.nginx-proxy"
},
ports={"81/tcp": None}
)
sleep(2) # give it some time to initialize and for docker-gen to detect it
yield container
try:
docker_compose.containers.get("web1").remove(force=True)
except NotFound:
pass
def test_nginx_proxy_behavior_when_alone(docker_compose, nginxproxy):
r = nginxproxy.get("http://nginx-proxy/")
assert r.status_code == 503
def test_new_container_is_detected(web1, nginxproxy):
r = nginxproxy.get("http://web1.nginx-proxy/port")
assert r.status_code == 200
assert "answer from port 81\n" == r.text
web1.remove(force=True)
sleep(2)
r = nginxproxy.get("http://web1.nginx-proxy/port")
assert r.status_code == 503

test/test_events.yml Normal file

@ -0,0 +1,5 @@
nginxproxy:
image: jwilder/nginx-proxy:test
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./lib/ssl/dhparam.pem:/etc/nginx/dhparam/dhparam.pem:ro


@ -1,84 +0,0 @@
"""
Test that nginx-proxy detects new containers
"""
from time import sleep
import pytest
from docker.errors import NotFound
@pytest.fixture
def web1(docker_compose):
"""
pytest fixture creating a web container with `VIRTUAL_HOST=web1.nginx-proxy` listening on port 81.
"""
container = docker_compose.containers.run(
name="web1",
image="web",
detach=True,
environment={
"WEB_PORTS": "81",
"VIRTUAL_HOST": "web1.nginx-proxy"
},
ports={"81/tcp": None}
)
docker_compose.networks.get("test_events-net").connect(container)
sleep(2) # give it some time to initialize and for docker-gen to detect it
yield container
try:
docker_compose.containers.get("web1").remove(force=True)
except NotFound:
pass
@pytest.fixture
def web2(docker_compose):
"""
pytest fixture creating a web container with `VIRTUAL_HOST=nginx-proxy`, `VIRTUAL_PATH=/web2/` and `VIRTUAL_DEST=/` listening on port 82.
"""
container = docker_compose.containers.run(
name="web2",
image="web",
detach=True,
environment={
"WEB_PORTS": "82",
"VIRTUAL_HOST": "nginx-proxy",
"VIRTUAL_PATH": "/web2/",
"VIRTUAL_DEST": "/",
},
ports={"82/tcp": None}
)
docker_compose.networks.get("test_events-net").connect(container)
sleep(2) # give it some time to initialize and for docker-gen to detect it
yield container
try:
docker_compose.containers.get("web2").remove(force=True)
except NotFound:
pass
def test_nginx_proxy_behavior_when_alone(docker_compose, nginxproxy):
r = nginxproxy.get("http://nginx-proxy/")
assert r.status_code == 503
def test_new_container_is_detected_vhost(web1, nginxproxy):
r = nginxproxy.get("http://web1.nginx-proxy/port")
assert r.status_code == 200
assert "answer from port 81\n" == r.text
web1.remove(force=True)
sleep(2)
r = nginxproxy.get("http://web1.nginx-proxy/port")
assert r.status_code == 503
def test_new_container_is_detected_vpath(web2, nginxproxy):
r = nginxproxy.get("http://nginx-proxy/web2/port")
assert r.status_code == 200
assert "answer from port 82\n" == r.text
r = nginxproxy.get("http://nginx-proxy/port")
assert r.status_code in [404, 503]
web2.remove(force=True)
sleep(2)
r = nginxproxy.get("http://nginx-proxy/web2/port")
assert r.status_code == 503

Some files were not shown because too many files have changed in this diff.