* build: bump nginx 1.27.0 to 1.27.1
* Update README.md
Link to nginx changelog
Co-authored-by: Nicolas Duchon <nicolas.duchon@gmail.com>
---------
Co-authored-by: Nicolas Duchon <nicolas.duchon@gmail.com>
Say we have two containers:
- `app1` with `HTTPS_METHOD=redirect`
- `app2` with `HTTPS_METHOD=nohttps`
Without this change, the fallback answer to an HTTPS request for an unknown
server would change depending on whether `app1` is up (503) or not
(connection refused). This inconsistency is not wanted.
In case someone doesn't want HTTPS at all, they just have to not bind
port 443.
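A minimal sketch of the scenario above (image names and hostnames are hypothetical; only the `HTTPS_METHOD` values matter):
```console
# hypothetical images, illustrating the two HTTPS_METHOD values
docker run -d -e VIRTUAL_HOST=app1.example.com -e HTTPS_METHOD=redirect app1-image
docker run -d -e VIRTUAL_HOST=app2.example.com -e HTTPS_METHOD=nohttps app2-image
```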
Values:
* `legacy` (default): generate location blocks for the ACME HTTP Challenge,
except when `HTTPS_METHOD=noredirect` or there is no certificate for
the domain
* `true`: generate location blocks for the ACME HTTP Challenge in all cases
* `false`: do not generate location blocks for the ACME HTTP Challenge
This feature is currently needed because acme-companion may generate
the HTTP Challenge configuration even though nginx-proxy has already done so
(see #2465#issuecomment-2136361373).
Also, sometimes a hardcoded ACME challenge location is not wanted because
the challenge validation is not done with acme-companion / Let's Encrypt
and the challenge location is set up differently.
* chore/doc: explicit policy on missing certificate
This doesn't change the current nginx-proxy behavior, but makes explicit
the current HTTPS_METHOD policy on missing certificate.
* fix: bad wording about missing certificate
Co-authored-by: Nicolas Duchon <nicolas.duchon@gmail.com>
* docs: typo in suggestion
---------
Co-authored-by: Nicolas Duchon <nicolas.duchon@gmail.com>
Without this fix, the response of nohttp sites to HTTP requests changes
depending on whether at least one HTTP-enabled site exists:
* no HTTP enabled sites -> connection refused
* at least one HTTP enabled site -> 503
This fix ensures the response is always 503.
So that the Let's Encrypt companion no longer needs to fiddle
with the vhosts' nginx configuration.
When `HTTPS_METHOD=nohttp` and the certificate is missing, enforce nohttp
instead of switching to `HTTPS_METHOD=redirect`.
For containers grouped by identical VIRTUAL_HOST,
those with no VIRTUAL_PATH variable were silently discarded
when at least one container with VIRTUAL_PATH existed.
(See nginx-proxy/nginx-proxy#1504)
Using the variable VIRTUAL_HOST_MULTIPORTS as a dictionary:
  key: hostname
  value: dictionary:
    key: path
    value: struct:
      port
      dest
When the dictionary associated with a hostname is empty, default values
apply:
  path = "/"
  port = default port
  dest = ""
For each path entry, port and dest are optional and are assigned default
values when missing.
Example:
VIRTUAL_HOST_MULTIPORTS: |
  host1.example.org:
    "/":
      port: 8000
    "/somewhere":
      port: 9000
      dest: "/elsewhere"
  host2.example.org:
  host3.example.org:
    "/inner/path":
* Revert "build: update docs and tests with nginx 1.25.4"
This reverts commit d9b1751f97.
* Revert "build: bump nginx from 1.25.3 to 1.25.4"
This reverts commit 1aef017df2.
Introduces 3 new environment variables:
- `LOG_FORMAT`
- `LOG_FORMAT_ESCAPE`
- `LOG_JSON`
`LOG_FORMAT` and `LOG_FORMAT_ESCAPE` default to standard values.
When `LOG_JSON` is set, their defaults change to `{"time_local":"$time_iso8601","client_ip":"$http_x_forwarded_for","remote_addr":"$remote_addr","request":"$request","status":"$status","body_bytes_sent":"$body_bytes_sent","request_time":"$request_time","upstream_response_time":"$upstream_response_time","upstream_addr":"$upstream_addr","http_referrer":"$http_referer","http_user_agent":"$http_user_agent","request_id":"$request_id"}` and `json` respectively.
See `nginx.tmpl` and https://nginx.org/en/docs/http/ngx_http_log_module.html#log_format for details.
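For illustration, enabling the JSON defaults on the proxy container might look like this (a sketch; port mapping and image tag are just examples):
```console
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -e LOG_JSON=true \
  nginxproxy/nginx-proxy
```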
Enable overriding default 'docker compose' command with environment variable
'DOCKER_COMPOSE'. This way docker compose v1 is still supported with:
$ DOCKER_COMPOSE=docker-compose pytest
This is important because people using the Debian-packaged docker compose
are stuck on v1.
Add initial tests
Newlines
Remove unused variable
Co-authored-by: Richard Hansen <rhansen@rhansen.org>
Change comment value
Co-authored-by: Richard Hansen <rhansen@rhansen.org>
add missing services line
Co-authored-by: Richard Hansen <rhansen@rhansen.org>
Use deploy.replicas
Remove details about choosing a load balancing method
Feedback note
Co-authored-by: Nicolas Duchon <nicolas.duchon@gmail.com>
This makes it possible to bring up different compose files for
different tests in the same test module.
This change does not negatively affect performance because the fixture
is a no-op if the docker compose filename is unchanged between tests.
This partially reverts commit 2494e20784
by ignoring any network named "ingress" when searching for a
container's IP address.
That commit was technically a backwards-incompatible change: Some
users use nginx-proxy with Swarm mode even though it is not fully
supported. In such cases nginx-proxy should ignore the `ingress`
network, otherwise nginx will not be able to reach the
server (container-to-container traffic apparently doesn't work over
the Swarm `ingress` network).
The parts of that commit that examine the `SwarmNode` structure are
not reverted here because docker-gen does not currently populate that
structure -- not even when both docker-gen and the service task
container are running on the same manager node.
Before, a fallback http server was created to handle requests for
unknown virtual hosts even when `HTTPS_METHOD=nohttp`. (In this case,
all http vhosts would be unknown.) Likewise, a catch-all fallback
https server was still created even if `HTTPS_METHOD=nohttps`.
Now the fallback servers are created only if needed. This brings the
behavior in line with the documentation and user expectation. It will
also make it easier to implement a planned feature: different servers
on different ports.
Before, if neither the vhost-specific cert nor `default.crt` existed,
nginx-proxy would not create the https vhost. This resulted in nginx
either refusing the connection or serving the wrong vhost depending on
whether there was another https vhost with a certificate.
Now nginx-proxy always creates an https server for a vhost, even if
the vhost-specific certificate and the default certificate are both
missing. When both certs are missing, nginx is given empty
certificate data to make it possible for it to start up without an
error. The empty certificate data causes the user to see a TLS error,
which is much easier to troubleshoot than a connection refused error
or serving the wrong vhost.
In particular, reduce the nesting depth to make it easier to
understand what the code is doing by:
* converting an $O(nm)$ nested loop into two serial $O(n)+O(m)$
loops, and
* consolidating similar nested `if` cases.
to ensure that the version is always the first output line.
Also, always output `# nginx-proxy`, even if the version isn't known.
This makes it easier to find the start of the generated config in the
output of `nginx -T`.
Rationale for eliminating the check to see if the `DEBUG` environment
variable holds a true value:
* The `DEBUG` environment variable might be set on a container (for
purposes specific to that container, not `nginx-proxy`) to a value
that cannot be parsed as a bool, which would break `nginx-proxy`.
* It simplifies the template.
* It eliminates a cold code path.
* It avoids heisenbugs.
* It makes debugging easier for users.
Also delete the debug info tests, as they are fragile and they provide
limited value.
Alternatively, we could avoid collision with the container's use of
the `DEBUG` environment variable by using a container label [1] such
as `com.google.nginx-proxy.nginx-proxy.debug`. I think doing so has
dubious value, especially if we want to attempt backwards
compatibility with the `DEBUG` environment variable.
Fixes #2139
[1] https://docs.docker.com/engine/reference/commandline/run/#-set-metadata-on-container--l---label---label-file
Co-authored-by: Nicolas Duchon <nicolas.duchon@gmail.com>
If header values from a malicious client are passed to the backend
server unchecked and unchanged, the client may be able to subvert
security checks done by the backend server.
Move required files except 'nginx.tmpl' into a local 'app' folder and copy the
folder content into the image.
'nginx.tmpl' should be moved as well, but this would be a breaking change for
configurations with a separate 'docker-gen' container.
This feature allows custom location blocks to be added to the
virtual path based routing. The custom config can be specified for each
container individually.
This commit removes the automatic path stripping and replaces it with a
user configurable environment variable. This can be set individually for
each container.
A test on raw IP addresses doesn't reach the existing IPv6 skip logic; added that to avoid a test failing when only IPv4 is available (eg: standard docker container networks).
Additionally, some other tests set the `none` network, and connecting to it fails because it's not allowed. Preventing that from happening resolves the final failing tests within containerized pytest.
The `network` object would never be in a list of network names (strings), and without the `greedy=True` arg, as the `docker-py` API docs note, the containers will not be part of the results, thus always returning an empty list, which was not intended.
Now the network will properly match the current networks for pytest container, avoiding duplicate connect attempts, and the network list result will actually have containers to count when filtering by length.
When the container runs with host networking instead of the default bridge, the `$HOSTNAME` / `/etc/hostname` reflects that of the host instead of the container ID, which causes the pytest container to get removed accidentally.
Using a container name instead we can more reliably target the container to avoid removing it, should we need to run with host networking instead.
The original `/.dockerenv` approach is no longer valid, and context-wise we're only using this for the test suite, so using an ENV in that container is a better solution.
This addition requires usage of `DEFAULT_HOST` on containers tested to ensure they don't accidentally use `web2` as their default fallback (due to no SNI / `-servername` requested in openssl queries), otherwise they would be testing against the incorrect DH params response.
They could alternatively request an FQDN explicitly as well, instead of relying on implicit fallback/default server selection behaviour.
---
`web2.nginx-proxy.tld.dhparam.pem` is a copy of `ffdhe2048.pem`.
- Use a DRY method instead.
- ENV test changed from 2048-bit to 3072-bit to avoid confusion in a future test that should not be mixed up accidentally with 2048-bit elsewhere.
- Custom DH file test comparison changed to match other comparisons for equality against the expected DH param content.
- Related comments revised, additional comment for context added by the test definition.
- Minor white-space adjustments.
Use YAML anchors for repeated values providing a single source of truth.
I would use the `x-*` convention to store anchors above service containers, but this seems to require a compose config that defines the services (and version?) keys, which this test setup was for some reason not compatible with.
Anonymous volumes are discouraged for reliable persistence.
Users should use named volumes or bind mounts instead. This is a potentially breaking change; users can also use explicit anonymous volumes instead of relying on implicit anonymous volumes.
`nginx-proxy` really should not be creating implicit anonymous volumes as in most cases it is undesirable.
`git blame` reveals this was added in 2014 by jwilder, with a message that implies implicit anonymous volumes were never intended.
- Added a clarifying comment about the DIR command
- Quoted `ARGS` usage required wrapping the `ARGS` assignment in an array to properly expand. This wasn't broken before, but is a required change to keep ShellCheck lint happy.
- Quote-wrapped `DIR` usage; the volume target had an extra `/` before `DIR`, which seems unnecessary as `pwd` should return an absolute path.
- Expanded `docker run` options to long-form.
As this project isn't exactly python focused apart from the test suite, I'll assume other contributors are probably not as experienced with python either. Since this is a rather technical test, the extra comments should help grok the functionality without floundering around with the docs.
When the subprocess raises an exception due to an issue with the command (_eg using the `-CAfile` arg to `openssl` with an invalid path_), the tests would output large walls of text that weren't particularly helpful in troubleshooting the issue. `stderr` was also leaking out in between the test case results in the terminal; this has been resolved by ensuring that output is caught and piped, which keeps it available to python when an exception is raised. Identifying the actual error cause and location is now much nicer.
Updated the output to be plain string content instead of byte strings, this works fine :)
Adds back the ability to avoid using DH params, provided no file was explicitly supplied.
This used to be `DHPARAM_GENERATION=false`, the equivalent is now `DHPARAM_SKIP=1` (default 0). Previous name was no longer appropriate.
Ensures that if a user has explicitly provided their own dhparam file to still output a warning instead of the skip message, since `DHPARAM_SKIP=1` doesn't disable the support in nginx.
- `dhparam_generation` tests are no longer necessary, dropped.
Modified the remaining `dhparam` test to use multiple `nginx-proxy` images to verify correct behavior for different configs.
Tests now cover:
- Default (ffdhe4096) is used.
- Alternative via ENV (ffdhe2048) works correctly.
- Invalid group via ENV (1024-bit) fails.
- Custom DH params provided via file mount works with warning emitted.
---
- `assert_log_contains`: added a `container_name` arg with `nginxproxy` as the default value. This allows multiple nginx-proxy containers to utilize this method instead.
- Extracted out the `openssl` test (_to `negotiate_cipher()`_) and modified it to be a bit more flexible. It now takes a container with optional extra args to pass to `openssl` command called, as well as the `grep` string to match. This made the original test redundant, so I've dropped it.
- Added two methods to use `negotiate_cipher()`: one verifies a DHE cipher suite was negotiated and checks that a DH ephemeral key is also mentioned in the output. The other method verifies the expectation of failing to negotiate a valid cipher if DH params have not been set, while verifying that non-DHE cipher suites can be successfully negotiated.
- Added a `get_env()` method for extracting attached environments on a container. This is useful for verifying invalid `DHPARAM_BITS` values (eg `1024`-bit).
- The original `Server Temp Key` assertion was incorrect: it was expecting a value that is unrelated to DHE cipher suite support (_`X25519` is related to ECDHE_). This is because TLS 1.3 was being negotiated, where you cannot use custom DH params or influence the negotiated cipher through this mechanism. TLS 1.3 does support DH params, but it internally negotiates an RFC 7919 group between server and client instead. Thus, to verify expectations, the connection via `openssl` is made explicitly with TLS 1.2 instead.
- `DHPARAM_FILE` is a local var not intended for overriding via ENV. Clarified that with `local` declaration.
- `FFDHE_GROUP` var uses default assignment (_`:=4096` instead of only substituting with `:-4096`_), so that `DHPARAM_BITS` retains the default 4096 value in subsequent references if no custom size was provided (see the sketch after this list).
- Refactored the conditional statements to only handle early failure conditions, shifting the RFC 7919 support out so it runs after all checks have passed.
- Revised comments.
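A quick illustration of the `:=` vs `:-` distinction referenced above (generic shell, not the actual script):
```console
unset BITS
echo "${BITS:-4096}"   # substitutes 4096 for this expansion only; BITS stays unset
echo "${BITS:=4096}"   # substitutes 4096 AND assigns it to BITS
echo "${BITS}"         # now prints 4096 without needing a default
```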
Implicit URL is unnecessary (_and presently relies on Github redirecting from its original mapped URL_).
Use an explicit URL instead to reduce the guesswork/trust of where the shortener was going to redirect to.
- `==` for string equality since we're using bash `[[ test ]]` already.
- Uppercase `socket_file` variable to be consistent with other internal variables used in the script.
- Convert `[ test ]` to `[[ test ]]` for consistency, improving maintenance. Double-bracket (_not posix compatible_) does not require quoted variables, ShellCheck lint knows this is safe too :)
- `-z` test for `$RESOLVERS` is native syntax to check for empty string value.
- Referenced variables should generally be wrapped like so `"${VAR}"`.
- Variable assignments with string values should be double quotes for content with variables, otherwise use single quotes (_no interpolation_).
- Converted my if statements to use the same style used in the rest of the file.
- While the anonymous VOLUME can be dropped from Dockerfile, the path needs to be valid at run-time, might as well ensure it's available by creating the dhparam folder at build.
- Generation logic no longer necessary, dropped.
- Standardized RFC 7919 groups added (2048, 3072, 4096), with 4096-bit remaining the default size. The DH logic can live in the entrypoint script as well.
- Third-party supplied pre-generated DH params removed as they're not considered trustworthy compared to RFC 7919 groups.
Check that when multiple containers have the same VIRTUAL_HOST and one of
them is unreachable, the resulting `upstream` block has no
`server 127.0.0.1 down;` entry.
The VIRTUAL_PORT environment variable should always be honored,
even when the related port is not exposed.
Fix for nginx-proxy/nginx-proxy#1132.
This commit also adds the DEBUG environment variable, which enables more
verbose comments in the nginx configuration file to help troubleshoot
unreachable containers.
Finally, it also fixes nginx-proxy/nginx-proxy#1105 by defining only one
fallback entry per upstream block.
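For illustration (the image name is hypothetical; only `VIRTUAL_PORT` matters here):
```console
docker run -d \
  -e VIRTUAL_HOST=app.example.com \
  -e VIRTUAL_PORT=8080 \
  my-backend-image
```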
PR #913 added `DHPARAM_GENERATION` as a positional argument to
generate-dhparam.sh. However, since it was the second positional argument,
`DHPARAM_BITS` would also have to be defined or `DHPARAM_GENERATION` would be
read into `DHPARAM_BITS`. This changes the arguments to be inherited variables
which do not depend on order, just declaration.
Also change instances of `GENERATE_DHPARAM` to `DHPARAM_GENERATION` since it's
unnecessary to have another variable. I think `GENERATE_DHPARAM` is actually a
better name (verb vs. noun), but `DHPARAM_GENERATION` is already defined and
may break someone if changed.
Addresses https://github.com/jwilder/nginx-proxy/pull/913#issuecomment-476014691
This commit updates both 'Dockerfile' and 'Dockerfile.alpine' to use
'nginx 1.19.3'. This change was implemented after feedback from @buchdag
to be able to use dependabot.
This commit updates both 'Dockerfile' and 'Dockerfile.alpine' to use
'go 1.15.10' when building the dependencies. This change was implemented
after feedback from @buchdag to be able to use dependabot.
Previously, the Dockerfile downloaded 'docker-gen' and 'forego' binaries
during build time. This caused a problem as it hard-coded the amd64
architecture for the images.
This commit updates both 'Dockerfile' and 'Dockerfile.alpine' to build
the `forego` and `docker-gen` executables from scratch instead of
downloading binaries directly.
This is achieved using multi-stage builds [1]. Two separate stages first
build the binaries, which are then copied over to the final stage.
The advantage of this change is two-fold: First, it enables building
this image on architectures other than amd64. Secondly, it adds trust by
not adding external binaries to the docker image.
This modified version passes the tests on both a Linux desktop (amd64)
and a Raspberry Pi (armv7), with some caveats:
- On armv7, a modified version of the `jwilder/docker-gen` image is
required. See a separate PR at [2].
- The 'test_dhparam_is_generated_if_missing' test fails. This also
doesn't currently pass on master.
[1] https://docs.docker.com/develop/develop-images/multistage-build/ [2]
https://github.com/jwilder/docker-gen/pull/327
Sometimes containers will not be assigned an IP (after reboot or due to misconfiguration). This leads to an incorrect "server <missing ip> down;" line in default.conf and crashes nginx.
@therealgambo provided a fix for this: https://github.com/jwilder/nginx-proxy/issues/845
* master:
Updated nginx to 1.11.6
Travis-CI's apt-get doesn't have --allow-downgrades yet, which is annoying because --force-yes is deprecated
Put --allow-downgrades in the right place
Upgrade docker-engine and allow downgrades
Clarified a couple parts in the README
Comment typo
Added httpoxy test
Updated README to reflect X-Forwarded-Port
Implemented more advanced webserver with routing and request header echoing, added header tests
Honor upstream forwarded port if available
add ssl_session_tickets to default site
Replace "replace" to "trimSuffix"
do not enable HSTS for subdomains
Remove proxy-tier network in favor of the default.
Added X-Forwarded-Port
Add docker-compose file for separate containers.
connect to uWSGI backends
If you have a question or want to request a feature, please **DO NOT SUBMIT** a new issue.
Instead please use the relevant Discussions section's category:
- 🙏 [Ask a question](https://github.com/nginx-proxy/nginx-proxy/discussions/categories/q-a)
- 💡 [Request a feature](https://github.com/nginx-proxy/nginx-proxy/discussions/categories/ideas)
## Bugs
If you are logging a bug, please search the current open issues first to see if there is already a bug opened.
For bugs, the easier you make it to reproduce the issue you see and the more initial information you provide, the easier and faster the bug can be identified and can get fixed.
Please at least provide:
- the exact nginx-proxy version you're using (if using `latest` please make sure it is up to date and provide the version number printed at container startup).
- complete configuration (compose file, command line, etc) of both your nginx-proxy container(s) and proxied containers. You should redact sensitive info if needed but please provide **full** configurations.
If you can provide a script or docker-compose file that reproduces the problems, that is very helpful.
## General advice about `latest`
Do not use the `latest` tag for production setups.
`latest` is nothing more than a convenient default used by Docker if no specific tag is provided; there isn't any strict convention on what goes into this tag across different projects, and it does not carry any promise of stability.
Using `latest` will most certainly put you at risk of experiencing uncontrolled updates to non backward compatible versions (or versions with breaking changes) and makes it harder for maintainers to track which exact version of the container you are experiencing an issue with.
This recommendation stands for pretty much every Docker image in existence, not just nginx-proxy's.
[nginx-proxy on Docker Hub](https://hub.docker.com/r/nginxproxy/nginx-proxy "Click to view the image on Docker Hub")
nginx-proxy sets up a container running nginx and [docker-gen](https://github.com/nginx-proxy/docker-gen). docker-gen generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
See [Automated Nginx Reverse Proxy for Docker](http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/) for why you might want to use this.
### Usage
To run it:
```console
$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
```
Then start any containers you want proxied with an env var `VIRTUAL_HOST=subdomain.yourdomain.com`
The containers being proxied must [expose](https://docs.docker.com/reference/run/#expose-incoming-ports) the port to be proxied, either by using the `EXPOSE` directive in their `Dockerfile` or by using the `--expose` flag to `docker run` or `docker create`.
Provided your DNS is setup to forward foo.bar.com to a host running nginx-proxy, the request will be routed to a container with the VIRTUAL_HOST env var set.
### Docker Compose
```yaml
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  whoami:
    image: jwilder/whoami
    container_name: whoami
    environment:
      - VIRTUAL_HOST=whoami.local
```
```shell
$ docker-compose up
$ curl -H "Host: whoami.local" localhost
I'm 5b129ab83266
```
### Multiple Ports
If your container exposes multiple ports, nginx-proxy will default to the service running on port 80. If you need to specify a different port, you can set a VIRTUAL_PORT env var to select a different one. If your container only exposes one port and it has a VIRTUAL_HOST env var set, that port will be selected.
If you need to support multiple virtual hosts for a container, you can separate each entry with commas. For example, `foo.bar.com,baz.bar.com,bar.com` and each host will be setup the same.
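For example, a backend serving several hostnames could be started like this (image name illustrative):
```console
docker run -d -e VIRTUAL_HOST=foo.bar.com,baz.bar.com,bar.com my-app-image
```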
### Wildcard Hosts
You can also use wildcards at the beginning and the end of host name, like `*.bar.com` or `foo.bar.*`. Or even a regular expression, which can be very useful in conjunction with a wildcard DNS service like [xip.io](http://xip.io), using `~^foo\.bar\..*\.xip\.io` will match `foo.bar.127.0.0.1.xip.io`, `foo.bar.10.0.2.2.xip.io` and all other given IPs. More information about this topic can be found in the nginx documentation about [`server_names`](http://nginx.org/en/docs/http/server_names.html).
### Multiple Networks
With the addition of [overlay networking](https://docs.docker.com/engine/userguide/networking/get-started-overlay/) in Docker 1.9, your `nginx-proxy` container may need to connect to backend containers on multiple networks. By default, if you don't pass the `--net` flag when your `nginx-proxy` container is created, it will only be attached to the default `bridge` network. This means that it will not be able to connect to containers on networks other than `bridge`.
If you want your `nginx-proxy` container to be attached to a different network, you must pass the `--net=my-network` option in your `docker create` or `docker run` command. At the time of this writing, only a single network can be specified at container creation time. To attach to other networks, you can use the `docker network connect` command after your container is created:
```console
$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro \
```
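The attach step described in the text would then look like this (a sketch; the container name matches the `my-nginx-proxy` example below):
```console
$ docker network connect my-other-network my-nginx-proxy
```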
In this example, the `my-nginx-proxy` container will be connected to `my-network` and `my-other-network` and will be able to proxy to other containers attached to those networks.
Provided your DNS is setup to resolve `foo.bar.com` to the host running nginx-proxy, a request to `http://foo.bar.com` will then be routed to a container with the `VIRTUAL_HOST` env var set to `foo.bar.com` (in this case, the **your-proxied-app** container).
The containers being proxied must:
- [expose](https://docs.docker.com/engine/reference/run/#expose-incoming-ports) the port to be proxied, either by using the `EXPOSE` directive in their `Dockerfile` or by using the `--expose` flag to `docker run` or `docker create`.
- share at least one Docker network with the nginx-proxy container: by default, if you don't pass the `--net` flag when your nginx-proxy container is created, it will only be attached to the default bridge network. This means that it will not be able to connect to containers on networks other than bridge.
Note: providing a port number in `VIRTUAL_HOST` isn't supported, please see [virtual ports](https://github.com/nginx-proxy/nginx-proxy/tree/main/docs#virtual-ports) or [custom external HTTP/HTTPS ports](https://github.com/nginx-proxy/nginx-proxy/tree/main/docs#custom-external-httphttps-ports) depending on what you want to achieve.
### Image variants
The nginx-proxy images are available in two flavors.
This image is based on the nginx:mainline image, itself based on the debian slim image.
### SSL Backends
If you would like the reverse proxy to connect to your backend using HTTPS instead of HTTP, set `VIRTUAL_PROTO=https` on the backend container.
### uWSGI Backends
If you would like to connect to a uWSGI backend, set `VIRTUAL_PROTO=uwsgi` on the backend container. Your backend container should then listen on a port rather than a socket and expose that port.
### Default Host
To set the default host for nginx use the env var `DEFAULT_HOST=foo.bar.com` for example
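A sketch of such an invocation (following the run commands shown earlier):
```console
$ docker run -d -p 80:80 -e DEFAULT_HOST=foo.bar.com -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
```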
### Separate Containers
nginx-proxy can also be run as two separate containers using the [jwilder/docker-gen](https://index.docker.io/u/jwilder/docker-gen/)
image and the official [nginx](https://registry.hub.docker.com/_/nginx/) image.
You may want to do this to prevent having the docker socket bound to a publicly exposed container service.
You can demo this pattern with docker-compose:
```console
$ docker-compose --file docker-compose-separate-containers.yml up
$ curl -H "Host: whoami.local" localhost
I'm 5b129ab83266
```
To run nginx proxy as a separate container you'll need to have [nginx.tmpl](https://github.com/jwilder/nginx-proxy/blob/master/nginx.tmpl) on your host system.
Finally, start your containers with `VIRTUAL_HOST` environment variables.
```console
$ docker run -e VIRTUAL_HOST=foo.bar.com ...
```
> [!IMPORTANT]
>
> #### A note on `latest` and `alpine`:
>
> It is not recommended to use the `latest` (`nginxproxy/nginx-proxy`, `nginxproxy/nginx-proxy:latest`) or `alpine` (`nginxproxy/nginx-proxy:alpine`) tag for production setups.
>
> [Those tags point](https://hub.docker.com/r/nginxproxy/nginx-proxy/tags) to the latest commit in the `main` branch. They do not carry any promise of stability, and using them will probably put your nginx-proxy setup at risk of experiencing uncontrolled updates to non backward compatible versions (or versions with breaking changes). You should always specify the version you want to use explicitly to ensure your setup doesn't break when the image is updated.
```console
docker pull nginxproxy/nginx-proxy:1.6
```
### Additional documentation
Please check the [docs section](https://github.com/nginx-proxy/nginx-proxy/tree/main/docs).
### Powered by
### SSL Support
SSL is supported using single host, wildcard and SNI certificates using naming conventions for certificates or optionally specifying a cert name (for SNI) as an environment variable.
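A typical invocation mounting a certificate directory might look like this (a sketch; the host path is illustrative):
```console
$ docker run -d -p 80:80 -p 443:443 \
  -v /path/to/certs:/etc/nginx/certs \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy
```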
The contents of `/path/to/certs` should contain the certificates and private keys for any virtual
hosts in use. The certificate and keys should be named after the virtual host with a `.crt` and
`.key` extension. For example, a container with `VIRTUAL_HOST=foo.bar.com` should have a
`foo.bar.com.crt` and `foo.bar.com.key` file in the certs directory.
If you are running the container in a virtualized environment (Hyper-V, VirtualBox, etc...),
/path/to/certs must exist in that environment or be made accessible to that environment.
By default, Docker is not able to mount directories on the host machine to containers running in a virtual machine.
#### Diffie-Hellman Groups
If you have Diffie-Hellman groups enabled, the files should be named after the virtual host with a
`dhparam` suffix and `.pem` extension. For example, a container with `VIRTUAL_HOST=foo.bar.com`
should have a `foo.bar.com.dhparam.pem` file in the certs directory.
#### Wildcard Certificates
Wildcard certificates and keys should be named after the domain name with a `.crt` and `.key` extension.
For example `VIRTUAL_HOST=foo.bar.com` would use cert name `bar.com.crt` and `bar.com.key`.
#### SNI
If your certificate(s) supports multiple domain names, you can start a container with `CERT_NAME=<name>`
to identify the certificate to be used. For example, a certificate for `*.foo.com` and `*.bar.com`
could be named `shared.crt` and `shared.key`. A container running with `VIRTUAL_HOST=foo.bar.com`
and `CERT_NAME=shared` will then use this shared cert.
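For example (image name hypothetical):
```console
$ docker run -d -e VIRTUAL_HOST=foo.bar.com -e CERT_NAME=shared my-app-image
```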
#### How SSL Support Works
The SSL cipher configuration is based on [mozilla nginx intermediate profile](https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx) which
should provide compatibility with clients back to Firefox 1, Chrome 1, IE 7, Opera 5, Safari 1,
Windows XP IE8, Android 2.3, Java 7. The configuration also enables HSTS, and SSL
session caches.
The default behavior for the proxy when port 80 and 443 are exposed is as follows:
* If a container has a usable cert, port 80 will redirect to 443 for that container so that HTTPS
is always preferred when available.
* If the container does not have a usable cert, a 503 will be returned.
Note that in the latter case, a browser may get a connection error as no certificate is available
to establish a connection. A self-signed or generic cert named `default.crt` and `default.key`
will allow a client browser to make a SSL connection (likely w/ a warning) and subsequently receive
a 503.
To serve traffic in both SSL and non-SSL modes without redirecting to SSL, you can include the
environment variable `HTTPS_METHOD=noredirect` (the default is `HTTPS_METHOD=redirect`). You can also
disable the non-SSL site entirely with `HTTPS_METHOD=nohttp`. `HTTPS_METHOD` must be specified
on each container for which you want to override the default behavior. If `HTTPS_METHOD=noredirect` is
used, Strict Transport Security (HSTS) is disabled to prevent HTTPS users from being redirected by the
client. If you cannot get to the HTTP site after changing this setting, your browser has probably cached
the HSTS policy and is automatically redirecting you back to HTTPS. You will need to clear your browser's
HSTS cache or use an incognito window / different browser.
### Basic Authentication Support
In order to be able to secure your virtual host, you have to create a file named after its equivalent VIRTUAL_HOST variable in the directory
/etc/nginx/htpasswd/$VIRTUAL_HOST
```
$ docker run -d -p 80:80 -p 443:443 \
-v /path/to/htpasswd:/etc/nginx/htpasswd \
-v /path/to/certs:/etc/nginx/certs \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
jwilder/nginx-proxy
```
You'll need apache2-utils on the machine where you plan to create the htpasswd file. Follow these [instructions](http://httpd.apache.org/docs/2.2/programs/htpasswd.html)
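For example, on the host (assuming `foo.bar.com` is the virtual host and `/path/to/htpasswd` is the mounted directory):
```console
$ htpasswd -c /path/to/htpasswd/foo.bar.com username
```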
### Custom Nginx Configuration
If you need to configure Nginx beyond what is possible using environment variables, you can provide custom configuration files on either a proxy-wide or per-`VIRTUAL_HOST` basis.
#### Replacing default proxy settings
If you want to replace the default proxy settings for the nginx container, add a configuration file at `/etc/nginx/proxy.conf`. A file with the default settings would include, among other directives:
```nginx
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
```
***NOTE***: If you provide this file it will replace the defaults; you may want to check the .tmpl file to make sure you have all of the needed options.
***NOTE***: The default configuration blocks the `Proxy` HTTP request header from being sent to downstream servers. This prevents attackers from using the so-called [httpoxy attack](http://httpoxy.org). There is no legitimate reason for a client to send this header, and there are many vulnerable languages / platforms (`CVE-2016-5385`, `CVE-2016-5386`, `CVE-2016-5387`, `CVE-2016-5388`, `CVE-2016-1000109`, `CVE-2016-1000110`, `CERT-VU#797896`).
#### Proxy-wide
To add settings on a proxy-wide basis, add your configuration file under `/etc/nginx/conf.d` using a name ending in `.conf`.
This can be done in a derived image by creating the file in a `RUN` command or by `COPY`ing the file into `conf.d`:
```Dockerfile
FROM jwilder/nginx-proxy
RUN { \
    echo 'server_tokens off;'; \
    echo 'client_max_body_size 100m;'; \
    } > /etc/nginx/conf.d/my_proxy.conf
```
Or it can be done by mounting in your custom configuration in your `docker run` command:
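A sketch of such a mount (file name matching the example above):
```console
$ docker run -d -p 80:80 \
  -v /path/to/my_proxy.conf:/etc/nginx/conf.d/my_proxy.conf:ro \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy
```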
To add settings on a per-`VIRTUAL_HOST` basis, add your configuration file under `/etc/nginx/vhost.d`. Unlike in the proxy-wide case, which allows multiple config files with any name ending in `.conf`, the per-`VIRTUAL_HOST` file must be named exactly after the `VIRTUAL_HOST`.
In order to allow virtual hosts to be dynamically configured as backends are added and removed, it makes the most sense to mount an external directory as `/etc/nginx/vhost.d` as opposed to using derived images or mounting individual configuration files.
For example, if you have a virtual host named `app.example.com`, you could provide a custom configuration for that host as follows:
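A sketch of what that could look like (the settings themselves are just examples):
```console
$ docker run -d -p 80:80 \
  -v /path/to/vhost.d:/etc/nginx/vhost.d:ro \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy
$ { echo 'server_tokens off;'; echo 'client_max_body_size 100m;'; } > /path/to/vhost.d/app.example.com
```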
If you are using multiple hostnames for a single container (e.g. `VIRTUAL_HOST=example.com,www.example.com`), the virtual host configuration file must exist for each hostname. If you would like to use the same configuration for multiple virtual host names, you can use a symlink:
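For instance (paths illustrative):
```console
$ ln -s /path/to/vhost.d/example.com /path/to/vhost.d/www.example.com
```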
If you want most of your virtual hosts to use a default single configuration and then override on a few specific ones, add those settings to the `/etc/nginx/vhost.d/default` file. This file
will be used on any virtual host which does not have a `/etc/nginx/vhost.d/{VIRTUAL_HOST}` file associated with it.
#### Per-VIRTUAL_HOST location configuration
To add settings to the "location" block on a per-`VIRTUAL_HOST` basis, add your configuration file under `/etc/nginx/vhost.d`
just like the previous section except with the suffix `_location`.
For example, if you have a virtual host named `app.example.com` and you have configured a proxy_cache `my-cache` in another custom file, you could tell it to use a proxy cache as follows:
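A sketch of such a file (the cache directives are illustrative):
```console
$ { echo 'proxy_cache my-cache;'; echo 'proxy_cache_valid 200 1d;'; } > /path/to/vhost.d/app.example.com_location
```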
If you are using multiple hostnames for a single container (e.g. `VIRTUAL_HOST=example.com,www.example.com`), the virtual host configuration file must exist for each hostname. If you would like to use the same configuration for multiple virtual host names, you can use a symlink:
If you want most of your virtual hosts to use a default single `location` block configuration and then override on a few specific ones, add those settings to the `/etc/nginx/vhost.d/default_location` file. This file
will be used on any virtual host which does not have a `/etc/nginx/vhost.d/{VIRTUAL_HOST}_location` file associated with it.
### Contributing
Before submitting pull requests or issues, please check GitHub to make sure an existing issue or pull request is not already open.
#### Running Tests Locally
To run tests, you'll need to install [bats 0.4.0](https://github.com/sstephenson/bats).
# Run the init logic if the default CMD was provided
if [[ $* == 'forego start -r' ]]; then
    _print_version
    _check_unix_socket
    _resolvers
    _setup_dhparam
    if [ -z "${TRUST_DOWNSTREAM_PROXY}" ]; then
        cat >&2 <<-EOT
Warning: TRUST_DOWNSTREAM_PROXY is not set; defaulting to "true". For security, you should explicitly set TRUST_DOWNSTREAM_PROXY to "false" if there is not a trusted reverse proxy in front of this proxy.
Warning: The default value of TRUST_DOWNSTREAM_PROXY might change to "false" in a future version of nginx-proxy. If you require TRUST_DOWNSTREAM_PROXY to be enabled, explicitly set it to "true".
This test suite is implemented on top of the [Bats](https://github.com/sstephenson/bats/blob/master/README.md) test framework. It is intended to verify the correct behavior of the Docker image `jwilder/nginx-proxy:bats`.
Install requirements
--------------------
You need [Docker Compose v2](https://docs.docker.com/compose/install/linux/), [python 3.9](https://www.python.org/) and [pip](https://pip.pypa.io/en/stable/installation/) installed. Then run the commands:
Note: By default this test suite relies on Docker Compose v2 with the command `docker compose`. It still supports Docker Compose v1 via the `DOCKER_COMPOSE` environment variable:
DOCKER_COMPOSE=docker-compose pytest
Run one single test module
--------------------------
pytest test_nominal.py
Run the test suite from a Docker container
------------------------------------------
If you cannot (or don't want to) install pytest and its requirements on your computer, you can use the nginx-proxy-tester docker image to run the test suite from a Docker container.
make test-debian
or if you want to test the alpine flavor:
make test-alpine
Write a test module
-------------------
This test suite uses [pytest](http://doc.pytest.org/en/latest/). The [conftest.py](conftest.py) file will be automatically loaded by pytest and will provide you with two useful pytest [fixtures](https://docs.pytest.org/en/latest/explanation/fixtures.html):
- docker_compose
- nginxproxy
### docker_compose fixture
When using the `docker_compose` fixture in a test, pytest will try to start the [Docker Compose](https://docs.docker.com/compose/) services corresponding to the current test module, based on the test module filename.
By default, if your test module file is `test/test_subdir/test_example.py`, then the `docker_compose` fixture will try to load the following files, [merging them](https://docs.docker.com/reference/compose-file/merge/) in this order:
1. `test/compose.base.yml`
2. `test/test_subdir/compose.base.override.yml` (if it exists)
3. `test/test_subdir/test_example.yml`
The fixture will run the _docker compose_ command with the `-f` option to load the given compose files. So you can test your docker compose file syntax by running it yourself with:
docker compose -f test/compose.base.yml -f test/test_subdir/test_example.yml up -d
The first file contains the base configuration of the nginx-proxy container common to most tests:
```yaml
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy:test
    container_name: nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    ports:
      - "80:80"
      - "443:443"
```
The second optional file allows you to override this base configuration for all test modules in a subfolder.
The third file contains the services and overrides specific to a given test module.
This automatic merge can be bypassed by using a file named `test_example.base.yml` (instead of `test_example.yml`). When this file exists, it will be the only one used by the test and no merge with other compose files will automatically occur.
The `docker_compose` fixture also sets the `PYTEST_MODULE_PATH` environment variable to the absolute path of the current test module directory, so it can be used to mount files or directories relative to the current test.
In case you are running pytest from within a docker container, the `docker_compose` fixture will make sure the container running pytest is attached to all docker networks. That way, your tests will be able to reach any of them.
In your tests, you can use the `docker_compose` variable to query and command the docker daemon as it provides you with a [client from the docker python module](https://docker-py.readthedocs.io/en/4.4.4/client.html#client-reference).
Also this fixture alters the way the python interpreter resolves domain names to IP addresses in the following ways:
Any domain name containing the substring `nginx-proxy` will resolve to `127.0.0.1` if the tests are executed on a Darwin (macOS) system, otherwise the IP address of the container that was created from the `nginxproxy/nginx-proxy:test` image.
So, in tests, all the following domain names will resolve to either localhost or the nginx-proxy container's IP:
- `nginx-proxy`
- `nginx-proxy.com`
- `www.nginx-proxy.com`
- `www.nginx-proxy.test`
- `www.nginx-proxy`
- `whatever.nginx-proxyooooooo`
- ...
Any domain name ending with `XXX.container.docker` will resolve to `127.0.0.1` if the tests are executed on a Darwin (macOS) system, otherwise the IP address of the container named `XXX`.
So, on a non-Darwin system:
- `web1.container.docker` will resolve to the IP address of the `web1` container
- `f00.web1.container.docker` will resolve to the IP address of the `web1` container
- `anything.whatever.web2.container.docker` will resolve to the IP address of the `web2` container
Otherwise, domain names are resolved as usual using your system DNS resolver.
### nginxproxy fixture
The `nginxproxy` fixture will provide you with a replacement for the python [requests](https://pypi.python.org/pypi/requests/) module. This replacement will simply retry a request up to 30 times if it receives HTTP error 404 or 502. This error occurs when you try to send queries to nginx-proxy too early after the container creation.
Also this requests replacement is preconfigured to use the Certificate Authority root certificate [certs/ca-root.crt](certs/) to validate https connections.
Furthermore, the nginxproxy methods accept an additional keyword parameter, `ipv6`, which forces requests made against containers to use the container's IPv6 address when set to `True`. If IPv6 is not supported by the system or docker, that particular test will be skipped.
r = nginxproxy.get("http://web1.nginx-proxy.tld/port", ipv6=True)
assert r.status_code == 200
assert r.text == "answer from port 81\n"
### The web docker image
When you run the `make build-webserver` command, you build a [`web`](requirements/README.md) docker image which is convenient for running a small web server in a container. This image can produce containers that listen on multiple ports at the same time.
### Testing TLS
If you need to create server certificates, use the [`certs/create_server_certificate.sh`](certs/) script. Pytest will be able to validate any certificate issued from this script.
`create_server_certificate.sh` is a script helping with issuing server certificates that can be used to provide TLS on web servers.
It also creates a Certificate Authority (CA) root key and certificate. This CA root certificate can be used to validate the server certificates it generates.
Create a server certificate for wildcard domain `*.example.com`:
./create_server_certificate.sh "*.example.com"
Note that you need to use quotes around the domain string or the shell would expand `*`.
Will produce:
- `*.example.com.key`
- `*.example.com.crt`
Again, to prevent your shell from expanding `*`, use quotes. i.e.: `cat "*.example.com.crt"`.
Such a server certificate would be valid for domains:
- `foo.example.com`
- `bar.example.com`
but not for domains:
- `example.com`
- `foo.bar.example.com`
### Wildcard domain on multiple levels
While you can technically create a server certificate for wildcard domain `*.example.com` and alternative name `*.*.example.com`, client implementations generally do not support multiple wildcards in a domain name.
For instance, a python script using urllib3 would fail to validate domain `foo.bar.example.com` presenting a certificate with name `*.*.example.com`. It is advised to stay away from producing such certificates.
This directory contains resources to build the Docker images the tests depend on.
# Build images
make build-webserver
# python-requirements.txt
If you want to run the test suite from your computer, you need python and a few python modules.
The _python-requirements.txt_ file describes the python modules required. To install them, use
pip:
pip install -r python-requirements.txt
If you don't want to run the test from your computer, you can run the tests from a docker container, see the _pytest.sh_ script.
# Images
## web
This container will run one or many webservers, each of them listening on a single port.
Ports are specified using the `WEB_PORTS` environment variable:
docker run -d -e WEB_PORTS=80 web # will create a container running one webserver listening on port 80
docker run -d -e WEB_PORTS="80 81" web # will create a container running two webservers, one listening on port 80 and a second one listening on port 81
The webserver answers on two paths:
- `/headers`
- `/port`
```
$ docker run -d -e WEB_PORTS=80 -p 80:80 web
$ curl http://127.0.0.1:80/headers
Host: 127.0.0.1
User-Agent: curl/7.47.0
Accept: */*
$ curl http://127.0.0.1:80/port
answer from port 80
```
## nginx-proxy-tester
This is an optional requirement which is useful if you cannot (or don't want to) install pytest and its requirements on your computer. In this case, you can use the `nginx-proxy-tester` docker image to run the test suite from a Docker container.
To use this image, it is mandatory to run the container using the `pytest.sh` shell script. The script will build the image and run a container from it with the appropriate volumes and settings.