[Diagram: Nginx reverse proxy architecture. Clients (browser/API client, mobile app, curl/CLI) connect over HTTPS on :443 to Nginx, which provides SSL termination, load balancing, caching, compression, and rate limiting, then proxies HTTP to upstream backend servers: Node.js on :3000 (app.knowledgexchange.xyz), Python on :8000 (api.knowledgexchange.xyz), and Docker on :9000 (admin.knowledgexchange.xyz). Nginx routes client requests to the appropriate backend server.]

Nginx is one of the most widely deployed reverse proxies in the world. Whether you are routing traffic to a Node.js backend, load balancing across multiple application servers, or terminating SSL in front of a Docker container, Nginx handles it all with minimal resource usage and exceptional performance. This guide covers everything from basic proxy_pass directives to advanced configurations including WebSocket support, caching, and health checks.

What Is a Reverse Proxy?

A reverse proxy sits between clients (browsers, mobile apps, API consumers) and your backend servers. Instead of clients connecting directly to your application, they connect to Nginx, which forwards each request to the appropriate backend.

Benefits of using a reverse proxy include:

  • SSL termination — Handle HTTPS at the proxy level so backend apps use plain HTTP
  • Load balancing — Distribute traffic across multiple backend instances
  • Caching — Serve frequently requested content without hitting the backend
  • Security — Hide backend server details and filter malicious requests
  • Compression — Compress responses before sending them to clients
  • Centralized logging — Aggregate access logs in one place

Basic Reverse Proxy Configuration

The simplest reverse proxy configuration forwards all requests to a backend application:

server {
    listen 80;
    server_name app.knowledgexchange.xyz;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

This tells Nginx to forward all requests arriving at app.knowledgexchange.xyz to a backend running on port 3000.

Important: Always include proxy_set_header Host $host;. Without it, Nginx sends the value of $proxy_host (the address from proxy_pass) as the Host header, so the backend never sees the original hostname, which breaks virtual hosting and many web frameworks.

Preserving Client IP Information

When Nginx proxies a request, the backend sees Nginx’s IP address as the client. To preserve the real client IP, set these headers:

proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;

Configuring Your Backend to Trust Proxy Headers

Your application must be configured to trust these headers. For example, in Express.js:

app.set('trust proxy', '127.0.0.1');

In Django:

SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
USE_X_FORWARDED_HOST = True

Warning: Only trust proxy headers from known IP addresses. If your application blindly trusts X-Forwarded-For, an attacker can spoof their IP by setting the header themselves.
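
On the Nginx side, a complementary mitigation is to overwrite the header instead of appending to it, so the backend only ever sees the address Nginx itself observed. A stricter variant of the earlier header block:

# Discard any client-supplied X-Forwarded-For; send only the direct peer address
proxy_set_header X-Forwarded-For $remote_addr;

This discards legitimate proxy chains too (for example, a CDN in front of Nginx), so use it only when clients connect to Nginx directly.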

SSL Termination

Handle HTTPS at the Nginx level so your backend applications do not need to manage certificates:

server {
    listen 443 ssl http2;
    server_name app.knowledgexchange.xyz;

    ssl_certificate /etc/letsencrypt/live/app.knowledgexchange.xyz/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.knowledgexchange.xyz/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # HSTS header
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name app.knowledgexchange.xyz;
    return 301 https://$host$request_uri;
}

This configuration terminates SSL at Nginx and forwards plain HTTP to the backend. The backend only needs to listen on 127.0.0.1:3000 — it never handles TLS directly. For a deeper dive into SSL configuration, see our guide on SSL certificates and key lengths.
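
One optional addition cuts handshake overhead for returning clients: a shared TLS session cache. A sketch for the server block above (sizes are illustrative; per the Nginx docs, roughly 1 MB holds about 4,000 sessions):

    # Reuse TLS sessions so returning clients skip the full handshake
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;  # tickets bypass the cache; disable unless ticket keys are rotated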

WebSocket Proxy

WebSocket connections require special handling because they upgrade from HTTP to a persistent bidirectional connection:

location /ws/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # Prevent timeout on idle WebSocket connections
    proxy_read_timeout 86400s;
    proxy_send_timeout 86400s;
}

The critical headers are Upgrade and Connection — they tell Nginx to pass the WebSocket upgrade request through to the backend. The extended timeouts prevent Nginx from closing idle WebSocket connections (the default is 60 seconds).

If your application serves WebSocket and plain HTTP traffic on the same path, use a map to set the Connection header conditionally: when the client sends an Upgrade header, Nginx forwards Connection: upgrade; for ordinary requests it sends Connection: close.

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
}

Load Balancing

Distribute traffic across multiple backend servers using the upstream block:

upstream backend_pool {
    server 10.0.1.10:3000;
    server 10.0.1.11:3000;
    server 10.0.1.12:3000;
}

server {
    listen 80;
    server_name app.knowledgexchange.xyz;

    location / {
        proxy_pass http://backend_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Load Balancing Algorithms

Nginx supports several load balancing methods:

Round Robin (default): Requests are distributed evenly across servers in order.

upstream backend_pool {
    server 10.0.1.10:3000;
    server 10.0.1.11:3000;
    server 10.0.1.12:3000;
}

Least Connections: Sends requests to the server with the fewest active connections. Best for backends with varying response times.

upstream backend_pool {
    least_conn;
    server 10.0.1.10:3000;
    server 10.0.1.11:3000;
    server 10.0.1.12:3000;
}

IP Hash: Routes requests from the same client IP to the same backend server. Useful for session persistence.

upstream backend_pool {
    ip_hash;
    server 10.0.1.10:3000;
    server 10.0.1.11:3000;
    server 10.0.1.12:3000;
}

Weighted Distribution: Assign weights to send more traffic to more powerful servers. With the weights below (5 + 3 + 2), the servers receive roughly 50%, 30%, and 20% of requests respectively.

upstream backend_pool {
    server 10.0.1.10:3000 weight=5;
    server 10.0.1.11:3000 weight=3;
    server 10.0.1.12:3000 weight=2;
}

Health Checks and Failover

Open-source Nginx uses passive health checks: a server is marked unavailable after real requests to it fail, rather than via dedicated probes (active health checks are an NGINX Plus feature). Configure the failure thresholds per server:

upstream backend_pool {
    server 10.0.1.10:3000 max_fails=3 fail_timeout=30s;
    server 10.0.1.11:3000 max_fails=3 fail_timeout=30s;
    server 10.0.1.12:3000 backup;  # Only used when all primary servers are down
}

  • max_fails=3 — Mark the server as unavailable after 3 failed attempts within the fail_timeout window
  • fail_timeout=30s — Both the window in which failures are counted and how long the server stays marked down before being retried
  • backup — Only route traffic here when all primary servers are down
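
Failover also depends on what happens to the request that hit the failing server. The proxy_next_upstream directive controls when Nginx retries a request on another pool member; a minimal sketch, assuming the backend_pool upstream above (in production, retry only conditions that are safe for non-idempotent requests):

location / {
    proxy_pass http://backend_pool;

    # Retry on another server after connection errors, timeouts, or gateway errors
    proxy_next_upstream error timeout http_502 http_503;
    proxy_next_upstream_tries 2;
    proxy_next_upstream_timeout 10s;
}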

Caching Proxy Responses

Cache backend responses to reduce load and improve response times:

# Define the cache zone (place in http block)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m use_temp_path=off;

server {
    listen 80;
    server_name app.knowledgexchange.xyz;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_cache app_cache;
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503;

        # Add cache status header for debugging
        add_header X-Cache-Status $upstream_cache_status;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Bypass cache for authenticated requests
    location /api/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_cache app_cache;
        proxy_cache_bypass $http_authorization $cookie_session;
        proxy_no_cache $http_authorization $cookie_session;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

The X-Cache-Status header helps with debugging: HIT means the response was served from cache, MISS means it was fetched from the backend, and BYPASS means caching was skipped.

Tip: Use proxy_cache_use_stale to serve stale cached content when the backend is down. This provides a safety net during backend outages.
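
Under load, many clients can miss the cache simultaneously and stampede the backend for the same URL. proxy_cache_lock collapses those concurrent misses into a single upstream fetch; a sketch of additions to the cached location above (timeout values are illustrative):

# Only one request populates a given cache entry; the rest wait for it
proxy_cache_lock on;
proxy_cache_lock_timeout 5s;

# Cache a URL only after it has been requested twice, keeping one-offs out
proxy_cache_min_uses 2;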

Rate Limiting

Protect your backend from abuse and DDoS attacks:

# Define rate limit zone (place in http block)
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login_limit:10m rate=1r/s;

server {
    listen 80;
    server_name app.knowledgexchange.xyz;

    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        limit_req_status 429;

        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    location /auth/login {
        limit_req zone=login_limit burst=5;
        limit_req_status 429;

        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

This allows 10 requests per second to the API with a burst buffer of 20, and limits login attempts to 1 per second with a burst of 5.
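
limit_req caps request frequency; to also cap how many connections a single client may hold open at once (useful against slow-read abuse), add connection limiting. A sketch extending the /api/ location above:

# Define the zone in the http block
limit_conn_zone $binary_remote_addr zone=conn_limit:10m;

server {
    location /api/ {
        # At most 20 concurrent connections per client IP
        limit_conn conn_limit 20;
        limit_conn_status 429;
    }
}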

Proxy Buffering

Nginx buffers responses from the backend by default. This frees the backend connection quickly, even if the client is on a slow connection:

location / {
    proxy_pass http://127.0.0.1:3000;

    # Enable buffering (default)
    proxy_buffering on;
    proxy_buffer_size 4k;
    proxy_buffers 8 16k;
    proxy_busy_buffers_size 32k;

    # For large file downloads
    proxy_max_temp_file_size 1024m;

    proxy_set_header Host $host;
}

For real-time streaming or Server-Sent Events (SSE), disable buffering (a backend can also opt out per response by sending an X-Accel-Buffering: no header):

location /events/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_buffering off;
    proxy_cache off;

    # SSE-specific settings
    proxy_set_header Connection '';
    proxy_http_version 1.1;
    chunked_transfer_encoding off;

    proxy_set_header Host $host;
}

Proxying to Docker Containers

When running applications in Docker, proxy to the container’s internal network:

# Option 1: Proxy to a published port
upstream webapp {
    server 127.0.0.1:8080;
}

# Option 2: Proxy to Docker's internal network by container name
# (when Nginx also runs in a container on the same network; keep only one
# of these two upstream blocks, since duplicate upstream names fail nginx -t)
upstream webapp {
    server app-container:3000;
}

server {
    listen 80;
    server_name app.knowledgexchange.xyz;

    location / {
        proxy_pass http://webapp;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

If you are using Docker Compose, define both services on the same network:

# docker-compose.yml
services:
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
    networks:
      - webnet

  app:
    build: .
    expose:
      - "3000"
    networks:
      - webnet

networks:
  webnet:

Notice that the app service uses expose (internal only) rather than ports (published to the host), so nothing reaches it except through Nginx, which handles all external-facing traffic. Strictly speaking, expose is documentation: any container on the webnet network can already reach app on port 3000.
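
One Docker-specific caveat: Nginx resolves upstream hostnames once at startup, so if the app container restarts with a new IP, the proxy keeps sending traffic to the stale address. When Nginx runs inside Docker, you can use Docker's embedded DNS with a variable to force periodic re-resolution; a sketch using the app service name from the Compose file above:

server {
    listen 80;

    # Docker's embedded DNS server; re-resolve instead of caching forever
    resolver 127.0.0.11 valid=10s;
    set $upstream http://app:3000;

    location / {
        proxy_pass $upstream;
        proxy_set_header Host $host;
    }
}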

Multiple Apps on Different Subdomains

Route traffic to different backend applications based on the subdomain:

# Main web application
server {
    listen 80;
    server_name app.knowledgexchange.xyz;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# API server
server {
    listen 80;
    server_name api.knowledgexchange.xyz;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Admin dashboard
server {
    listen 80;
    server_name admin.knowledgexchange.xyz;

    # Restrict access by IP
    allow 10.0.0.0/8;
    allow 192.168.1.0/24;
    deny all;

    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
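
IP allowlisting can be combined with HTTP basic auth for defense in depth. A sketch of directives to add alongside the allow/deny rules above (the htpasswd path is an assumption; create the file with a tool such as htpasswd):

    satisfy all;  # require both an allowed IP and valid credentials
    auth_basic "Admin area";
    auth_basic_user_file /etc/nginx/.htpasswd;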

# Monitoring tools (Grafana, Prometheus)
server {
    listen 80;
    server_name monitoring.knowledgexchange.xyz;

    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /prometheus/ {
        proxy_pass http://127.0.0.1:9090/;
        proxy_set_header Host $host;
    }
}
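
Note the trailing slash in the Prometheus proxy_pass. When proxy_pass includes a URI part (even just /), Nginx replaces the matched location prefix before forwarding; without it, the original URI is passed through unchanged:

proxy_pass http://127.0.0.1:9090;   # /prometheus/graph  ->  /prometheus/graph
proxy_pass http://127.0.0.1:9090/;  # /prometheus/graph  ->  /graph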

Security Considerations

Harden your reverse proxy with these security headers and practices:

server {
    listen 443 ssl http2;
    server_name app.knowledgexchange.xyz;

    # Hide Nginx version
    server_tokens off;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;  # legacy header; modern browsers ignore it
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Content-Security-Policy "default-src 'self'" always;

    # Limit request body size
    client_max_body_size 10m;

    # Timeouts to prevent slowloris attacks
    client_body_timeout 10s;
    client_header_timeout 10s;
    send_timeout 10s;

    # Block hidden files (e.g., .git/, .env) and common exploit paths
    location ~ /\.(git|env) {
        return 404;
    }
    location ~* \.(bak|sql|log)$ {
        return 404;
    }

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Hide backend implementation details from responses
        proxy_hide_header X-Powered-By;
    }
}

Tip: Use server_tokens off; in your http block to prevent Nginx from revealing its version number in error pages and response headers. This is a simple but effective hardening measure.

Performance Tuning

Optimize Nginx for high-traffic reverse proxy workloads:

# Place in the http block
# Connection pooling to upstream
upstream backend {
    server 127.0.0.1:3000;
    keepalive 64;
    keepalive_timeout 60s;
    keepalive_requests 1000;
}

server {
    listen 80;
    server_name app.knowledgexchange.xyz;

    # Enable gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_types
        text/plain
        text/css
        text/javascript
        application/json
        application/javascript
        application/xml
        image/svg+xml;

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # Enable keepalive to upstream
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Connection timeouts
        proxy_connect_timeout 5s;
        proxy_read_timeout 30s;
        proxy_send_timeout 30s;
    }

    # Serve static files directly (bypass proxy)
    location /static/ {
        alias /var/www/app/static/;
        expires 30d;
        add_header Cache-Control "public, immutable";
        access_log off;
    }
}

Key performance tips:

  • Keep-alive connections to backends reduce TCP handshake overhead
  • Serve static files directly from Nginx instead of proxying to the backend
  • Enable gzip for text-based responses to reduce bandwidth
  • Set appropriate timeouts to release connections from slow or dead backends quickly
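
These settings operate per connection and assume the worker layer can keep up. A baseline sketch of main-context tuning for nginx.conf (values are illustrative starting points, not universal recommendations):

# Main context (nginx.conf)
worker_processes auto;         # one worker per CPU core
worker_rlimit_nofile 65535;    # raise the per-worker file descriptor limit

events {
    worker_connections 8192;   # simultaneous connections per worker
    multi_accept on;           # accept multiple new connections per wakeup
}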

Testing Your Configuration

Always validate your Nginx configuration before reloading:

# Test configuration syntax
sudo nginx -t

# Reload without downtime
sudo systemctl reload nginx

# Watch the error log for issues
sudo tail -f /var/log/nginx/error.log

To test proxy headers, use curl:

# Check response headers
curl -I https://app.knowledgexchange.xyz

# Verify X-Cache-Status for caching (repeat the request to watch MISS become HIT)
curl -s -o /dev/null -D - -H "Host: app.knowledgexchange.xyz" http://127.0.0.1 | grep -i x-cache-status
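
For ongoing visibility beyond one-off curl checks, the stub_status module (included in most distribution packages) exposes basic connection counters. A minimal sketch for a server block, restricted to localhost:

location /nginx_status {
    stub_status;
    allow 127.0.0.1;
    deny all;
}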

Summary

Nginx as a reverse proxy provides a reliable, high-performance layer between your users and your backend applications. Start with a basic proxy_pass configuration, then layer on SSL termination, caching, rate limiting, and load balancing as your needs grow. The configurations in this guide form a solid foundation for production deployments.

For more on Nginx configuration, explore our Nginx articles. To set up SSL certificates for your proxy, check out our guide on creating self-signed certificates or learn about Nginx build versions and modules.