Effective Routing in Nginx: A Developer's Cheat Sheet


Introduction to Routing

Understanding the Basics

Nginx (pronounced "engine-x") has established itself in the market as one of the most powerful and flexible web servers and reverse proxy servers available. Designed to tackle the C10K problem—handling ten thousand simultaneous connections—Nginx has exceeded expectations by offering a high-performance, reliable, and scalable architecture. This is achieved through its asynchronous, event-driven request processing model, making it ideal for modern high-traffic web applications, including static content delivery, proxying, and load balancing.

When functioning as a web server, Nginx manages incoming requests from clients (such as web browsers) and serves them by delivering the requested content, such as HTML pages, images, and videos. In its role as a reverse proxy, Nginx accepts client requests, forwards them to one or more internal servers, processes the responses from these servers, and returns them to the client. This capability makes Nginx an excellent tool for enhancing the performance, reliability, and scalability of web applications.
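As a minimal sketch of both roles (the hostnames, paths, and backend address below are placeholders):

```nginx
# Web server role: serve static files from a local directory
server {
    listen 80;
    server_name static.example.com;
    root /var/www/static;   # HTML, images, and videos are read from disk
}

# Reverse proxy role: forward requests to an internal application server
server {
    listen 80;
    server_name app.example.com;
    location / {
        proxy_pass http://127.0.0.1:8000;  # assumed internal backend address
    }
}
```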

Effective routing in Nginx is crucial for ensuring high performance and security of web applications. Routing impacts performance by managing how quickly and efficiently incoming requests are handled. For example, optimizing routes for static resources can significantly speed up page load times, enhancing the overall user experience.

From a security standpoint, effective routing helps protect applications and data from external threats. Configuring Nginx to block unwanted traffic, restrict access to certain resources, and redirect requests through secure channels contributes to maintaining data confidentiality and integrity. Additionally, using reverse proxying can conceal the internal network architecture from the outside world, further enhancing security.

Nginx Configuration Files

Nginx configurations are centrally managed through a set of files located in specific directories. The main configuration file, nginx.conf, is typically found in the /etc/nginx directory on most Linux distributions. This file contains global settings and defines the server's core operational parameters, including user, process, and logging settings.

Additionally, the /etc/nginx/conf.d directory is often used to store configuration files for individual sites or applications, which are automatically included in the main configuration via the include directive in nginx.conf. This approach simplifies configuration management by dividing it into logical sections and files based on their purpose.

An Nginx configuration file consists of directives grouped into contexts, which determine the scope of each directive. Nginx supports several types of contexts, including main, http, server, and location.

  • main (Global Context): Contains directives that apply to the entire server. Here, global parameters such as the user and group running Nginx, and logging settings are configured.
  • http: This context includes directives for handling HTTP(S) traffic. Within it, you can configure proxying, caching, compression, and other HTTP-related settings.
  • server: Defines the settings for a specific virtual server (or site). You can specify listening ports, server names, and paths to SSL certificates. A single http context can contain multiple server blocks, each configured independently.
  • location: Specifies how to handle particular requests based on the URI. Within a server context, there can be multiple location blocks, each configured to serve different URI paths or patterns.

Each directive within a context defines specific Nginx behavior. For example, the listen directive within a server context specifies the port on which the server should listen for incoming connections, while proxy_pass within a location context configures request forwarding to another server.
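Putting the contexts together, a skeletal nginx.conf might look like this (paths and addresses are illustrative):

```nginx
# main (global) context
user              www-data;
worker_processes  auto;
error_log         /var/log/nginx/error.log;

events {
    worker_connections 1024;
}

http {
    # HTTP-wide settings: proxying, caching, compression, ...
    gzip on;
    include /etc/nginx/conf.d/*.conf;  # pulls in per-site configuration files

    server {
        # one virtual server (site)
        listen      80;
        server_name example.com;

        location / {
            # handling for a specific URI prefix
            proxy_pass http://127.0.0.1:8080;
        }
    }
}
```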

Failover Redirection

Automatic Failover to a Backup Server

Failover redirection in Nginx is a mechanism that allows automatic switching to a backup server or service in case of errors or the main server becoming unavailable. This feature is essential for ensuring high availability and reliability of web applications, as it minimizes downtime and maintains continuous service for users.

Setting up failover redirection in Nginx is done with the error_page directive in combination with named locations. The error_page directive lets you list one or more error codes (e.g., 502 Bad Gateway) together with the URI, external URL, or named location to which Nginx should route the request when one of those errors occurs. Named locations give you fine-grained control over how the redirected requests are handled.

For instance, imagine you have a primary web server handling most incoming requests and a backup server that takes over if the primary becomes unavailable. Here's how you can implement this in Nginx:

server {
    listen 80;
    server_name www.example.com;
    
    location / {
        proxy_pass http://main_server/;
        error_page 502 503 504 @fallback;
    }
    
    location @fallback {
        proxy_pass http://fallback_server/;
    }
}

In this example, the primary server uses a location block with proxy_pass directing requests to http://main_server/. The error_page directive specifies that in the event of errors 502, 503, or 504 (e.g., when the main server is down), requests should be redirected to the named location @fallback. Note that error_page catches errors generated by Nginx itself, such as a failed connection to the upstream; to also intercept 5xx status codes that the upstream returns in its own responses, additionally enable proxy_intercept_errors on;.

The named location @fallback is configured to forward requests to the backup server at http://fallback_server/. This setup ensures automatic switchover to the backup server if the primary server fails to respond, thereby reducing the downtime of the web application.

This approach provides flexibility and reliability in serving web applications, minimizing risks associated with individual infrastructure component failures. Utilizing failover redirection in Nginx is an effective way to ensure high availability and continuous operation of web services, which is critical for business operations and user experience.
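An alternative (or complement) to error_page is to declare both servers in an upstream block and mark the secondary one as backup; Nginx then sends traffic to it only when the primary is considered unavailable. The addresses below are placeholders:

```nginx
upstream app_backend {
    # Mark the primary as failed after 3 errors within 30 seconds
    server main_server:8080 max_fails=3 fail_timeout=30s;
    server fallback_server:8080 backup;  # used only while the primary is down
}

server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass http://app_backend;
    }
}
```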

Benefits for SEO and Availability

Website downtime has a significant negative impact on search engine rankings. Search engines like Google and Bing strive to give users access to quality, available content. When a website is unavailable, the user experience degrades, which can lead to a drop in search rankings. Prolonged periods of unavailability are especially harmful, as they can signal unreliability to search engines.

Failover redirection through Nginx helps minimize downtime by automatically redirecting traffic to a backup server or page when the main server is unavailable. This not only improves site availability but also maintains its SEO rankings by preventing traffic loss and avoiding negative perceptions from search engines.

Continuous access to web services is a key factor in ensuring high user satisfaction. In an era where users expect instant access to information and services, any delays or outages can lead to immediate user departure to other websites. Failover redirection ensures uninterrupted service operation even during technical failures, thereby maintaining user loyalty and satisfaction.

Moreover, continuous access to services upholds user and partner trust, which is particularly important for commercial websites and online services where trust is directly linked to brand reputation and financial success. Failover redirection helps maintain this trust by ensuring that services are always available, regardless of unforeseen circumstances.

Local Development and Testing

Setting Up a Development Subdomain

For convenient development and testing of new website features or web applications, a dedicated subdomain like dev.example.com is often used. This allows developers to work in an isolated environment without affecting the main site's operation. Creating a subdomain involves several steps:

  1. Registering the Subdomain: First, register the subdomain through your domain registrar's control panel or your hosting provider's management interface. For dev.example.com, you need to add the appropriate A or CNAME record in the DNS settings of example.com, pointing to the IP address of your development server.
  2. Configuring the Web Server: Next, on the web server that will serve the subdomain, set up a new server block (for Nginx) or a virtual host (for Apache) to listen for requests addressed to dev.example.com. In the Nginx configuration file, it might look like this:

    server {
        listen 80;
        server_name dev.example.com;
        
        location / {
            proxy_pass http://192.168.1.100:8080;
            # Other necessary directives...
        }
    }
    

    This example assumes that your development environment is reachable at 192.168.1.100 on port 8080; if it runs on the same host as Nginx, use 127.0.0.1:8080 instead. The proxy_pass directive forwards all requests to the subdomain to the development environment.

Implementing HTTPS in Local Development

In modern web development, HTTPS is not a luxury but a necessity. This is especially important when working with webhooks, which require a reliable and secure method for data transmission between servers and applications. HTTPS helps protect transmitted data from interception and modification through encryption, ensuring data confidentiality and integrity.

In the context of development and testing, using HTTPS allows developers to build and verify applications in environments that closely mimic real-world deployment. This is particularly critical for webhooks, as many external services require HTTPS to ensure the security of transmitted data. Thus, developers can ensure that their systems can correctly handle encrypted traffic and verify the authenticity and source of incoming requests.

Let's Encrypt is a free, automated, and open certificate authority that provides SSL/TLS certificates to secure websites. While Let's Encrypt is traditionally used to protect internet-facing domains, it can also be used to obtain certificates for local development subdomains. This enables developers to test applications in production-like conditions, including HTTPS operations.

To use Let's Encrypt with local subdomains, ensure that the subdomain is accessible from the internet and that its DNS records correctly point to the development server or a tunneling service. After that, you can use Let's Encrypt clients like Certbot to automatically obtain and install certificates. We will explore this in more detail later.
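As a sketch, obtaining a certificate for a development subdomain with Certbot's webroot plugin might look like this (the domain, email, and webroot path are assumptions; adjust them to your setup):

```shell
# The webroot path must be the directory Nginx serves for /.well-known/acme-challenge/
sudo certbot certonly --webroot \
    -w /var/lib/letsencrypt/ \
    -d dev.example.com \
    --email admin@example.com --agree-tos --no-eff-email

# Verify that automatic renewal would succeed, without actually renewing
sudo certbot renew --dry-run
```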

Integration with Smart Home Systems

Configuring Routes for Device Management

Integrating a smart home with Nginx opens up exciting possibilities for centralized management of home devices over the internet. By using Nginx as a proxy server, you can set up specialized routes to control various smart home devices, such as lighting, smart plugs, vacuum cleaners, and thermostats. This is done by redirecting requests from an external interface to the internal addresses of devices or controllers, providing an additional layer of abstraction and security.

To start, you need to identify the internal IP addresses or hostnames of your smart devices. Then, using Nginx configuration, create routes that forward incoming requests to these devices. For example, to control smart lighting, you might set up the following route:

server {
    listen 80;
    server_name smart-home.example.com;
    
    location /lighting {
        proxy_pass http://192.168.1.99; # IP address of the lighting controller
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
    # Additional routes for other devices...
}

In this example, requests to smart-home.example.com/lighting are forwarded to the smart lighting device at 192.168.1.99. Similarly, you can configure routes for other devices by modifying the location path and the destination IP address.

Implementing such a configuration can greatly simplify interactions with your smart home, providing a single point of access to all devices. For instance:

  • Lighting Control: Imagine being able to turn your home lights on and off from anywhere in the world using just your smartphone or voice commands over the internet.
  • Vacuum Control: Schedule automated cleaning sessions or manually start the vacuum from work, ensuring you return to a clean home.
  • Climate Control: Adjust thermostats and air conditioners to optimal settings based on the time of day or external weather conditions, even when you're away from home.

Using Nginx to implement this integration not only offers a convenient and flexible way to manage a smart home but also adds an extra layer of security through the ability to configure SSL/TLS, authentication, and traffic encryption. This creates a reliable and secure management system that provides convenience and saves time in everyday life.

Security and Privacy

Given that smart home devices often control private and critical aspects of users' lives, it's essential to implement the following security measures:

Use HTTPS: Always use encrypted connections (HTTPS) to access and manage devices. This helps protect transmitted data from interception and ensures the confidentiality of communications. You can use certificates from trusted certificate authorities, including free options from Let's Encrypt.

Authentication and Authorization: Implement authentication and authorization mechanisms to control access to device management. This can include HTTP basic authentication, access tokens, or integration with identity management systems. Ensure that only authorized users can control smart home devices.
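For example, HTTP basic authentication in front of a device route can be sketched like this (the route and credential file path are assumptions):

```nginx
location /lighting {
    auth_basic           "Smart Home";          # realm shown in the browser's login prompt
    # Credential file created beforehand, e.g.: htpasswd -c /etc/nginx/.htpasswd admin
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass           http://192.168.1.99;   # lighting controller from the earlier example
}
```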

Access Restriction: Use firewalls and Access Control Lists (ACLs) to limit access to smart home devices only from specific IP addresses or network ranges. This minimizes the risk of unauthorized access.

Regular Updates: Ensure that the software on all smart home devices, including Nginx and any proxy servers or routers, is regularly updated. This helps protect the system from known vulnerabilities.

Protecting privacy and data in a smart home begins with recognizing that every device and communication channel can be a potential point of information leakage. Consider the following recommendations:

  • Data Encryption: Always use encryption for storing and transmitting sensitive data. This applies not only to communication between devices and servers but also to data storage on devices.
  • Principle of Least Privilege: Configure systems so that each component and user has only the privileges necessary to perform their functions. This reduces the risk of abuse and accidental access to sensitive functions.
  • Audit and Monitoring: Regularly audit configurations and monitor the system for unusual activities or unauthorized access attempts. This helps in timely detection and prevention of potential incidents.
  • Education and Awareness: Increase the awareness of all system users about the basics of cybersecurity and best practices for protecting privacy. Informed users can play a key role in safeguarding the system.

By adhering to these principles and recommendations, you can significantly enhance the security and privacy levels in integrating smart home systems with Nginx, ensuring a reliable and secure ecosystem for all devices and users.

Blocking Ports

Restricting Access to Unused Ports

One of the crucial steps in ensuring network security is blocking unused ports at the router level. This prevents unauthorized access to your internal network resources and reduces the risk of attacks. Here are some practical tips:

Inventory of Used Ports:
First, identify which ports are truly necessary for the operation of your applications and services. These might include SSH and web server ports (e.g., 22, 80, and 443), database ports if they need to be accessible externally, and any other specific ports used by your applications.

Configuring the Router's Firewall:
Use your router's firewall capabilities to block incoming and outgoing traffic on all ports except those that are explicitly allowed. Most modern routers provide a graphical interface for managing firewall rules. On the server itself, you can achieve the same with iptables; for example, on Ubuntu, blocking all ports except 22, 80, and 443:

Steps to Configure iptables to Block All Ports Except 22 (SSH), 80 (HTTP), and 443 (HTTPS):

  1. Temporarily Set Permissive Default Policies:
    Before starting the configuration, set the default policies to ACCEPT so that flushing the rules in the next step doesn't lock you out of the server.

    sudo iptables -P INPUT ACCEPT
    sudo iptables -P FORWARD ACCEPT
    sudo iptables -P OUTPUT ACCEPT
    
  2. Flush Existing iptables Rules:

    sudo iptables -F
    sudo iptables -X
    sudo iptables -t nat -F
    sudo iptables -t mangle -F
    
  3. Allow Incoming Traffic on Ports 22, 80, and 443:

    sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
    sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT
    
  4. Allow Outgoing Traffic for Established Connections:
    This allows your server to send responses to permitted incoming requests.

    sudo iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    
  5. Allow Outgoing Traffic on Ports 22, 80, and 443 (Optional):
    If your server initiates outbound connections on these ports (e.g., calling external APIs over HTTPS). In that case, also accept the related return traffic on INPUT; otherwise responses to those connections will be dropped once the INPUT policy is set to DROP.

    sudo iptables -A OUTPUT -p tcp --dport 22 -j ACCEPT
    sudo iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT
    sudo iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
    sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    
  6. Allow Loopback Interface (lo):
    This is essential for the proper functioning of internal server processes.

    sudo iptables -A INPUT -i lo -j ACCEPT
    sudo iptables -A OUTPUT -o lo -j ACCEPT
    
  7. Restore Strict Security Policies:
    After configuration, it's important to reinstate the original strict rules to ensure system security.

    sudo iptables -P INPUT DROP
    sudo iptables -P FORWARD DROP
    sudo iptables -P OUTPUT DROP
    
  8. Save iptables Settings:
    iptables settings are not saved automatically after a reboot. To save them, use iptables-save or install iptables-persistent.

    sudo iptables-save | sudo tee /etc/iptables/rules.v4
    

    For Debian/Ubuntu, you can install iptables-persistent to automatically apply rules at startup:

    sudo apt-get install iptables-persistent
    
  9. Verify iptables Rules:
    Ensure that all rules have been applied correctly.

    sudo iptables -L -v
    

Implementing a Strict Policy:
By default, block all unwanted traffic and allow only necessary connections. This "deny all that is not explicitly allowed" approach is a best security practice. For example:

Restricting Rules to Specific IP Addresses:
To allow SSH access only from your IP address, use the following command, replacing your_ip_address with your actual IP address.

sudo iptables -A INPUT -p tcp --dport 22 -s your_ip_address -j ACCEPT

Regularly Update Firewall Rules: As your network needs change and new services are added, remember to update your firewall rules to reflect these changes.

Performance Considerations: While blocking ports directly does not increase server performance, it contributes to the system's stability and reliability. Reducing the number of active services listening on open ports decreases the likelihood of abuse that could deplete server resources such as memory and CPU time.

Additionally, restricting access to ports helps prevent the spread of malware and "insider" attacks, ensuring that even if one internal system is compromised, malware has a harder time spreading to other systems.

Access from the Local Network

Restricting access to internal services and management interfaces so that they are only accessible from the local network is an important security measure. This helps prevent unauthorized access attempts from external attackers. Here's how you can set up such access:

Using Internal IP Addresses: Configure services to accept connections only from internal IP addresses by setting them to listen exclusively on the internal network interface.

For an Nginx server to restrict access to a service only from the local network, you can use the following configuration:

server {
    listen 80;
    server_name internal-service.example.com;

    location / {
        allow 192.168.1.0/24; # Allow access from the local network
        deny all; # Deny access to everyone else
        proxy_pass http://localhost:8080;
    }
}

This example shows how to allow access to the service only for IP addresses within the local network range (192.168.1.0/24), blocking all other requests.

Configuring the Firewall: Use the firewall to block incoming traffic to service ports from outside the local network. This can be done by creating rules that only allow access from specific internal IP addresses.

For a Linux firewall (iptables), the following command sets up a rule that blocks all incoming traffic on port 8080 except traffic from the local network:

sudo iptables -A INPUT -p tcp --dport 8080 -s 192.168.1.0/24 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 8080 -j DROP

These commands add two rules: the first allows incoming traffic on port 8080 from the local network, and the second blocks all other incoming traffic on that port.

VPN for Remote Access: If remote access to services within the local network is necessary, consider using a VPN. This allows remote users to securely connect to the local network through an encrypted connection.

Implementing these measures significantly enhances security by restricting access to critical services and management interfaces to only trusted local networks. This eliminates a wide range of potential threats related to unauthorized access.

Optimizing Container Operations

Docker Configuration for Nginx

Let's take a detailed look at a docker-compose.yml file that configures Nginx within a container alongside Certbot for automatic certificate renewal and Minio as a data storage solution.

version: '3.9'
services:
  nginx:
    image: nginx:latest
    container_name: main_nginx
    command: '/bin/sh -c ''while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g "daemon off;"'''
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certbot/conf/:/etc/letsencrypt/:rw
      - ./certbot/www/:/var/lib/letsencrypt/:rw
    ports:
      - '80:80'
      - '443:443'
    network_mode: host

  certbot:
    image: certbot/certbot
    container_name: main_certbot
    restart: unless-stopped
    volumes:
      - ./certbot/conf/:/etc/letsencrypt/:rw
      - ./certbot/www/:/var/lib/letsencrypt/:rw
    entrypoint: '/bin/sh -c "trap exit TERM; while :; do certbot renew; sleep 48h & wait $${!}; done;"'
    depends_on:
      - nginx

  minio:
    restart: always
    image: minio/minio:latest
    container_name: main_minio
    command: server --console-address ":9001" /data
    ports:
      - '9000:9000'
      - '9090:9001'
    volumes:
      - upload_minio_volume:/data
    env_file:
      - ./.env
    depends_on:
      - nginx

volumes:
  upload_minio_volume:

Explanation:

  • Nginx Service:
    • Image: Uses the latest Nginx image.
    • Container Name: main_nginx.
    • Command: A shell command that reloads Nginx every six hours and keeps it running in the foreground.
    • Volumes:
      • default.conf: Nginx configuration file.
      • certbot/conf/ and certbot/www/: Directories for storing certificates and Certbot data.
    • Ports: Exposes ports 80 and 443 for HTTP and HTTPS traffic.
    • Network Mode: host lets the container share the host's network stack, simplifying configuration and interaction with other services on the host and local network. Note that with network_mode: host the ports: mapping above is ignored, since the container binds directly to the host's ports. Use this mode with caution and keep your firewall rules up to date.
  • Certbot Service:
    • Image: Uses the Certbot image for managing SSL/TLS certificates.
    • Container Name: main_certbot.
    • Restart Policy: Restarts unless stopped.
    • Volumes: Shares the same certificate directories as Nginx for seamless certificate management.
    • Entrypoint: Runs an infinite loop that attempts renewal every 48 hours; certbot renew only replaces certificates that are close to expiry, keeping them up-to-date.
    • Dependencies: Depends on the Nginx service to be up and running.
  • Minio Service:
    • Image: Uses the latest Minio image, a high-performance object storage solution compatible with Amazon S3.
    • Container Name: main_minio.
    • Command: Starts the Minio server with the console accessible on port 9001 and stores data in /data.
    • Ports: Exposes ports 9000 (API access) and 9090 (console access).
    • Volumes: Uses a named volume upload_minio_volume for persistent data storage.
    • Environment Variables: Loaded from a .env file.
    • Dependencies: Depends on the Nginx service to be operational.

Automatic Configuration Updates for Nginx and SSL/TLS Certificates:
The Nginx container reloads its configuration every six hours, while the Certbot service attempts certificate renewal every 48 hours (renewing only certificates that are close to expiry). This setup keeps web services running smoothly and securely with minimal downtime and manual intervention.

Benefits of Using Docker

Using Docker to deploy Nginx and related services is an infrastructure management approach that simplifies the processes of deploying, updating, and maintaining application security. This approach offers several key advantages:

Simplified Deployment and Updating of Nginx and Related Services:

  • Automation and Standardization:
    Docker automates the deployment and updating process of services using Dockerfiles and configurations like docker-compose.yml. This streamlines the preparation and launch of applications, making them reusable and easily scalable.
  • Rapid Updates:
    With Docker, you can effortlessly update and rollback service versions without affecting the main application's operation. This is particularly useful for implementing security updates and patches, minimizing downtime.
  • Environment Consistency:
    Docker ensures uniformity across development, testing, and production environments, reducing the "it works on my machine" syndrome. This guarantees that applications run consistently in any environment.

Ensuring Isolation and Application Security:

  • Resource Isolation:
    Each Docker container operates in isolation from others, with its own file system, network stack, and set of processes. This prevents conflicts between applications and reduces the risk of vulnerabilities affecting multiple applications.
  • Privilege Limitation:
    Docker allows you to restrict container privileges by running them with the minimal set of permissions required. This decreases the risk of compromising the host system or other containers if an application is breached.
  • Network Security:
    Docker provides flexible networking options, including creating isolated networks for containers, offering an additional layer of isolation and security.
  • Secret Management:
    Docker Secrets and other secret management tools enable secure storage and transmission of sensitive data, such as passwords, tokens, and certificates, without embedding them directly into images or application code.

Overall Benefits:
Using Docker to deploy Nginx and related services not only simplifies application and infrastructure management but also significantly enhances the security of deployed applications. It provides a reliable foundation for developing and operating web applications and services in today’s digital environment.

Data Storage with Minio

Storing Data with Minio

Minio is a high-performance, scalable, and Amazon S3-compatible object storage solution. This makes it an ideal choice for centralized data management, providing easy access to files, media, archives, and other types of data for applications and web services. Here's how you can set up and use Minio to meet your needs.

Installation and Launching Minio: The simplest way to run Minio is with Docker. The Minio service is already defined in the docker-compose.yml file, which allows for easy deployment in a container, a quick start, and convenient scaling. All that's left is to specify the login and password in the .env file (typically via the MINIO_ROOT_USER and MINIO_ROOT_PASSWORD variables), which we'll cover later.

Configuration and Bucket Creation: After launching Minio, use its web interface or the command-line tool (mc) to create buckets that will store your data. Buckets in Minio are similar to folders but within the context of cloud storage. You can configure access policies for each bucket, managing who can view or upload content.
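With the mc client, registering the instance and creating a bucket might look like this (the alias, endpoint, credentials, and bucket name are placeholders):

```shell
# Register the Minio instance under a local alias
mc alias set mylocal http://127.0.0.1:9000 ACCESS_KEY SECRET_KEY

# Create a bucket and verify it exists
mc mb mylocal/uploads
mc ls mylocal
```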

Integration with Applications: Minio provides a RESTful API compatible with Amazon S3, making it easy to integrate with most modern applications and frameworks that support S3 operations. Configure your application to work with Minio by specifying the URL endpoint of your Minio instance and providing the necessary access credentials (access key and secret key). A dedicated article has been written on this topic [link to the article].
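Connecting from Python with boto3 might look like the sketch below; the endpoint, credentials, bucket, and file names are placeholders for your own values, and the script assumes a running Minio instance:

```python
import boto3

# Point the S3 client at the local Minio instance instead of AWS
s3 = boto3.client(
    "s3",
    endpoint_url="http://127.0.0.1:9000",    # Minio API port from docker-compose
    aws_access_key_id="ACCESS_KEY",          # e.g., the MINIO_ROOT_USER value
    aws_secret_access_key="SECRET_KEY",      # e.g., the MINIO_ROOT_PASSWORD value
)

# Upload a local file, then list the bucket's contents
s3.upload_file("report.pdf", "uploads", "reports/report.pdf")
for obj in s3.list_objects_v2(Bucket="uploads").get("Contents", []):
    print(obj["Key"])
```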

Ensuring Security: Data security is critical. Ensure that your Minio instance is secured with SSL/TLS to encrypt your data during transmission. Additionally, set up access policies and roles for your buckets and the data within them to minimize the risk of unauthorized access.

Backup and Scaling: Minio supports data replication across multiple Minio instances to ensure high availability and fault tolerance. Set up replication for important buckets to other servers or even different cloud storage providers. Regularly back up your data to prevent loss in case of failures.

Using Minio for centralized data management offers flexibility, scalability, and high performance. Thanks to its compatibility with Amazon S3, Minio can serve as a private cloud storage solution, providing reliable and secure storage for large volumes of data. With proper configuration and management, Minio becomes an indispensable component of your IT infrastructure.

From Theory to Practice

docker-compose.yml

Earlier, we reviewed this file; let's continue without dwelling on it:

version: '3.9'
services:
  nginx:
    image: nginx:latest
    container_name: main_nginx
    command: '/bin/sh -c ''while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g "daemon off;"'''
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certbot/conf/:/etc/letsencrypt/:rw
      - ./certbot/www/:/var/lib/letsencrypt/:rw
    ports:
      - '80:80'
      - '443:443'
    network_mode: host

  certbot:
    image: certbot/certbot
    container_name: main_certbot
    restart: unless-stopped
    volumes:
      - ./certbot/conf/:/etc/letsencrypt/:rw
      - ./certbot/www/:/var/lib/letsencrypt/:rw
    entrypoint: '/bin/sh -c "trap exit TERM; while :; do certbot renew; sleep 48h & wait $${!}; done;"'
    depends_on:
      - nginx

  minio:
    image: minio/minio:latest
    container_name: main_minio
    restart: always
    command: server --console-address ":9001" /data
    ports:
      - '9000:9000'
      - '9090:9001'
    volumes:
      - minio_volume:/data
    env_file:
      - ./.env
    depends_on:
      - nginx

volumes:
  minio_volume:

Nginx Configurations

The following configuration fronts several services, including web applications and Minio.

upstream minio_s3 {    
    server 127.0.0.1:9000;
}
upstream minio_console {
    server 127.0.0.1:9090;
}
upstream web {
    server 127.0.0.1:8000;
}
upstream fallback_web {
    server 192.168.0.10:8000;
}
upstream async {
    server 127.0.0.1:8080;
}
upstream to_my_workstation {
    server 192.168.0.10;
}

# gzip
gzip on;
gzip_vary on;
gzip_comp_level 6;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

server {
    listen                  80;
    listen                  [::]:80;
    server_name             www.domain.ru;

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /var/lib/letsencrypt/;
        try_files $uri =404;
    }

    # A server-level return runs before location matching and would bypass
    # the ACME challenge location above, so the redirect is scoped to "/".
    location / {
        return 301 https://www.domain.ru$request_uri;
    }
}

server {
    server_name             www.domain.ru;
    listen                  443 ssl;
    listen                  [::]:443 ssl;
    http2 on;
    client_max_body_size    50M;
    ssl_certificate         /etc/letsencrypt/live/www.domain.ru/fullchain.pem;
    ssl_certificate_key     /etc/letsencrypt/live/www.domain.ru/privkey.pem;
    include                 /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam             /etc/letsencrypt/ssl-dhparams.pem;
    error_page              502 503 504 @fallback;

    location / {
        proxy_pass             http://web;
        proxy_set_header       Host                $host;
        proxy_set_header       X-Real-IP           $remote_addr;
        proxy_set_header       X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header       X-Forwarded-Proto   $scheme;
    }

    location @fallback {
        proxy_pass             http://fallback_web;
        proxy_set_header       Host                $host;
        proxy_set_header       X-Real-IP           $remote_addr;
        proxy_set_header       X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header       X-Forwarded-Proto   $scheme;
    }
    
    # This block routes WebSocket traffic to an application running in an
    # asynchronous environment, but it can point to any backend that suits
    # your requirements.
    location /ws/ {
        proxy_pass             http://async;
        proxy_set_header       Upgrade             $http_upgrade;
        proxy_set_header       Connection          "upgrade";
        proxy_set_header       Host                $server_name;
        proxy_set_header       X-Real-IP           $remote_addr;
        proxy_set_header       X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header       X-Forwarded-Proto   $scheme;
        proxy_read_timeout     86400;
    }
}

server {
    listen                  80;
    listen                  [::]:80;
    server_name             aws.domain.ru;

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /var/lib/letsencrypt/;
        try_files $uri =404;
    }

    # Scoped to a location so the ACME challenge above stays reachable.
    location / {
        return 301 https://aws.domain.ru$request_uri;
    }
}

server {
    server_name             aws.domain.ru;
    listen                  443 ssl reuseport;
    listen                  [::]:443 ssl reuseport;
    ssl_certificate         /etc/letsencrypt/live/aws.domain.ru/fullchain.pem;
    ssl_certificate_key     /etc/letsencrypt/live/aws.domain.ru/privkey.pem;
    include                 /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam             /etc/letsencrypt/ssl-dhparams.pem;
    ignore_invalid_headers  off;
    client_max_body_size    0;
    proxy_buffering         off;
    proxy_request_buffering off;

    location / {
        proxy_set_header       Host                $host;
        proxy_set_header       X-Real-IP           $remote_addr;
        proxy_set_header       X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header       X-Forwarded-Proto   $scheme;
        proxy_connect_timeout  300;
        proxy_http_version     1.1;
        proxy_set_header       Connection          "";
        chunked_transfer_encoding off;
        proxy_pass             http://minio_s3;
    }
}

server {
    listen                  80;
    listen                  [::]:80;
    server_name             console.domain.ru;

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /var/lib/letsencrypt/;
        try_files $uri =404;
    }

    # Scoped to a location so the ACME challenge above stays reachable.
    location / {
        return 301 https://console.domain.ru$request_uri;
    }
}

server {
    server_name             console.domain.ru;
    listen                  443 ssl;
    listen                  [::]:443 ssl;
    ssl_certificate         /etc/letsencrypt/live/console.domain.ru/fullchain.pem;
    ssl_certificate_key     /etc/letsencrypt/live/console.domain.ru/privkey.pem;
    include                 /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam             /etc/letsencrypt/ssl-dhparams.pem;
    ignore_invalid_headers  off;
    client_max_body_size    0;
    proxy_buffering         off;
    proxy_request_buffering off;

    location / {
        proxy_set_header       Host                $host;
        proxy_set_header       X-Real-IP           $remote_addr;
        proxy_set_header       X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header       X-Forwarded-Proto   $scheme;
        proxy_set_header       X-NginX-Proxy       true;      
        real_ip_header         X-Real-IP;
        proxy_connect_timeout  300;
        proxy_http_version     1.1;
        proxy_set_header       Upgrade             $http_upgrade;
        proxy_set_header       Connection          "upgrade";
        chunked_transfer_encoding off;
        proxy_pass             http://minio_console;
    }
}

server {
    listen                  80;
    listen                  [::]:80;
    server_name             dev.domain.ru;

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /var/lib/letsencrypt/;
        try_files $uri =404;
    }

    # Scoped to a location so the ACME challenge above stays reachable.
    location / {
        return 301 https://dev.domain.ru$request_uri;
    }
}

server {
    server_name             dev.domain.ru;
    listen                  443 ssl;
    listen                  [::]:443 ssl;
    ssl_certificate         /etc/letsencrypt/live/dev.domain.ru/fullchain.pem;
    ssl_certificate_key     /etc/letsencrypt/live/dev.domain.ru/privkey.pem;
    include                 /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam             /etc/letsencrypt/ssl-dhparams.pem;
    
    # This configuration redirects all traffic to the workstation (your computer),
    # where Nginx handles it on port 80 without using SSL.
    # Access to this path is allowed only for devices from the local network and through
    # your internet IP address, ensuring protection against unauthorized access and scanning.
    location / {
        proxy_pass             http://to_my_workstation;
        
        allow                  192.168.0.0/24;  # Access allowed only for local network devices.
        allow                  ##.##.##.##;     # Your global IP address. If working from a local network, it matches the domain's IP.
        deny                   all;             # Deny access to everyone else.
        proxy_set_header       Host                $host;
        proxy_set_header       X-Real-IP           $remote_addr;
        proxy_set_header       X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header       X-Forwarded-Proto   $scheme;
    }
    
    # This block proxies requests to an internal server for a bot, such as a
    # Telegram bot. Security relies on a secret token embedded in the bot's
    # webhook URL, which lets the application reject unauthorized requests,
    # so no IP filtering is needed at the Nginx level.
    location /bot/ {
        # The workstation address is used directly: proxy_pass cannot
        # combine an upstream name with an explicit port.
        proxy_pass             http://192.168.0.10:8090;
        proxy_set_header       Host                $host;
        proxy_set_header       X-Real-IP           $remote_addr;
        proxy_set_header       X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header       X-Forwarded-Proto   $scheme;
    }
    
    # This block proxies WebSocket connections directly to the application on the workstation.
    # Optionally, you can also restrict access from the global network.
    location /ws/ {
        # The workstation address is used directly: proxy_pass cannot
        # combine an upstream name with an explicit port.
        proxy_pass             http://192.168.0.10:8080;
        proxy_set_header       Upgrade             $http_upgrade;
        proxy_set_header       Connection          "upgrade";
        proxy_set_header       Host                $server_name;
        proxy_set_header       X-Real-IP           $remote_addr;
        proxy_set_header       X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header       X-Forwarded-Proto   $scheme;
        proxy_read_timeout     86400;
    }
}

Let's delve into the key aspects of this configuration and how it ensures effective traffic management and security:

Upstream Blocks: Several upstream blocks are defined, each representing a group of servers to which Nginx can forward requests. These blocks include endpoints for the main web application, its fallback, asynchronous services, Minio S3 and Minio Console, as well as the development workstation.
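Upstream groups can also carry balancing hints. As a sketch, the web upstream could be tuned like this; the weights, failure thresholds, and backup flag are illustrative assumptions, not part of the configuration above:

```nginx
upstream web {
    server 127.0.0.1:8000 weight=3 max_fails=2 fail_timeout=10s;
    server 192.168.0.10:8000 backup;  # used only when the primary is down
    keepalive 16;                     # reuse connections to the backend
}
```

Note that keepalive only takes effect when the proxying location also sets proxy_http_version 1.1 and clears the Connection header with proxy_set_header Connection "".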

Gzip Configuration: Enabling Gzip compression for specific content types helps reduce the size of transmitted data, increasing page load speeds and improving overall performance.
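Two commonly added directives can complement the gzip block shown above; the threshold value here is an illustrative assumption:

```nginx
gzip_min_length 1024;  # skip tiny responses, where overhead outweighs savings
gzip_proxied any;      # also compress responses to proxied requests
```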

HTTP to HTTPS Redirection: The configuration includes automatic redirection of all HTTP requests to HTTPS, enhancing security by using encrypted connections.
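If you prefer not to repeat a port-80 block per subdomain, a catch-all variant is possible; this is a sketch, and the ACME challenge location must remain reachable before the redirect:

```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    location ^~ /.well-known/acme-challenge/ {
        root /var/lib/letsencrypt/;
        try_files $uri =404;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
```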

SSL/TLS Settings: Each server block specifies the paths to its SSL certificate and key, and HTTP/2 is enabled on the main site (http2 on) to improve performance.
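The included options-ssl-nginx.conf file is generated by Certbot; its contents typically look roughly like the following (an approximation of Certbot's recommended defaults, not a file taken from this setup):

```nginx
ssl_session_cache shared:le_nginx_SSL:10m;
ssl_session_timeout 1440m;
ssl_session_tickets off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
```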

Error Handling and Fallback Redirection: In case of a 502, 503, or 504 error, requests are redirected to a fallback server via the @fallback named location, ensuring higher availability of the application.
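An alternative (or complement) to the error_page fallback is to retry the next server within a single upstream group; a sketch, assuming the upstream contains more than one server:

```nginx
location / {
    proxy_pass http://web;
    # Try the next upstream server when the current one errors or times out.
    proxy_next_upstream error timeout http_502 http_503 http_504;
    proxy_next_upstream_tries 2;
}
```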

Request Proxying: Requests are proxied to internal services, including the web application, asynchronous services, and Minio. For each location block, appropriate headers and proxy parameters are set.

Interaction with Minio: Separate server blocks proxy requests to Minio S3 and the Minio Console for serving static content and media files. This allows Minio to act as centralized data storage, providing easy access to media and static assets.
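If you want Nginx to expose a single bucket under a friendly path, a rewrite in front of the S3 upstream works; here static-bucket is an assumed bucket name for illustration:

```nginx
location /static/ {
    # Map /static/foo.css to /static-bucket/foo.css inside Minio.
    rewrite ^/static/(.*)$ /static-bucket/$1 break;
    proxy_set_header Host $host;
    proxy_pass http://minio_s3;
}
```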

This Nginx configuration demonstrates the flexibility of Nginx as a reverse proxy and static content server. Utilizing upstream blocks to define server groups, setting up HTTP to HTTPS redirection, configuring SSL/TLS, and integrating with Minio for serving static content and media enable the creation of an efficient, secure, and high-performance infrastructure for web applications and services.

.env

# Let's Encrypt
EMAIL=username@gmail.com
# Set to 1 if you're testing your setup to avoid hitting request limits
STAGING=0
DOMAIN=domain.ru

# Minio Console
MINIO_ROOT_USER='secret-long-login'        # Used as the login for accessing the Minio console
MINIO_ROOT_PASSWORD='very-secret-long-pass' # Used as the password for accessing the Minio console

Minio Login and Password Initialization: the Minio credentials are read from the .env file when the container is first deployed.

init-letsencrypt.sh

The init-letsencrypt.sh script is designed to automate the process of obtaining and renewing SSL/TLS certificates from Let's Encrypt for use in your infrastructure, including Nginx and Minio. This script significantly simplifies the setup of secure connections by automating steps that typically require manual intervention. Here's how it works:

Loading Environment Variables: The script begins by loading variables from the .env file, allowing easy adaptation to different configurations without needing to modify the script itself.
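The variable-loading step can be sketched in a couple of lines, assuming a flat KEY=VALUE .env file like the one shown above (a demo file is created here for illustration):

```shell
#!/bin/sh
# Create a sample .env for demonstration, then export everything in it.
printf 'DOMAIN=domain.ru\nEMAIL=username@gmail.com\n' > /tmp/demo.env

set -a            # mark every subsequently assigned variable for export
. /tmp/demo.env   # source the file; each line becomes an env variable
set +a

echo "$DOMAIN"    # prints: domain.ru
```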

Defining Domains: Next, the script defines the domains for the certificates using the DOMAIN variable from the .env file, ensuring flexibility when working with various domains.

Checking and Preparing Directories: The script checks for the existence of pre-created directories for certificates and, if necessary, creates them. It also downloads recommended TLS parameters from Certbot.

Creating a Dummy Certificate: To ensure Nginx can successfully start before obtaining the actual certificate, the script creates a dummy certificate. This temporary solution helps avoid errors during startup.
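The dummy-certificate step boils down to a single openssl invocation; a sketch with an assumed demo path:

```shell
#!/bin/sh
# Generate a throwaway self-signed certificate so Nginx can bind port 443
# before the real Let's Encrypt certificate exists.
CERT_DIR=/tmp/demo-live/www.domain.ru
mkdir -p "$CERT_DIR"

openssl req -x509 -nodes -newkey rsa:2048 -days 1 \
    -keyout "$CERT_DIR/privkey.pem" \
    -out "$CERT_DIR/fullchain.pem" \
    -subj '/CN=localhost'

# Both files Nginx expects (privkey.pem, fullchain.pem) are now in place.
ls "$CERT_DIR"
```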

Starting and Reloading Nginx: The script starts the Nginx container to ensure the web server is accessible for domain verification by Certbot.

Obtaining and Renewing Certificates: Then, the script runs the Certbot container for each domain, using the web root for verification, and requests new certificates. If certificates already exist, the script renews them.
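The per-domain request the script issues can be sketched as follows; the exact flags are assumptions based on Certbot's standard webroot workflow, not the literal script:

```shell
#!/bin/sh
# Compose the certbot arguments for one domain using the webroot plugin.
DOMAIN="domain.ru"
EMAIL="username@gmail.com"

certbot_args() {
  printf 'certonly --webroot -w /var/lib/letsencrypt --email %s -d %s --agree-tos --non-interactive' \
      "$EMAIL" "$DOMAIN"
}

# In the real setup this would run inside the certbot container, e.g.:
#   docker compose run --rm certbot $(certbot_args)
echo "certbot $(certbot_args)"
```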

How to Use the Script:

  1. Preparation: Ensure that Docker and Docker Compose are installed, and that you have superuser privileges to execute the script.
  2. Preparing Nginx Configuration: To successfully run the script and obtain Let's Encrypt certificates, temporarily comment out all SSL-related lines in the Nginx configuration (port 443) and SSL certificate paths. This is necessary for Certbot to perform domain verification via HTTP.

    # listen 443 ssl;
    # listen [::]:443 ssl;
    # ssl_certificate /etc/letsencrypt/live/dev.domain.ru/fullchain.pem;
    # ssl_certificate_key /etc/letsencrypt/live/dev.domain.ru/privkey.pem;
    # include /etc/letsencrypt/options-ssl-nginx.conf;
    # ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    
  3. Configuring the .env File:
    Populate the .env file with your data, including EMAIL and DOMAIN, as well as Minio credentials if necessary.
  4. Running the Script:
    Execute the script with the command:

    sudo ./init-letsencrypt.sh
    

    The script will guide you through the process of obtaining certificates and configuring Nginx.

  5. Activating SSL in Nginx: After successfully obtaining the certificates and completing the script, return to the Nginx configuration file and uncomment the previously commented SSL-related lines. This will enable secure connections for your web server.
  6. Starting the Server: Apply the changes by restarting Nginx. Run the following command to rebuild and launch the containers with the updated configuration:

    docker-compose up --build
    

This approach allows you to easily and securely manage certificates for your infrastructure, simplifying deployment and ensuring the security of data transmission.

This comprehensive guide has walked you through setting up a secure and efficient web infrastructure using Nginx, Docker, Certbot, and Minio. From configuring firewall rules to managing SSL/TLS certificates and integrating object storage, each step ensures that your applications are not only performant but also secure and scalable. By following these best practices and utilizing the provided configurations, you can build a robust IT environment capable of handling modern web demands while maintaining high standards of security and reliability.

