NGINX Configuration Beginner’s Guide

NGINX is a powerful web server that also doubles as a reverse proxy and load balancer. It’s often chosen by those who want faster page loads and reliable traffic handling. This guide breaks down the basics of NGINX server configuration, giving you a roadmap for hosting everything from personal portfolios to high-traffic applications.

You’ll see how to configure NGINX with directives, HTTP blocks, server blocks, and location blocks. Each piece matters in its own way. By understanding these core elements, you’ll be able to create a well-structured NGINX server configuration that’s easy to maintain. If you’re new to NGINX, expect to pick up best practices along the way.

Wondering how to serve multiple domains or optimize performance? Curious about using location blocks for specific paths or files? By the time you finish, you’ll have a clear idea of how to make NGINX work in your favor. Let’s get started.

What Is NGINX?

What does NGINX do? NGINX is open-source software designed to handle large numbers of concurrent connections without dragging its feet. It manages everything from static file delivery to load balancing for busy websites. Many people know it as a web server, but it also functions as a reverse proxy, which is handy when you need to shield back-end services from direct public access.

Apache has been a traditional choice for web hosting, but it processes requests in a more process-driven way. That approach can become heavy with spiking traffic. By contrast, NGINX uses an event-driven design to serve more clients with fewer resources. This often results in better performance and lower memory usage.

You’ll see it used by small businesses, big brands, and folks who run streaming services. Why? Because it’s flexible. For example, you can offload SSL processing to NGINX and pass unencrypted traffic to back-end servers. You can also distribute incoming requests to multiple servers behind the scenes.

Essentially, what is NGINX? It’s an engine that tackles concurrency at scale, serves static files efficiently, and smoothly manages complex workloads.

NGINX Directives

NGINX relies on directives to do its job. Each directive is an instruction that tells the server how to behave. You’ll typically find them in the NGINX configuration file, usually at /etc/nginx/nginx.conf on Linux. The exact location may vary depending on your distribution (Ubuntu, CentOS, Arch Linux, etc.) and installation method. Simple directives take the form directive_name value; and always end with a semicolon; block directives such as http or server group other directives inside curly braces instead.

You’ll see NGINX configuration directives in different contexts. Some reside in the main (global) scope, which applies across the entire server. Others belong in the http block, the server block, or the location block. A few NGINX global directives define user permissions or worker processes, giving you a broad framework. Here’s a small example:

user nginx;
worker_processes auto;
pid /run/nginx.pid;

Those lines clarify who’s running NGINX, how many worker processes should start, and where to store the PID file. Below these, you’ll often see http { ... } for HTTP-specific settings.

Inside http, you may add directives for compression, caching, or logging. Then you’ll have server and location blocks for domain-specific or path-specific instructions. The nesting matters. If you put something in the http block, it’s shared by every server block unless overridden at a lower level. Keeping track of this hierarchy will help you organize your files with fewer errors.
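Putting that hierarchy together, a stripped-down nginx.conf nests like this (the paths and values here are illustrative, not required):

```nginx
# Main (global) context
user nginx;
worker_processes auto;

events {
    worker_connections 1024;    # required block: connection limits per worker
}

http {                          # HTTP-wide settings
    sendfile on;

    server {                    # one virtual host
        listen 80;
        server_name example.com;

        location / {            # path-specific rules
            root /var/www/example;
        }
    }
}
```

Anything set in http applies to every server inside it; anything set in a server applies to every location inside it, unless overridden.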

HTTP Block

In most setups, the NGINX HTTP block exists in the main nginx.conf or a file that’s pulled in with an include statement. This block decides how HTTP traffic is handled before your server or location blocks refine the details.

Typical directives in this section might set the default content type, manage connection timeouts, or turn on file compression. For example:

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile       on;
    keepalive_timeout  65;
}

Here, include mime.types; helps NGINX figure out file types (like .html and .css). The sendfile on; directive uses an OS-based mechanism to streamline file transfers. If you want to enable gzip, you’d do it here too.

Think of this as a broad canvas. Any setting defined in the HTTP block applies globally, unless a deeper block or directive overrides it.
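As one example of such a global setting, a minimal gzip setup inside the http block might look like this (the compression level, minimum length, and MIME types are illustrative choices, not required values):

```nginx
http {
    gzip on;
    gzip_comp_level 5;        # moderate CPU cost for a decent ratio
    gzip_min_length 256;      # skip compressing tiny responses
    gzip_types text/css application/javascript application/json;
    # text/html is always compressed once gzip is on
}
```

Every server block inherits this unless it sets its own gzip directives.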

Server Blocks

If you plan to host multiple sites on a single NGINX instance, NGINX server blocks are your friend. Each block defines how a particular site or domain behaves. At a basic level, a server block might look like this:

server {
    listen 80;
    server_name example.com www.example.com;

    root /var/www/example;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

Here, you specify the port (listen 80;) and the domain names. You also decide where your site’s files live. This is often called “virtual hosting.” You can place multiple server blocks in one file or split them across separate files that NGINX includes. Changes not showing up? See the FAQs for tips.

SSL? No problem. Just use listen 443 ssl; with your certificate and key settings inside the same block. You’ll also have location blocks for different URLs, letting you fine-tune your site’s structure. For instance, you might forward requests to a back-end application at /api/ while serving static files under /.
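A sketch of that combination, with one block terminating TLS and splitting traffic between static files and a back-end app (the certificate paths and the upstream port are placeholders you’d replace with your own):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;   # placeholder path
    ssl_certificate_key /etc/ssl/private/example.com.key; # placeholder path

    root /var/www/example;

    location /api/ {
        proxy_pass http://127.0.0.1:3000/;  # hypothetical back-end app
    }

    location / {
        try_files $uri $uri/ =404;
    }
}
```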

Server blocks keep each site neatly contained, making it straightforward to manage multiple domains on the same physical or virtual server.

Location Blocks

Within a server block, NGINX location blocks spell out how certain paths or file patterns are treated. You might use them to serve images, forward PHP requests to a fastcgi process, or even deny access to hidden files. For example:

location /images/ {
    alias /var/data/images/;
}

location ~ \.php$ {
    fastcgi_pass unix:/run/php/php7.4-fpm.sock;
    include fastcgi_params;
}

The first location hands off everything under /images/ to a directory at /var/data/images/. The second uses a regular expression (~ \.php$) to match .php files and pass them to a PHP-FPM socket.

NGINX uses a priority system for location blocks. Exact matches (like location = /hello.html) beat more general ones. Regex matches take precedence over plain prefix matches, unless the winning prefix is marked with ^~, which skips the regex phase entirely. Nested locations let you go deeper if you must tweak behavior for subpaths.

NGINX follows a specific order when matching location blocks:

  1. Exact match (=): location = /path
  2. Preferential prefix match (^~): location ^~ /path
  3. Regular expression match (~, ~*): location ~ \.php$
  4. Prefix match: location /path

NGINX first checks for exact matches. If none exist, it remembers the longest prefix match and then checks regex matches. If a regex matches, NGINX uses that location; otherwise, it uses the stored prefix match. Understanding this priority system helps prevent unexpected behavior in your configuration.
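To see that order in action, consider these blocks (the paths are illustrative); the comments note which one wins for a given request:

```nginx
location = /about {            # 1. exact match: only /about itself
    return 200 "exact";
}

location ^~ /static/ {         # 2. preferential prefix: regex phase is
    root /var/www;             #    skipped for anything under /static/
}

location ~ \.php$ {            # 3. regex: beats plain prefix matches
    fastcgi_pass unix:/run/php/php-fpm.sock;
    include fastcgi_params;
}

location / {                   # 4. plain prefix: the fallback
    try_files $uri $uri/ =404;
}
```

A request for /static/app.php lands in the ^~ block rather than the regex block, because ^~ stops regex evaluation; a request for /index.php lands in the regex block.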

Not sure which location block is in effect? You can check the overall config by running nginx -T, then scanning for your domain and path. Also, consider using nginx -t to confirm you’ve got the syntax right before reloading.

Using Location Root and Index

Sometimes you’ll see an NGINX location directive with root or alias. This tells NGINX where to find the actual files. For instance:

location / {
    root /var/www/mywebsite;
    index index.html index.htm;
}

When someone visits your domain, NGINX looks in /var/www/mywebsite for index.html or index.htm. If neither exists, you might set a custom error page or rely on the default 404.

alias is slightly different from root. With root, NGINX appends the full request URI to the directory path. With alias, the part of the URI matched by the location prefix is replaced by the alias path. Either approach works if you plan carefully. Just be sure you understand how the files line up, so requests map correctly to the underlying filesystem.
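A side-by-side sketch makes the difference concrete (the two blocks are shown separately; for a given prefix you’d pick one or the other, and the directory names are illustrative):

```nginx
# root: the full request URI is appended to the path.
# GET /downloads/file.pdf  ->  /var/www/site/downloads/file.pdf
location /downloads/ {
    root /var/www/site;
}

# alias: the matched prefix is replaced by the path.
# GET /downloads/file.pdf  ->  /srv/files/file.pdf
location /downloads/ {
    alias /srv/files/;
}
```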

This straightforward arrangement helps you avoid headaches. If you see unexpected 404s, double-check which directive you used and confirm your file paths match reality. See the FAQs for more information.

Listening Ports

By default, NGINX listens on port 80 for HTTP and port 443 for HTTPS. That’s defined with the NGINX listen directive. For example:

server {
    listen 80;
    server_name mysite.org;
    ...
}

You can also enable more than one NGINX listen port:

listen 80;
listen 8080;

If you have multiple network interfaces, you might specify an IP with:

listen 192.168.1.10:80;

Remember to adjust your firewall rules so traffic to these ports isn’t blocked. That way, visitors can reach your content no matter which port they’re using.
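If your server also has IPv6 connectivity, you can listen on both stacks. One common pattern (a sketch, not a required setup) also marks a block as default_server so it catches requests that match no server_name:

```nginx
server {
    listen 80 default_server;        # catch-all for unmatched hosts (IPv4)
    listen [::]:80 default_server;   # same, for IPv6
    server_name _;
    return 444;                      # close connections with unknown Host headers
}
```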

Server_name Directive

The NGINX server name directive tells a server block which domain(s) or subdomains it should handle:

server_name example.org www.example.org;

This block will respond to requests for either domain. Wildcards are also allowed:

server_name *.example.net;

That matches any subdomain. You can even go more advanced and use regex, though that’s less common. For example:

server_name ~^(?<subdom>.+)\.example\.com$;

Keep your setup clear. If you have many subdomains, a wildcard often simplifies things. Always confirm that DNS settings direct traffic to your server’s IP address.
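One handy trick with the regex form shown above: the named capture becomes a variable you can reuse inside the block. The directory layout here is hypothetical:

```nginx
server {
    listen 80;
    server_name ~^(?<subdom>.+)\.example\.com$;

    # e.g. blog.example.com serves files from /var/www/subdomains/blog
    root /var/www/subdomains/$subdom;
    index index.html;
}
```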

NGINX Reverse Proxy

An NGINX reverse proxy works by receiving traffic and passing it to another service. This can reduce direct exposure of your back-end app and let NGINX handle SSL or rate limiting. A basic NGINX reverse proxy config might look like this:

server {
    listen 80;
    server_name proxydemo.com;

    location / {
        proxy_pass http://127.0.0.1:4000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

In this NGINX reverse proxy example, incoming traffic to proxydemo.com gets forwarded to http://127.0.0.1:4000/. The extra headers help the target app see the original host and IP address. If you need load balancing, you can define multiple servers:

upstream myapp {
    server 192.168.1.50:4000;
    server 192.168.1.51:4000;
}

server {
    listen 80;
    server_name balancedsite.com;

    location / {
        proxy_pass http://myapp;
    }
}

When setting up load balancing, remember to place any keepalive directive after your load-balancing algorithm:

upstream myapp {
    server 192.168.1.50:4000;
    server 192.168.1.51:4000;
    least_conn;   # load-balancing algorithm
    keepalive 32; # keepalive connections
}

This NGINX reverse proxy setup distributes requests across the servers listed in upstream, improving performance and availability. This is a simple way to handle scaling without rewriting your core application.
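The upstream block also accepts per-server parameters. For instance, you can weight traffic toward a stronger machine and keep a spare that is used only when the others fail (the addresses and values here are illustrative):

```nginx
upstream myapp {
    server 192.168.1.50:4000 weight=3;   # receives roughly 3x the requests
    server 192.168.1.51:4000;            # default weight is 1
    server 192.168.1.52:4000 backup;     # used only if the others are down
    least_conn;                          # algorithm before keepalive, as above
    keepalive 32;
}
```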

NGINX Configuration FAQ

Why aren’t my changes showing up?
NGINX only loads its config on startup or after a reload. So after editing, run sudo systemctl reload nginx or sudo nginx -s reload.

How do I see if my NGINX configuration is valid?
Use nginx -t to test for NGINX config errors. If something’s wrong, it’ll pinpoint the file and line number.

Why do my static files return 404?
Check your root or alias paths. A single typo or mismatch between location blocks and directories can lead to missing files.

How do I improve speed?
Turn on sendfile, try gzip on; for compression, and set sensible timeouts. For huge traffic, consider load balancing or caching layers.

Can I test without risking downtime?
It’s best to have a staging environment. If that isn’t possible, apply changes gradually and keep an eye on logs. That way, you catch any problems quickly.

Custom NGINX Configurations on Contabo Servers

When you run NGINX on servers from Contabo, you’ll find a strong platform that can handle different workloads. If you’re on a budget and need flexibility, try a VPS plan with enough RAM and SSD storage to support your apps. Once you’ve spun up the server, installing NGINX is straightforward:

sudo apt update
sudo apt install nginx

Then check its status:

sudo systemctl status nginx

You can fine-tune your config based on the resources you have. For example, if you’re hosting a Node.js application, you’d likely use a reverse proxy setup to offload SSL connections. If you’re running PHP, you might enable fastcgi and create specific location blocks. Keep an eye on memory and CPU usage. When traffic grows, you might scale up to a bigger plan or add more servers to share the load.

Custom NGINX Setup Using Hosting Panels

Hosting panels like cPanel, Plesk, or Webmin can simplify NGINX management. For cPanel, there are add-ons that let you manage NGINX through a user-friendly interface. You can check out our cPanel VPS offerings if you prefer that environment.

Meanwhile, Plesk servers have built-in NGINX support. You can run Apache and NGINX together or rely on NGINX alone. In Webmin, you’ll see a dedicated NGINX module for creating virtual hosts, setting up SSL, or customizing directives. All of these panels aim to make your job easier.

Conclusion

NGINX offers a flexible way to serve content, manage load balancing, and handle reverse proxies. You’ve learned about directives, HTTP blocks, server blocks, location blocks, and how they connect to form a solid NGINX configuration. You’ve also explored listening ports, domain settings, and the benefits of a reverse proxy setup.

If you’re eager to keep going, there’s a world of NGINX performance tuning tips to discover. Try enabling caching or fine-tuning worker processes to get the most out of your hardware. By putting these basics into practice, you’ll build a stable environment for nearly any project that comes your way.