
How to set up Nginx for optimized traffic load?

Nginx is a web server that generally handles heavy traffic and request management more efficiently than Apache, thanks to its event-driven architecture. To enable a site we just need to create a symbolic link to its configuration file in /etc/nginx/sites-enabled/ (for example with ln -s). But first we need to understand the structure of Nginx.

Structure of Nginx

The structure of Nginx covers several things: server blocks, include directives for pulling in additional configuration files, server names, the document root, and reverse proxies.
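
As a minimal sketch of how these pieces fit together (the domain, document root, and backend port below are placeholders, not values from this article), a typical server block looks like this:

server {
    listen      80;
    server_name example.com;        # server name (placeholder domain)
    root        /var/www/example;   # document root (placeholder path)

    # reverse proxy: pass requests under /api/ to a backend application
    location /api/ {
        proxy_pass http://127.0.0.1:3000;
    }
}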

Read more about the Nginx structure in detail.

Performance Optimizations

For Nginx performance optimizations, we need to look at these parameter configurations:

  • Workers
  • Network activity
  • Buffers
  • Timeout
  • Disk I/O activity
  • Compression
  • Caching

We’ll go through each of them one by one, looking at why its value needs changing and what the value should be for a given requirement.

Workers

Worker behaviour is configured in the main context and the events block of the nginx.conf file. To edit nginx.conf you’ll need to run nano /etc/nginx/nginx.conf in the terminal and add the following directives (worker_processes and worker_rlimit_nofile go in the main context; the rest belong inside the events block):

worker_processes     auto;
worker_rlimit_nofile 20960;

events {
    worker_connections 1024;
    multi_accept       on;
    accept_mutex       on;
    accept_mutex_delay 500ms;
    use                epoll;
    epoll_events       512;
}

Here worker_processes defines the number of worker processes Nginx runs. Setting the value to auto lets Nginx determine the number by itself, normally one worker per available CPU core.

worker_connections defines the number of simultaneous connections that can be handled by one worker process. The default value is 512; it is commonly raised to 1024 or more, as long as it stays within the open-file limit. As a rough rule, the maximum number of clients is worker_processes × worker_connections.

worker_rlimit_nofile is related to worker_connections: it raises the limit on the number of open files (and therefore sockets) per worker process, so it is set to a large number in order to handle a large number of simultaneous connections.

multi_accept allows a single worker to accept all the connections waiting in the listen queue at once, instead of one per event-loop iteration.

accept_mutex makes the workers take turns accepting new connections instead of all waking up for every one. It is turned off by default, but we turn it on here because we have configured the workers to handle many connections.

accept_mutex_delay tells a worker how long it should wait before trying to accept new connections again while another worker holds the mutex.

use determines the method for processing incoming connections. We set this to epoll because we are working on Ubuntu, and epoll is the efficient choice on modern Linux kernels.

epoll_events determines the number of events Nginx exchanges with the kernel in a single call.

Network activity

In this section, we will talk about tcp_nodelay and tcp_nopush. These two directives control how small packets are sent, so that data is not held back (for up to roughly 200 ms) waiting to be combined into larger packets.

Code needed in HTTP section:

http {

    sendfile    on;   # tcp_nopush only takes effect when sendfile is enabled
    tcp_nopush  on;
    tcp_nodelay on;

}

tcp_nodelay is on by default in Nginx. It disables Nagle’s algorithm, so packets are sent as soon as data is available rather than being held back until a full packet can be assembled.

tcp_nopush works together with sendfile. It tells the kernel to hold back partial packets so that response headers and the start of a file are sent together in full packets, instead of relying on Nagle’s buffering algorithm.

Buffers

A buffer is a temporary space where data is kept for some time before it is processed. To enable and configure buffering, these directives are added to the server block:

server {

    client_body_buffer_size      16k;
    client_max_body_size         2m;
    client_body_in_single_buffer on;
    client_body_temp_path        temp_files 1 2;   # directory for temporary request-body files
    client_header_buffer_size    1m;
    large_client_header_buffers  4 8k;

}

client_body_buffer_size sets the buffer size for the request body. The default is 8k on a 32-bit system and 16k on a 64-bit system.

client_max_body_size sets the upload (request body) size limit. By default it is 1m; set it according to the uploads you expect to receive.

client_body_in_single_buffer makes sure the complete request body is kept in a single buffer; otherwise part of it may be written out to the file system.

client_header_buffer_size allocates the buffer used for reading the request header. The default is 1k, which is enough for most requests.

large_client_header_buffers sets the maximum number and size of the buffers used for reading large request headers.

Timeout

Timeout directives are useful for stopping long-running connections from tying up resources. For timeouts we can add the following to the http block:

http {

    keepalive_timeout  30s;
    keepalive_requests 30;
    send_timeout       30s;

}

keepalive_timeout sets how long an idle keep-alive connection stays open. The default value is 75s.

keepalive_requests specifies the maximum number of requests that can be served over a single keep-alive connection.

send_timeout specifies the timeout for transmitting a response to the client; it applies between two successive write operations, not to the whole transfer.

Disk I/O activity

Using the disk I/O directives we configure asynchronous operations to improve data transfer and make caching more effective. When talking about disk I/O, we are mainly talking about read and write operations between the hard disk and RAM. We can add the following to the configuration:

location /pdf/ {
    sendfile on;
    aio      on;
}

location /audio/ {
    directio           4m;
    directio_alignment 512;
}

sendfile lets the operating system copy file data straight from disk to the network socket, skipping user-space buffers; it is most useful for serving small static files. Its value should be on when we want to hand this work to the operating system.

aio is used to handle asynchronous operations. When turned on, it enables asynchronous file I/O, so a worker is not blocked while waiting on the disk for read and write operations.
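
As a related sketch (not from this article): blocking file reads can also be offloaded to a thread pool, provided Nginx was built with thread-pool support. The pool name and sizes below are the documented defaults, and the location path is a placeholder:

# main context: define a thread pool (these values match the documented defaults)
thread_pool default threads=32 max_queue=65536;

http {
    server {
        # placeholder location for large static downloads
        location /downloads/ {
            sendfile on;
            aio      threads=default;   # hand blocking file I/O to the pool
        }
    }
}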

directio makes reads of files larger than the given size bypass the operating system cache and go directly to disk, which keeps large files from pushing more useful data out of the cache and so improves its effectiveness.

directio_alignment is related to directio. It sets the block alignment used for direct I/O data transfers.

Compression

Another way of improving web server performance is to compress the data being transferred over the network, reducing its size. For compression, you can use this configuration:

http {  

  gzip on;
  gzip_comp_level  2;
  gzip_min_length  1000; 
  gzip_types  text/xml text/css; 
  gzip_http_version 1.1; 
  gzip_vary  on;  
  gzip_disable "MSIE [4-6] \."; 

}

gzip tells Nginx to enable compression. By default it is turned off.

gzip_comp_level sets the compression level. If the value is set too high, CPU resources are wasted for little extra gain; the value should be 2 or 3.

gzip_min_length sets the minimum response length, in bytes, for a response to be compressed.

gzip_types determines which response MIME types you want to compress. text/html is compressed by default; other types need to be added explicitly.

gzip_http_version is the minimum HTTP version of a request required for its response to be compressed.

gzip_vary adds the header Vary: Accept-Encoding to the response, so caches keep compressed and uncompressed copies separate.

gzip_disable turns compression off for clients or browsers that do not support gzip, such as IE6, by matching the given regular expression against the User-Agent header.

Caching

Caching helps avoid reading the same resource from disk multiple times for the same data. To enable caching we can add:

http {

    open_file_cache          max=1000 inactive=30s;
    open_file_cache_valid    30s;
    open_file_cache_min_uses 4;
    open_file_cache_errors   on;

}

open_file_cache enables this cache in Nginx. It stores metadata about files and directories, such as open file descriptors and existence information, rather than the file contents themselves.

open_file_cache_valid sets how often the cached information about files and directories is revalidated.

open_file_cache_min_uses sets how many times a file must be accessed during the inactive period for its entry to stay in the cache; entries used less often are removed.

open_file_cache_errors caches file-lookup errors, such as a file that is missing or that the user has no permission to access, so repeated failing requests do not hit the file system every time.

Bonus

In Nginx, or any other web server, when setting up a project or serving an app we should always disable directory indexing. This stops attackers from browsing listings of our files and returns an error instead of a listing when a directory has no index file.

To achieve this you can use:

location / {
    autoindex off;
}
