
Optimizing NGINX configuration



NGINX is a fast, lightweight alternative to the heavier Apache2. Like any other web server, though, NGINX needs to be configured correctly to get the best performance out of it.



Before you begin, you will need:

  • A freshly installed and configured Debian or Ubuntu system.
  • An installed and configured NGINX server.
  • An understanding of the basics of Linux administration.

Worker processes and worker connections

The first two directives you need to tune are worker_processes and worker_connections. Let's start with what they are responsible for. worker_processes is fundamental to how NGINX runs: it sets the number of worker processes, each of which binds to the configured IP addresses and ports. The usual rule of thumb is one worker per CPU core. Setting a higher value will not harm the system, but the extra workers will most likely just sit idle.

To determine the optimal value for worker_processes, just look at the number of cores in your system. On a 512 MB DigitalOcean droplet you most likely have a single core. As you scale the system up, check how many cores you have and set worker_processes accordingly.

The following command will tell you:

~~~ {.bash}
grep processor /proc/cpuinfo | wc -l
~~~
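On most modern Linux systems you can get the same answer from nproc, which is part of GNU coreutils:

~~~ {.bash}
# Prints the number of CPU cores available to the process
nproc
~~~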

Suppose it prints 1: that is the number of cores in your system.

The **worker_connections** directive sets the number of simultaneous connections each worker process can serve. The default is 768, but keep in mind that a single browser usually opens at least two connections to a server, so the effective number of clients is roughly half that. You should therefore raise this value toward the maximum the system allows. You can check the kernel's open-file limit with:
~~~ {.bash}
ulimit -n
~~~

On small machines (512 MB of RAM) this is likely to be 1024, which is a reasonable starting value.
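If ulimit -n reports a low limit, NGINX also offers the worker_rlimit_nofile directive, which raises the open-file limit for the worker processes without changing system-wide settings. A minimal sketch; the value 2048 here is an illustrative assumption, not a recommendation:

~~~ {.nginx}
# Raise the per-worker open-file limit so that worker_connections
# is not capped by the default ulimit.
worker_rlimit_nofile 2048;
~~~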

Update our settings

~~~ {.bash}
sudo nano /etc/nginx/nginx.conf
~~~

~~~ {.nginx}
# /etc/nginx/nginx.conf

worker_processes 1;

events {
    worker_connections 1024;
}
~~~

Note that worker_connections belongs inside the events block, while worker_processes sits at the top level of the file.

Remember that the maximum number of clients you can serve is worker_processes multiplied by worker_connections. With the values above that gives 1 × 1024 = 1024 simultaneous connections. Keep in mind that connections held open by the keepalive_timeout directive, covered below, count against this limit.
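The capacity formula can be checked with a quick shell calculation. The 1024 here is the worker_connections value used in this article, not a universal constant:

~~~ {.bash}
# max_clients = worker_processes * worker_connections
workers=$(grep -c processor /proc/cpuinfo)   # one worker per core
echo $((workers * 1024))                     # simultaneous connections
~~~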


Buffer sizes

There is one more trick we can take advantage of: adjusting the buffer sizes. If a buffer is too small, NGINX falls back to writing a temporary file, provoking constant read/write cycles on disk. First, let's look at what the relevant directives mean.

client_body_buffer_size – the size of the buffer for the client request body. This mainly affects POST requests, such as form submissions.

client_header_buffer_size – the same, but for the request headers. 2k is usually more than enough.

client_max_body_size – the maximum allowed size of a client request. If this limit is exceeded, NGINX returns error 413, Request Entity Too Large.

large_client_header_buffers – the maximum number and size of buffers for large headers.

~~~ {.nginx}
client_body_buffer_size 10K;
client_header_buffer_size 2k;
client_max_body_size 8m;
large_client_header_buffers 2 2k;
~~~

Timeouts

Setting sensible time limits also significantly improves performance.

The **client_body_timeout** and **client_header_timeout** directives set how long the server waits for the client to send the request body or the headers, respectively. If neither arrives in time, the server returns error 408, Request Time Out.

The **keepalive_timeout** directive sets how long a keep-alive connection to the client stays open. Simply put, the server closes the connection after this time.

Finally, **send_timeout** is not a limit on transmitting the whole response, but on the interval between two successive writes to the client. If the client accepts nothing within this time, the server closes the connection.
~~~ {.nginx}
# /etc/nginx/nginx.conf

client_body_timeout 12;
client_header_timeout 12;
keepalive_timeout 15;
send_timeout 10;
~~~

Gzip compression

Gzip compression reduces the amount of data NGINX sends over the network. Be careful, though: with a high compression level the server starts to load the CPU quite heavily.

~~~ {.nginx}
gzip on;
gzip_comp_level 2;
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain application/x-javascript text/xml text/css application/xml;
~~~

### Caching Static Files
For static files that rarely change but are requested often, you can set a far-future expiration header. This directive goes in the NGINX virtual host block.
~~~ {.nginx}
# /etc/nginx/sites-available/example-server.conf

location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 365d;
}
~~~

You can add or remove file extensions in the list as needed.


Logging

NGINX logs every incoming request to its access log. If you rely on third-party monitoring tools, you can disable this feature. Just change the value of the access_log directive:

~~~ {.nginx}
access_log off;
~~~
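If you would rather keep logs for dynamic requests but silence them for static assets, access_log can also be set per location. A minimal sketch that reuses the static-file pattern shown earlier:

~~~ {.nginx}
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 365d;
    access_log off;    # skip logging for static assets only
}
~~~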

Save and close the file, then check the configuration and restart NGINX:

~~~ {.bash}
sudo nginx -t
sudo service nginx restart
~~~


In the end, a well-tuned server is one that is easy to monitor and fine-tune. There is no single universal set of values; every deployment must be considered individually. And if system performance really matters to you, you should also look into load balancing and horizontal scaling. These are just a few of the many optimization points every system administrator should know.