
Reverse Cache Proxy in Nginx

Nginx is an open-source HTTP web server and reverse proxy. It currently powers websites ranging from simple to complex, and it is particularly good at handling many concurrent users. When users visit a website, they issue requests to the server and then wait for its response.

The problem arises when many users issue requests at the same time: servicing every request individually becomes expensive. The solution is caching. Consider a situation in which all of the concurrent users request the same page. That page can be placed in the cache, and once any user requests it, it is served to them directly from the cache instead of being generated again.

Configuration and Setup

For users of Ubuntu, configuration and setup can be done as follows. Begin by opening the file "/etc/nginx/nginx.conf" in the text editor of your choice. Inside the "http {" block, add the following lines:

proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
proxy_temp_path /var/www/cache/tmp;
real_ip_header X-Forwarded-For;

The first two lines define a cache zone named "my-cache" and the directories where cached files and temporary files will be stored on your system.
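Nginx will not start if the base cache path is missing, so it is safest to create the directories up front. A minimal sketch, assuming a root shell and the Debian/Ubuntu "www-data" worker user (adjust both to your setup):

```shell
# Create the cache and temp directories referenced in nginx.conf.
# The path matches proxy_cache_path above; the owner is an assumption
# (www-data is the default nginx worker user on Ubuntu).
mkdir -p /var/www/cache/tmp
chown -R www-data:www-data /var/www/cache 2>/dev/null || true
ls -d /var/www/cache/tmp
```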

The next step is to create a virtual host file at "/etc/nginx/sites-available/website", as shown below:

server {
    listen 80;
    server_name _;
    server_tokens off;

    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache my-cache;
        proxy_cache_valid 3s;
        proxy_no_cache $cookie_PHPSESSID;
        proxy_cache_bypass $cookie_PHPSESSID;
        proxy_cache_key "$scheme$host$request_uri";
        add_header X-Cache $upstream_cache_status;
    }
}
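The proxy_cache_key directive above builds each cache key by concatenating three nginx variables. A quick shell sketch (with made-up request values) shows what a resulting key looks like:

```shell
# Simulate proxy_cache_key "$scheme$host$request_uri" for a
# hypothetical request to http://example.com/index.php?page=2.
scheme="http"
host="example.com"
request_uri="/index.php?page=2"
cache_key="${scheme}${host}${request_uri}"
echo "$cache_key"   # httpexample.com/index.php?page=2
```

Two requests produce the same cached response only when all three parts match, which is why the query string is part of the key.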

server {
    listen 8080;
    server_name _;
    root /var/www/root_for_document/;
    index index.php index.html index.htm;
    server_tokens off;

    location ~ \.php$ {
        try_files $uri /index.php;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include /etc/nginx/fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }
}

To enable the above, you can do the following:

ln -s /etc/nginx/sites-available/website /etc/nginx/sites-enabled/website
/etc/init.d/nginx restart

The first server definition is the reverse cache proxy, which listens on port 80. The second one is the backend. The "proxy_pass http://127.0.0.1:8080/" directive forwards traffic to port 8080. Nginx is very fast at serving static content, such as the cached page described earlier, because serving a hit from the cache avoids contacting the backend at all. The benchmark for this is given below:
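Because the first server block adds an X-Cache header, caching behavior can also be verified by hand. A sketch, assuming the proxy from the configuration above is running locally:

```shell
# First request should print "X-Cache: MISS"; a repeat within the
# 3-second proxy_cache_valid window should print "X-Cache: HIT".
curl -sI http://127.0.0.1/ | grep -i x-cache
curl -sI http://127.0.0.1/ | grep -i x-cache
```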

[Screenshot: benchmark command run against the reverse cache proxy on port 80]

With the above command, 1,000 requests (100 of them concurrent) are sent to our reverse cache proxy on port 80. Consider the command given below:

[Screenshot: benchmark command run against the backend on port 8080]

The above command sends 1,000 requests (100 concurrent) to the backend on port 8080. Through port 80 the 1,000 requests take about 0.2 seconds, while directly against port 8080 the same number of requests takes about 2.5 seconds. That makes the cached path 12.5 times faster.

On port 80, about 4,300 requests are processed per second, while on port 8080 only about 400 requests are processed per second, roughly 10.7 times the throughput.
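The screenshots above presumably show ApacheBench (ab) runs; based on the request counts and concurrency quoted, the two commands would look roughly like this:

```shell
# 1,000 requests, 100 concurrent, against the cache proxy (port 80)
ab -n 1000 -c 100 http://127.0.0.1:80/
# The same load against the backend directly (port 8080)
ab -n 1000 -c 100 http://127.0.0.1:8080/
```

ab reports "Time taken for tests" and "Requests per second", which are the figures compared above.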

Although PHP accelerators can be very useful, they may be less effective than a reverse cache proxy in certain scenarios. A PHP accelerator improves performance by caching the compiled opcode of PHP scripts, normally in shared memory, so that the source code does not have to be recompiled for every request. Whenever the source code of a PHP script changes, the stored opcode is replaced with the appropriate new version.
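For reference, enabling a PHP accelerator today usually means OPcache, which is configured with a few php.ini lines. A sketch (directive names are OPcache's; the values are illustrative):

; Cache compiled opcode in shared memory
opcache.enable=1
; Shared memory size in megabytes
opcache.memory_consumption=128
; Re-check source files for changes so edited scripts are recompiled
opcache.validate_timestamps=1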

Varnish is also a good tool to use as a reverse cache proxy, but note that its focus is purely on HTTP. Nginx can act as a web server, a mail proxy, a reverse cache proxy, and a load balancer; Varnish cannot. Both tools are good at reverse cache proxying, and Varnish can be easier to configure, although it consumes more memory and CPU. Setting up Nginx as a backend or as a reverse cache proxy is also simpler because there is nothing extra to install; with Varnish, as the infrastructure grows, adding and installing new software on every node becomes tedious. This is why Varnish is often recommended less than Nginx.

In conclusion, once Nginx has been set up as a reverse cache proxy, the system shows noticeably improved performance in the right scenarios, and the setup process is straightforward.

