
Using Nginx with a Web Application

Nginx makes it straightforward to communicate with dynamically developed web applications. It can also distribute traffic among backend servers and, much like Varnish, cache web content.

Configuration of Nginx Server

Consider the code given below:

server {
    listen 80 default_server;
    root /var/www;
    index index.html index.htm;
    server_name server_name.com www.server_name.com;

    location / {
        try_files $uri $uri/ =404;
    }
}

The above code shows a basic Nginx configuration for serving the files of a particular website. It listens on port 80, the standard HTTP port, and is marked as the default server, which is where Nginx sends any request that does not match a configured server_name. The web root also has to be set; this is where the files for the web application are stored.
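Because default_server decides where unmatched requests end up, a common pattern is a dedicated catch-all block that simply refuses them. The snippet below is a sketch of that idea, not part of the configuration above; note that only one server block per address/port pair can carry the default_server flag, so the main site block would then drop it.

# Hypothetical catch-all: requests whose Host header matches no
# configured server_name land here and are dropped.
server {
    listen 80 default_server;
    server_name _;
    return 444;   # close the connection without sending a response
}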


For setting up the server configuration, it is recommended that you use H5BP. Once Nginx has been installed on Linux distributions such as Debian, Ubuntu, Fedora, and others, a default site is enabled out of the box and the site configuration structure is already in place.
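On Debian- and Ubuntu-based packages, for example, the main nginx.conf typically pulls per-site files in along these lines (exact paths vary between distributions):

# Excerpt from a typical /etc/nginx/nginx.conf on Debian/Ubuntu
http {
    # ...
    include /etc/nginx/conf.d/*.conf;
    # Site files live in sites-available and are enabled by
    # symlinking them into sites-enabled.
    include /etc/nginx/sites-enabled/*;
}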

Nginx can pass requests to HTTP listeners, to uWSGI listeners, and to FastCGI listeners, and it can even talk to Memcached directly. Proxying to an HTTP listener, with static files served by Nginx itself, looks like this:

location /static {
    try_files $uri $uri/ =404;
}
location / {
    proxy_pass http://127.0.0.1:9000;
    # Pass the environment along as a request header
    proxy_set_header APPENV production;
    include proxy_params;
}

The above code shows how a request can be proxied from Nginx to another application. When an application sits behind Nginx, Nginx receives every request from users first, so you need to tell it how to handle static files itself and when to forward a request on to the application. If you also want Nginx to cache the proxied responses, a minimal sketch of that follows.
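As noted earlier, Nginx can cache web content much like Varnish. The snippet below sketches HTTP proxy caching; the cache path, the zone name my_cache, and the timings are illustrative values rather than part of the configuration above.

# Define a cache on disk (this directive goes in the http {} context).
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                 max_size=1g inactive=60m;

location / {
    proxy_cache my_cache;            # use the cache defined above
    proxy_cache_valid 200 302 10m;   # cache successful responses for 10 minutes
    proxy_cache_valid 404 1m;        # cache "not found" responses briefly
    proxy_pass http://127.0.0.1:9000;
    include proxy_params;
}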

Consider the code given below:

location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass 127.0.0.1:9000;
    # Or:
    # fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param APPENV production;
    include fastcgi.conf;
}
Note that PHP-FPM listens for FastCGI connections rather than HTTP. In the case above it is listening on port 9000 of the local host, so a plain HTTP request will not be accepted; only FastCGI requests are, and this is the standard way PHP files are processed.

With an HTTP proxy, any non-static URL is matched and passed to the application. With PHP, only files with a ".php" extension are matched and passed to the application.
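The include fastcgi.conf; line pulls in the standard parameters PHP needs about the request. On most installs the file shipped with Nginx looks roughly like this (abridged):

# Abridged excerpt of the fastcgi.conf shipped with Nginx
fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;
fastcgi_param  QUERY_STRING       $query_string;
fastcgi_param  REQUEST_METHOD     $request_method;
fastcgi_param  CONTENT_TYPE       $content_type;
fastcgi_param  CONTENT_LENGTH     $content_length;
fastcgi_param  REQUEST_URI        $request_uri;
fastcgi_param  DOCUMENT_ROOT      $document_root;
fastcgi_param  SERVER_PROTOCOL    $server_protocol;
fastcgi_param  REMOTE_ADDR        $remote_addr;
fastcgi_param  SERVER_NAME        $server_name;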

The fastcgi_param APPENV production line also tells PHP which environment the server is running in. The same pattern applies when passing requests to a uWSGI listener, which is how Python applications are commonly served:

location /static {
    try_files $uri $uri/ =404;
}
location / {
    uwsgi_pass 127.0.0.1:9000;
    uwsgi_param APPENV production;
    include uwsgi_params;
}

Lastly, with uWSGI, Nginx takes the HTTP request, converts it into the protocol the gateway speaks, and sends it off to the gateway. The gateway then communicates with the application, the request is processed, and the response is sent back.
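For reference, the included uwsgi_params file that ships with Nginx passes the basic request metadata through to the gateway; an abridged excerpt looks like this:

# Abridged excerpt of the uwsgi_params file shipped with Nginx
uwsgi_param  QUERY_STRING       $query_string;
uwsgi_param  REQUEST_METHOD     $request_method;
uwsgi_param  CONTENT_TYPE       $content_type;
uwsgi_param  CONTENT_LENGTH     $content_length;
uwsgi_param  REQUEST_URI        $request_uri;
uwsgi_param  PATH_INFO          $document_uri;
uwsgi_param  SERVER_PROTOCOL    $server_protocol;
uwsgi_param  REMOTE_ADDR        $remote_addr;
uwsgi_param  SERVER_NAME        $server_name;
uwsgi_param  SERVER_PORT        $server_port;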

Nginx as a Load Balancer

Nginx makes a good load balancer, much like dedicated software such as HAProxy. For load balancing, it can be configured as follows:

upstream myapp {
    zone backend 64k;
    least_conn;
    server 127.0.0.1:9000 max_fails=2 fail_timeout=30s;
    server 127.0.0.1:9001 max_fails=2 fail_timeout=30s;
    server 127.0.0.1:9002 max_fails=2 fail_timeout=30s;
}

server {
    location / {
        health_check;   # active health checks (an NGINX Plus directive)
        include proxy_params;
        proxy_pass http://myapp/;

        # Handling of WebSocket connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

The upstream block defines the pool of backend servers to be balanced across. Note that we have three servers, listening on ports 9000, 9001, and 9002. Consider the code given below:

server {
    location / {
        health_check;   # active health checks (an NGINX Plus directive)
        include proxy_params;
        proxy_pass http://myapp/;

        # Handling of WebSocket connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

Our aim is to stop Nginx from sending requests to servers that may have gone down. That is why the health_check directive is there: it actively checks each backend (this directive is part of NGINX Plus), while the max_fails and fail_timeout settings in the upstream block provide passive failure detection. A couple of alternative balancing strategies are sketched below.
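The example above uses least_conn to pick the backend with the fewest active connections. Two other common strategies, shown here as an illustration rather than as part of the original setup, are weighted round-robin and ip_hash for sticky clients:

# Weighted round-robin: the first server receives roughly twice the traffic.
upstream myapp_weighted {
    server 127.0.0.1:9000 weight=2;
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
}

# ip_hash: requests from the same client IP always reach the same backend,
# which gives simple session stickiness.
upstream myapp_sticky {
    ip_hash;
    server 127.0.0.1:9000;
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
}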

