Linux DevOps - NGINX


V - NGINX

A - Introduction to NGINX
NGINX is a high-performance web server developed to meet the growing needs of the enterprise. NGINX is not the only web server on the market, however: one of its main competitors is Apache HTTP Server (httpd). When it was created, one of NGINX's primary intended roles was that of a reverse proxy server; it can also play the role of a load balancer, i.e. orchestrate a workload across several machines.
It is on these roles that we will work in this course: creating the entry point (reverse proxy) of our architecture and balancing the load between the machines that will host our website.
Compared with Apache, we can note three differences that give NGINX the edge:
It serves static files faster, with very low resource usage.
It can handle more simultaneous requests.
It is much easier to configure, thanks to a configuration file syntax inspired by various programming languages, resulting in compact and easily maintainable configuration files.
NGINX is faster at delivering static content while remaining relatively light on resources because it does not incorporate a dynamic programming language processor. When a static content request arrives, NGINX simply responds with the requested file without running any additional processes. This does not mean that NGINX cannot handle requests that require a dynamic programming language. In such cases, NGINX simply delegates the tasks to separate processes such as PHP-FPM, Node.js or Python. Once this process has completed its work, NGINX returns the response to the client.
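As a preview, here is a minimal sketch of what that delegation looks like in an NGINX configuration. The syntax is covered later in this course; the document root and the PHP-FPM socket path are assumptions (the socket depends on the installed PHP version):

server {
    listen 80;
    root /var/www/example; # hypothetical document root

    location / {
        # static content: NGINX answers directly from disk
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        # dynamic content: hand the request over to a separate PHP-FPM process
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock; # assumed PHP 7.4 socket
    }
}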
B - Creating a domain name
To complete the rest of the course, we will need a domain name. Therefore, we will create an account on ClouDNS. The site offers a free tier with one domain name available and usable for life. In our example we will use the domain name cours-datascientest.cloudns.ph, but we recommend that you choose your own DNS name: each domain name is unique and cannot be reused once created.

So it's time to create our account, to do so click Login:

If you don't have an account yet, click Create a new account:

Fill in your account information, and click Register.

You should receive an email that allows you to activate your account. Then click on the activation link:

You can now log in with the information provided at account creation:

Once on the dashboard, click the Create zone button in the DNS hosting menu.
Select the free zone option, which provides a free domain that we can use throughout this training.

We are then asked to enter a domain name for our account. For this course, it will be cours-datascientest. When finished, click the Register button.

Next, click on your domain cours-datascientest.cloudns.ph to create records.

We may notice a number of pre-existing entries; click the Add new record button.

We need to choose type A, fill in the subdomain field, and finally provide the IP address of the NGINX server in the 'Points to' field so that our subdomain always points to our server. We can then click Save to save the new subdomain.

Now we can see that our subdomain is indeed present and that it resolves to our server's IP address. However, we need to allow about an hour of propagation time for the subdomain to become available worldwide. We now have a working domain for the rest of our course.
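Rather than waiting blindly, we can query the record to check whether it has propagated. A quick sketch, assuming the dig utility (from the dnsutils package) is installed and that the subdomain is the serveurnginx one used later in this course:

dig +short serveurnginx.cours-datascientest.cloudns.ph # should print the server's IP address once the record has propagated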
C - Installing NGINX
Installing NGINX on a Linux-based system is fairly straightforward. Once logged in, the first thing to do is update the system, then install the package that makes nginx operational. Run the following commands:
sudo apt update && sudo apt upgrade -y # we update the packages and the operating system
sudo apt install nginx -y # we install nginx and automatically validate with the -y flag
Upon completion of the installation, NGINX should automatically register as a service with systemd. We then need to start the service and also ensure that it starts again on system reboot:
sudo systemctl enable nginx # start the service on every system reboot
sudo systemctl start nginx # start the service now
sudo systemctl status nginx # check the status of the nginx service
Output display:
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2022-11-16 09:59:20 UTC; 36s ago
Docs: man:nginx(8)
Main PID: 58183 (nginx)
Tasks: 3 (limit: 4689)
Memory: 4.1M
CGroup: /system.slice/nginx.service
├─58183 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
├─58184 nginx: worker process
└─58185 nginx: worker process
Nov 16 09:59:19 ip-172-31-24-148 systemd[1]: Starting A high performance web server and a reverse proxy server...
Nov 16 09:59:20 ip-172-31-24-148 systemd[1]: Started A high performance web server and a reverse proxy server.
We can now go to the server's IP address in a browser to validate that everything is working. We should land on the default NGINX welcome page:

So we can confirm that Nginx is installed.
E - NGINX Configuration
As a web server, NGINX will need to serve static or dynamic content to clients. But how that content will be served is usually controlled by configuration files.
NGINX configuration files are located in the /etc/nginx/ directory and carry the .conf extension. We can go to this directory to list the files present:
cd /etc/nginx #move to the nginx directory
ls -larth #display file and directory list with all information
Display in output:
total 72K
-rw-r--r-- 1 root root 664 Feb 4 2019 uwsgi_params
-rw-r--r-- 1 root root 636 Feb 4 2019 scgi_params
-rw-r--r-- 1 root root 180 Feb 4 2019 proxy_params
-rw-r--r-- 1 root root 1.5K Feb 4 2019 nginx.conf
-rw-r--r-- 1 root root 3.9K Feb 4 2019 mime.types
-rw-r--r-- 1 root root 2.2K Feb 4 2019 koi-win
-rw-r--r-- 1 root root 2.8K Feb 4 2019 koi-utf
-rw-r--r-- 1 root root 1007 Feb 4 2019 fastcgi_params
-rw-r--r-- 1 root root 1.1K Feb 4 2019 fastcgi.conf
-rw-r--r-- 1 root root 3.0K Feb 4 2019 win-utf
drwxr-xr-x 2 root root 4.0K Nov 10 06:38 modules-available
drwxr-xr-x 2 root root 4.0K Nov 10 06:38 conf.d
drwxr-xr-x 2 root root 4.0K Nov 16 09:59 sites-available
drwxr-xr-x 2 root root 4.0K Nov 16 09:59 snippets
drwxr-xr-x 8 root root 4.0K Nov 16 09:59 .
drwxr-xr-x 2 root root 4.0K Nov 16 09:59 sites-enabled
drwxr-xr-x 2 root root 4.0K Nov 16 09:59 modules-enabled
drwxr-xr-x 99 root root 4.0K Nov 16 09:59 ..
The main configuration file for NGINX is the nginx.conf file. We can display its contents:
cat nginx.conf
Contents of the file:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

#mail {
#    # See sample authentication script at:
#    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#    # auth_http serveurnginx.cours-datascientest.cloudns.ph/auth.php;
#    # pop3_capabilities "TOP" "USER";
#    # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#    server {
#        listen serveurnginx.cours-datascientest.cloudns.ph:110;
#        protocol pop3;
#        proxy on;
#    }
#
#    server {
#        listen serveurnginx.cours-datascientest.cloudns.ph:143;
#        protocol imap;
#        proxy on;
#    }
#}
Trying to understand this file in its current state would be tedious. Let's rename it and create a new empty file in its place:
sudo mv nginx.conf nginx.conf.backup # rename the nginx.conf file to nginx.conf.backup
sudo touch nginx.conf # create a new empty file
As a matter of good practice, it is not advisable to modify the original nginx.conf file directly, even when we know exactly what we are doing; keeping a backup allows us to restore it later.
F - Configuring a basic web server
In this part, we will configure a basic static web server from scratch. The goal being to introduce the syntax and fundamental concepts of NGINX configuration files.
f.1 - First configuration file
Let's start by opening the nginx.conf file we just created:
sudo nano /etc/nginx/nginx.conf
We will use nano as our text editor. We can use something more modern if we want, but in a real-world scenario we are more likely to work with nano or vim on servers than with anything else.
After opening the file, let's update its contents to look like this:
events {
}

http {
    server {
        listen 80;
        server_name cours1.cours-datascientest.cloudns.ph;
        return 200 "Welcome to Datascientest, we are on the NGINX course!";
    }
}
If we have experience building REST APIs, we can guess from the line return 200 "Welcome to Datascientest, we are on the NGINX course!" that the server has been configured to respond with a status code of 200 and that message.
Don't worry if this seems unclear to you at the moment. We'll explain this file line by line, but let's see this configuration in action first.
f.2 - How to validate and reload configuration files
After writing a new configuration file or updating an old one, the first thing to do is check the file for syntax errors. The nginx binary includes a -t option to validate NGINX configuration files.
sudo nginx -t
Output display:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
If we have any syntax errors, this command will inform us, including the line number.
Although the configuration file is now correct, NGINX will not use it yet. NGINX reads the configuration file once, when it starts up, and continues to run on that basis. Whenever we update the configuration file, we must explicitly tell NGINX to reload it. There are two ways to do this:
- We can restart the NGINX service:
sudo systemctl restart nginx
- We can send a reload signal to NGINX:
sudo nginx -s reload
The -s option is used to send various signals to NGINX. The available signals are stop, quit, reload and reopen. Once we have reloaded the configuration file by running the nginx -s reload command, we can see it in action by sending a simple GET request to the server:
curl -i http://cours1.cours-datascientest.cloudns.ph
Output display:
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 16 Nov 2022 10:15:46 GMT
Content-Type: text/plain
Content-Length: 62
Connection: keep-alive
Welcome to Datascientest, we are on the NGINX course!
The server responds with a status code of 200 and the expected message.
After every change to the NGINX configuration, you must send the reload signal with the sudo nginx -s reload command, or else restart the NGINX service with the sudo systemctl restart nginx command. In the rest of the course we will assume you know this and will progressively stop mentioning it.
If we go to our server address again, we notice that the page displayed is no longer the same.
f.3 - Understanding directives and contexts
The few lines of code we've written here, while seemingly simple, introduce two of the most important terminologies in NGINX configuration files. These are directives and contexts.
Technically, anything in an NGINX configuration file is a directive. Directives are of two types:
- Simple directives
A simple directive consists of the directive name and space-delimited parameters, such as listen, return and others. Simple directives are terminated with semicolons.
- Block directives
Block directives are similar to simple directives, except that instead of ending with semicolons, they end with a pair of braces { } surrounding additional directives.
A block directive capable of containing other directives inside it is called a context, such as events, http and so on. There are four main contexts in NGINX:
events { }
The events context is used to define the global configuration for how NGINX will handle requests at a general level. There can be only one events context in a valid configuration file.
http { }
As the name implies, the http context is used to define how the server will handle HTTP and HTTPS requests, specifically. There can be only one http context in a valid configuration file.
server { }
The server context is nested within the http context and is used to configure specific virtual servers within a single host. There can be multiple server contexts in a valid configuration file, nested within the http context. Each server context is considered a virtual host.
main
The main context is the configuration file itself. Anything written outside the three contexts mentioned above is in the main context.
We can treat contexts in NGINX like scopes in other programming languages. There is also an aspect of inheritance between them. We can find an alphabetical index of directives on the official NGINX documentation.
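To visualize how these contexts nest, here is a minimal skeleton (a sketch for orientation only, not a working configuration):

# main context: everything at the top level of the file
events {
    # global connection-handling settings (only one events context)
}

http {
    # settings for HTTP and HTTPS traffic (only one http context)
    server {
        # first virtual host
    }

    server {
        # second virtual host
    }
}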
We have already mentioned that there can be multiple server contexts in a configuration file. But when a request reaches the server, how does NGINX know which of these contexts should handle it?
The listen directive is one way to identify the correct server context in a configuration. Consider the following scenario:
events {
}

http {
    server {
        listen 80;
        server_name serveurnginx.cours-datascientest.cloudns.ph;
        return 200 "Hello Learner, I am listening on port 80!";
    }

    server {
        listen 443;
        server_name serveurnginx.cours-datascientest.cloudns.ph;
        return 200 "Hello Learner, I am listening on port 443!";
    }
}
Now, if we send a request with the curl command to http://serveurnginx.cours-datascientest.cloudns.ph:80, we will receive "Hello Learner, I am listening on port 80!" as a response.
And if we send a request with the same command to http://serveurnginx.cours-datascientest.cloudns.ph:443, we will receive "Hello Learner, I am listening on port 443!" as a response.
curl serveurnginx.cours-datascientest.cloudns.ph:80
Output display:
Hello Learner, I am listening on port 80!
We can also test listening on port 443:
curl serveurnginx.cours-datascientest.cloudns.ph:443
Output display:
Hello Learner, I am listening on port 443!
These two server blocks are like two people holding telephone receivers, waiting to answer when a call reaches one of their numbers. Their numbers are indicated by the listen directives.
In addition to the listen directive, there is also the server_name directive, which holds the domain name used to reach the server. Consider the following scenario for an imaginary course management application.
We need to create the subdomain laboratoire.cours-datascientest.cloudns.ph and associate it with the IP address of our server. In this case, two subdomains point to the same server: serveurnginx.cours-datascientest.cloudns.ph and laboratoire.cours-datascientest.cloudns.ph.
So let's adapt our NGINX configuration file as follows:
events {
}

http {
    server {
        listen 80;
        server_name serveurnginx.cours-datascientest.cloudns.ph;
        return 200 "our course platform!";
    }

    server {
        listen 80;
        server_name laboratoire.cours-datascientest.cloudns.ph;
        return 200 "our lab platform!";
    }
}
This is a basic example of the idea of virtual hosts. We can run two separate applications under different server names, and all on the same server.
- If we send a request with the curl command to http://serveurnginx.cours-datascientest.cloudns.ph, we will get "our course platform!" as the response.
- If we send a request with the curl command to http://laboratoire.cours-datascientest.cloudns.ph, we will get "our lab platform!" as the response.
We could have done this from our local machine without creating DNS entries, simply by editing the /etc/hosts file. However, this will no longer work when we run the test from a different machine than the one NGINX is installed on. Let's set this up to understand it. We will modify the /etc/hosts file:
sudo nano /etc/hosts
We will add these two lines:
127.0.0.1 serveurnginx.cours-datascientest.cloudns.ph
127.0.0.1 laboratoire.cours-datascientest.cloudns.ph
Guided exercise:
We want to make requests through curl in order to validate the configuration. Remember to reload the NGINX configuration before running the tests.
Finally, the return directive is responsible for returning a valid response to the user. This directive takes two parameters: the status code, and the message string or URI to return.
f.4 - Serving static content with NGINX
Now that we have a good understanding of how to write a basic configuration file for NGINX, let's upgrade the configuration to serve static files instead of plain text responses.
In order to serve static content, we must first store it somewhere on our server. If we list the files and directories at the root of our server using ls, we will find a directory there called /var:
ls -larth /
Output display:
total 76K
lrwxrwxrwx 1 root root 8 Sep 14 22:56 sbin -> usr/sbin
lrwxrwxrwx 1 root root 7 Sep 14 22:56 bin -> usr/bin
lrwxrwxrwx 1 root root 10 Sep 14 22:56 libx32 -> usr/libx32
lrwxrwxrwx 1 root root 9 Sep 14 22:56 lib64 -> usr/lib64
lrwxrwxrwx 1 root root 9 Sep 14 22:56 lib32 -> usr/lib32
lrwxrwxrwx 1 root root 7 Sep 14 22:56 lib -> usr/lib
drwxr-xr-x 2 root root 4.0K Sep 14 22:56 opt
drwxr-xr-x 2 root root 4.0K Sep 14 22:56 media
drwxr-xr-x 14 root root 4.0K Sep 14 22:57 usr
drwx------ 2 root root 16K Sep 14 22:58 lost+found
drwxr-xr-x 8 root root 4.0K Sep 14 23:01 snap
drwxr-xr-x 3 root root 4.0K Nov 13 05:32 home
drwxr-xr-x 2 root root 4.0K Nov 13 11:28 datascientest
dr-xr-xr-x 13 root root 0 Nov 14 06:12 sys
dr-xr-xr-x 177 root root 0 Nov 14 06:12 proc
drwxr-xr-x 3 root root 4.0K Nov 14 08:11 mnt
drwxr-xr-x 18 root root 3.3K Nov 16 09:55 dev
drwxr-xr-x 20 root root 4.0K Nov 16 09:57 .
drwxr-xr-x 20 root root 4.0K Nov 16 09:57 ..
drwxr-xr-x 4 root root 4.0K Nov 16 09:58 boot
drwxr-xr-x 14 root root 4.0K Nov 16 09:59 var
drwxrwxrwt 14 root root 4.0K Nov 16 09:59 tmp
drwxr-xr-x 26 root root 960 Nov 16 10:41 run
drwxr-xr-x 99 root root 4.0K Nov 16 10:58 etc
drwx------ 5 root root 4.0K Nov 16 11:14 root
drwxr-xr-x 3 root root 4.0K Nov 16 11:15 srv
This /var directory contains another directory, www, which is intended to hold site-specific data served to our users. We will create a directory there called datascientest_website, then go into that directory and create an index.html file with "welcome to the nginx course" as its content.
cd /var/www/
sudo mkdir datascientest_website
cd datascientest_website
echo "welcome to the nginx course" | sudo tee index.html # create index.html with the content "welcome to the nginx course"
Now that we have the static content to distribute, let's update our configuration:
events {
}

http {
    server {
        listen 80;
        server_name serveurnginx.cours-datascientest.cloudns.ph;
        root /var/www/datascientest_website/;
    }
}
}
The return directive has now been replaced by a root directive. This directive is used to declare the root directory of a site.
By writing root /var/www/datascientest_website/;, we tell NGINX to look for files to serve in the /var/www/datascientest_website/ directory when a request comes in to that server. Since NGINX is a web server, it is smart enough to serve the default index.html file.
Let's see if our configuration works. Let's reload the NGINX configuration file and we can test:
sudo nginx -s reload
curl serveurnginx.cours-datascientest.cloudns.ph
Output display:
welcome to the nginx course
f.5 - Managing static file types
Although NGINX is smart enough to find the index.html file by default, it is not able to interpret different file types on its own. To solve this problem, we can update our configuration so that it can serve css files as well:
events {
}

http {
    types {
        text/html html;
        text/css css;
    }

    server {
        listen 80;
        server_name serveurnginx.cours-datascientest.cloudns.ph;
        root /var/www/datascientest_website/;
    }
}
The only change we made is a new types context nested within the http block. As we may have guessed from the name, this context is used to configure file types.
By writing text/html html in this context, we tell NGINX to parse any file ending with the html extension as text/html. We might think that declaring only the CSS file type would be enough, since HTML was already being parsed correctly, but it is not.
If we introduce a types context in the configuration, NGINX parses only the file types we configure. So if we only define text/css css in this context, NGINX will start parsing the HTML file as plain text.
f.6 - How to include partial configuration files
Mapping file types in the types context can work for small projects, but for larger projects it can be tedious and error-prone. NGINX provides a solution to this problem. If we list the files in the /etc/nginx directory again, we will see a file named mime.types.
ls -larth /etc/nginx
Output display:
total 76K
-rw-r--r-- 1 root root 664 Feb 4 2019 uwsgi_params
-rw-r--r-- 1 root root 636 Feb 4 2019 scgi_params
-rw-r--r-- 1 root root 180 Feb 4 2019 proxy_params
-rw-r--r-- 1 root root 1.5K Feb 4 2019 nginx.conf.backup
-rw-r--r-- 1 root root 3.9K Feb 4 2019 mime.types
-rw-r--r-- 1 root root 2.2K Feb 4 2019 koi-win
-rw-r--r-- 1 root root 2.8K Feb 4 2019 koi-utf
-rw-r--r-- 1 root root 1007 Feb 4 2019 fastcgi_params
-rw-r--r-- 1 root root 1.1K Feb 4 2019 fastcgi.conf
-rw-r--r-- 1 root root 3.0K Feb 4 2019 win-utf
drwxr-xr-x 2 root root 4.0K Nov 10 06:38 modules-available
drwxr-xr-x 2 root root 4.0K Nov 10 06:38 conf.d
drwxr-xr-x 2 root root 4.0K Nov 16 09:59 sites-available
drwxr-xr-x 2 root root 4.0K Nov 16 09:59 snippets
drwxr-xr-x 2 root root 4.0K Nov 16 09:59 sites-enabled
drwxr-xr-x 2 root root 4.0K Nov 16 09:59 modules-enabled
drwxr-xr-x 99 root root 4.0K Nov 16 10:58 .
-rw-r--r-- 1 root root 205 Nov 16 14:07 nginx.conf
drwxr-xr-x 8 root root 4.0K Nov 16 14:07 .
Let's see the contents of this file:
cat /etc/nginx/mime.types
Output display:
types {
    text/html html htm shtml;
    text/css css;
    text/xml xml;
    image/gif gif;
    image/jpeg jpeg jpg;
    ...
    ...
    video/x-ms-wmv wmv;
    video/x-msvideo avi;
}
The file contains a long list of file types and their extensions. To use this file in our configuration file, let's update our configuration to look like this:
events {
}

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name serveurnginx.cours-datascientest.cloudns.ph;
        root /var/www/datascientest_website/;
    }
}
The old types context has been replaced by a new include directive. As the name suggests, this directive allows us to include the contents of other configuration files.
f.7 - Dynamic routing with NGINX
The configuration we wrote in the previous section was a very simple static content server configuration. All that configuration did was match a file from the root of the corresponding site to the address that the client is visiting.
So, if the client requests files that exist in the root, such as index.html, about.html or mini.min.css, NGINX will return the file. But if we visit a route such as http://serveurnginx.cours-datascientest.cloudns.ph/nothing, it will respond with the default 404 page.
In this part, we will learn about the location context, variables, redirects, rewrites and the try_files directive.
The location context
Let's update the configuration as follows:
events {
}

http {
    server {
        listen 80;
        server_name serveurnginx.cours-datascientest.cloudns.ph;

        location /cours {
            return 200 "Nginx.\nLinux Administration.\n";
        }
    }
}
We have replaced the root directive with a new context called location. This context is usually nested within server blocks, and there can be multiple location contexts within a server context. If we send a request with the curl command to http://serveurnginx.cours-datascientest.cloudns.ph/cours, we will get a 200 response code and the list of courses (nginx and linux).
curl -i http://serveurnginx.cours-datascientest.cloudns.ph/cours
Output display:
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 16 Nov 2022 14:59:26 GMT
Content-Type: text/plain
Content-Length: 30
Connection: keep-alive
Nginx.
Linux Administration.
Now, if we send a request to http://serveurnginx.cours-datascientest.cloudns.ph/cours-test, we will get the same response:
curl -i http://serveurnginx.cours-datascientest.cloudns.ph/cours-test
Output display:
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 16 Nov 2022 15:00:21 GMT
Content-Type: text/plain
Content-Length: 30
Connection: keep-alive
Nginx.
Linux Administration.
This happens because, by writing location /cours, we tell NGINX to match any URI starting with /cours. This type of match is called a prefix match.
To perform an exact match, we will need to update the code as follows:
events {
}

http {
    server {
        listen 80;
        server_name serveurnginx.cours-datascientest.cloudns.ph;

        location = /cours {
            return 200 "Nginx.\nLinux Administration.\n";
        }
    }
}
Adding an = sign before the location URI tells NGINX to respond only if the URL matches exactly. Now, if we send a request to anything other than /cours, we will get a 404 response.
curl -I http://serveurnginx.cours-datascientest.cloudns.ph/cours-test
Output display:
HTTP/1.1 404 Not Found
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 16 Nov 2022 15:17:11 GMT
Content-Type: text/html
Content-Length: 162
Connection: keep-alive

But if we send the request to the exact URI, we get a result:
curl -I http://serveurnginx.cours-datascientest.cloudns.ph/cours
Output display:
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 16 Nov 2022 15:21:47 GMT
Content-Type: text/plain
Content-Length: 30
Connection: keep-alive
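Before moving on, note that prefix and exact matches are only two of the location matching modes. A compact summary (the regular-expression examples are illustrative; the ~ and ~* forms are used later in this course, in the WordPress virtual host):

location /cours { } # prefix match: any URI starting with /cours
location = /cours { } # exact match: only /cours
location ~ \.php$ { } # case-sensitive regular expression match
location ~* \.(jpg|png)$ { } # case-insensitive regular expression match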

f.8 - Variables in NGINX
Variables in NGINX are similar to variables in other programming languages. The set directive can be used to declare new variables anywhere in the configuration file:
set $<variable_name> <variable_value>;
# set $name "Fall";
# set $company "Datascientest";
# set $profession "devops";
Variables can be of three types:
String
Integer
Boolean
In addition to the variables we declare, there are variables built into NGINX modules. An alphabetical index of variables is available in the official documentation.
To see some of the variables in action, let's update the configuration as follows:
events {
}

http {
    server {
        listen 80;
        server_name serveurnginx.cours-datascientest.cloudns.ph;
        return 200 "Host - $host\nURI - $uri\nArguments - $args\n";
    }
}

sudo nginx -s reload # reload the nginx configuration
If we look at the result returned by the server, we will see this:

As we can see, the $host and $uri variables contain the requested host and the requested URI relative to the root, respectively. The $args variable contains all the query strings.
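For instance, a request with a query string (an illustrative URL on our test server) would produce something like:

curl "http://serveurnginx.cours-datascientest.cloudns.ph/cours/nginx?name=datascientest"
# Host - serveurnginx.cours-datascientest.cloudns.ph
# URI - /cours/nginx
# Arguments - name=datascientest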
f.9 - Redirects and rewrites
A redirect in NGINX is identical to redirects on any other platform. To show how redirects work, let's update our configuration. But first, let's add a new file named apropos.html to the /var/www/datascientest_website/ directory, containing the sentence "About our site":
cd /var/www/datascientest_website/
echo "About our site" | sudo tee apropos.html # create apropos.html with its content
We now have two files in our datascientest_website directory:
ls
Output display:
apropos.html index.html
Now let's modify the NGINX configuration file as follows:
events {
}

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name serveurnginx.cours-datascientest.cloudns.ph;
        root /var/www/datascientest_website/;

        location = /home {
            return 307 /index.html;
        }

        location = /apropos {
            return 307 /apropos.html;
        }
    }
}
Now, if we send a request to http://serveurnginx.cours-datascientest.cloudns.ph/apropos, we will be redirected to http://serveurnginx.cours-datascientest.cloudns.ph/apropos.html:
curl -I http://serveurnginx.cours-datascientest.cloudns.ph/apropos
Output display:
HTTP/1.1 307 Temporary Redirect
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 16 Nov 2022 16:08:20 GMT
Content-Type: text/html
Content-Length: 180
Location: http://serveurnginx.cours-datascientest.cloudns.ph/apropos.html
Connection: keep-alive
As we can see, the server responded with a 307 status code, and the Location header shows http://serveurnginx.cours-datascientest.cloudns.ph/apropos.html. If we visit http://serveurnginx.cours-datascientest.cloudns.ph/apropos from a browser, we will see the URL automatically change to http://serveurnginx.cours-datascientest.cloudns.ph/apropos.html.
The rewrite directive, however, works a little differently. It changes the URI internally, without informing the user. To see it in action, let's update our configuration as follows:
events {
}

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name serveurnginx.cours-datascientest.cloudns.ph;
        root /var/www/datascientest_website/;

        rewrite /home /index.html;
        rewrite /apropos /apropos.html;
    }
}
Now, if we send a request to http://serveurnginx.cours-datascientest.cloudns.ph/apropos, we will get a 200 response code and the HTML content of the apropos.html file in response:
curl -i http://serveurnginx.cours-datascientest.cloudns.ph/apropos
Output display:
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Thu, 17 Nov 2022 06:27:56 GMT
Content-Type: text/html
Content-Length: 23
Last-Modified: Wed, 16 Nov 2022 16:05:01 GMT
Connection: keep-alive
ETag: "63750a2d-17"
Accept-Ranges: bytes
About our site
And if we visit the URI from a browser, we will see the apropos.html page while the URL remains unchanged:

Other than how the URI change is handled, there is another difference between a redirect and a rewrite. When a rewrite occurs, the server context is re-evaluated by NGINX. Thus, a rewrite is a more expensive operation than a redirect.
f.10 - How to try multiple files
The other concept we'll look at is the try_files directive. Instead of responding with a single file, the try_files directive allows us to check for the existence of multiple files:
events {
}

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name serveurnginx.cours-datascientest.cloudns.ph;
        root /var/www/datascientest_website/;

        try_files /logo.jpg /not_found;

        location /not_found {
            return 404 "Sorry, we can't find this file";
        }
    }
}
As we can see, a new try_files directive has been added. By writing try_files /logo.jpg /not_found;, we tell NGINX to look for a file named logo.jpg in the root whenever a request is received. If it does not exist, NGINX falls through to the /not_found route.
Since no logo.jpg file exists on our server, any request gets a 404 response with the message "Sorry, we can't find this file".
Now, the problem with writing a try_files directive this way is that no matter what URL we visit, as long as a request is received by the server and logo.jpg is found on disk, NGINX will return it. This is why try_files is often used with the NGINX variable $uri:
events {
}

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name serveurnginx.cours-datascientest.cloudns.ph;
        root /var/www/datascientest_website/;

        try_files $uri /not_found;

        location /not_found {
            return 404 "Sorry, we can't find this file";
        }
    }
}
By writing try_files $uri /not_found;, we tell NGINX to try the URI requested by the client first; if it can't find that one, it tries the next parameter.
If we visit http://serveurnginx.cours-datascientest.cloudns.ph/index.html, we will get the old index.html page. The same goes for the apropos.html page.
But if we request a file that doesn't exist, we'll get the response from the /not_found location:
curl -i http://serveurnginx.cours-datascientest.cloudns.ph/riendutout
Output display:
HTTP/1.1 404 Not Found
Server: nginx/1.18.0 (Ubuntu)
Date: Thu, 17 Nov 2022 06:44:46 GMT
Content-Type: text/plain
Content-Length: 43
Connection: keep-alive
Sorry, we can't find this file
One thing we may have noticed is that if we visit the server root http://serveurnginx.cours-datascientest.cloudns.ph, we get the 404 response. This is because when we access the server root, the $uri variable does not match any existing file, so NGINX serves us the fallback location. To solve this problem, let's update our configuration as follows:
events {
}

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name serveurnginx.cours-datascientest.cloudns.ph;
        root /var/www/datascientest_website/;

        try_files $uri $uri/ /not_found;

        location /not_found {
            return 404 "Sorry, we can't find this file";
        }
    }
}
By writing try_files $uri $uri/ /not_found;, we tell NGINX to try the requested URI first. If that doesn't work, it tries the requested URI as a directory, and whenever NGINX lands in a directory, it automatically starts looking for an index.html file.
Now, if we visit the server root, we will get the contents of the index.html file.
f.11 - NGINX logs
By default, NGINX log files are located in /var/log/nginx. If we list the contents of this directory, we can see something like the following:
ls -larth /var/log/nginx
Output display:
total 48K
drwxr-xr-x 2 root adm 4.0K Nov 16 09:59 .
drwxrwxr-x 11 root syslog 4.0K Nov 17 00:00 ..
-rw-r----- 1 www-data adm 8.7K Nov 17 06:36 error.log
-rw-r----- 1 www-data adm 22K Nov 17 06:45 access.log
Let's start by emptying the two files:
# delete the 2 files present by default
sudo rm /var/log/nginx/access.log /var/log/nginx/error.log
# create new files with the same names
sudo touch /var/log/nginx/access.log /var/log/nginx/error.log
# reopen the log files
sudo nginx -s reopen
If we don't send a reopen signal to NGINX, it will continue writing logs to the previously opened file descriptors and the new files will remain empty.
Now, to make an access log entry, let's send a request to the server.
curl -I http://serveurnginx.cours-datascientest.cloudns.ph
Output display:
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Thu, 17 Nov 2022 06:55:43 GMT
Content-Type: text/html
Content-Length: 29
Last-Modified: Wed, 16 Nov 2022 11:41:57 GMT
Connection: keep-alive
ETag: "6374cc85-1d"
Accept-Ranges: bytes
Let's now display the contents of the logs
sudo cat /var/log/nginx/access.log
Output display:
3.8.49.199 - - [17/Nov/2022:06:53:50 +0000] "HEAD / HTTP/1.1" 404 0 "-" "curl/7.68.0"
As we can see, a new entry has been added to the access.log file. By default, any request to the server is logged in this file, but we can change this behavior using the access_log directive.
events {
}

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name serveurnginx.cours-datascientest.cloudns.ph;

        location / {
            return 200 "this will be saved in the default file.\n";
        }

        location = /admin {
            access_log /var/log/nginx/admin.log;
            return 200 "this will be logged in a separate file.\n";
        }

        location = /no_logging {
            access_log off;
            return 200 "this will not be logged.\n";
        }
    }
}
The first access_log directive, inside the /admin location block, instructs NGINX to write the access logs for this URI to the /var/log/nginx/admin.log file. The second one, inside the /no_logging location, completely disables access logs for that location.
Let's validate and reload the NGINX configuration. Now, if we send requests to these locations and inspect the log files, we should have these results:
curl http://serveurnginx.cours-datascientest.cloudns.ph/no_logging
Output display:
this will not be logged.
sudo cat /var/log/nginx/access.log
The /var/log/nginx/access.log file will be empty.
curl http://serveurnginx.cours-datascientest.cloudns.ph/admin
Output display:
this will be logged in a separate file.
sudo cat /var/log/nginx/access.log
The /var/log/nginx/access.log file will still be empty.
sudo cat /var/log/nginx/admin.log
Output display:
3.8.49.199 - - [17/Nov/2022:09:19:01 +0000] "GET /admin HTTP/1.1" 200 46 "-" "curl/7.68.0"
curl http://serveurnginx.cours-datascientest.cloudns.ph/
Output display:
this will be saved in the default file.
sudo cat /var/log/nginx/access.log
Output display:
3.8.49.199 - - [17/Nov/2022:09:17:39 +0000] "GET / HTTP/1.1" 200 51 "-" "curl/7.68.0"
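Beyond choosing the destination file, the access_log directive can also reference a named log_format declared in the http context. A minimal sketch (the format name and fields are illustrative, not part of this exercise):

http {
    log_format simple '$remote_addr [$time_local] "$request" $status';

    server {
        listen 80;
        access_log /var/log/nginx/custom.log simple; # hypothetical log file using the "simple" format
    }
}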
The error.log file, meanwhile, contains the failure logs. To produce an entry in error.log, we need to make NGINX fail. To do this, let's update our configuration as follows:
events {
}

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name serveurnginx.cours-datascientest.cloudns.ph;
        return 200 "..." "...";
    }
}
As we know, the return directive takes only two parameters, but we have given it three here. Now, if we try to reload the configuration, an error message will be displayed:
sudo nginx -s reload
Output display:
# nginx: [emerg] invalid number of arguments in "return" directive in /etc/nginx/nginx.conf:14
Let's check the contents of the error log and the message should also be there:
sudo cat /var/log/nginx/error.log
# 2022/11/17 08:35:45 [notice] 4169#4169: signal process started
# 2022/11/17 10:03:18 [emerg] 8434#8434: invalid number of arguments in "return" directive in /etc/nginx/nginx.conf:14
Error messages have levels. A notice entry in the error log is harmless, but an emerg (emergency) entry should be handled immediately.
There are eight levels of error messages:
debug: Useful debugging information to help determine where the problem lies.
info: Informative messages that don't need to be read but may be good to know.
notice: Something normal happened that is worth noting.
warn: Something unexpected happened, but it is not a cause for concern.
error: Something did not work.
crit: There are critical issues that need to be addressed.
alert: Prompt action is required.
emerg: The system is in an unusable state and requires immediate attention.
By default, NGINX logs messages of all levels. We can override this behavior using the error_log directive. If we want to set the minimum level of a message to warn, let's update our configuration file as follows:
events {
}

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name serveurnginx.cours-datascientest.cloudns.ph;
        error_log /var/log/nginx/error.log warn;
        return 200 "..." "...";
    }
}
Let's validate and reload the configuration; from now on, only messages with a level of warn or higher will be logged.
cat /var/log/nginx/error.log
Output display:
# 2022/11/17 11:27:02 [emerg] 12769#12769: invalid number of arguments in "return" directive in /etc/nginx/nginx.conf:16
In contrast to the previous output, there are no notice entries here: emerg is a higher-level error than warn, which is why it was logged.
For most projects, we can leave the error configuration as is. The only suggestion would be to set the minimum error level to warn, so that we don't have to wade through unnecessary entries in the error log.
Practical Case
As a practical case, we will deploy a virtual host serving a WordPress site, with SSL/TLS certificates to secure access to our website. We will progressively explain the different elements used throughout this case study.
G - LEMP stack

g.1 - Introduction
A stack typically consists of server- and client-side technologies: a database, a web server and a particular operating system (sometimes the back-end technologies are cross-platform, so no particular operating system is required). A web stack is a set of software, frameworks and libraries used to create complete web applications.
LEMP is an open-source web application stack used to develop web applications. It has good community support and is used worldwide in many large-scale web applications. NGINX is the second most used web server in the world after Apache.
g.2 - Components of the LEMP stack
Each component of a stack communicates with the others. Here are some details:
L stands for Linux: the operating system the web server runs on. It is free and open-source, and well known for being highly secure and less vulnerable to malware and viruses than Windows or macOS.
E stands for NGINX: pronounced "engine-x", hence the E. It is the engine of our web server, and it can also be used as a load balancer between multiple machines and as a reverse proxy.
M stands for MySQL or MariaDB: an open-source SQL database used to store and manipulate data while maintaining data consistency and integrity. It organizes data in a table format of rows and columns, and it is compatible with the ACID model.
P stands for PHP: short for PHP: Hypertext Preprocessor, a scripting language that runs on the server side, communicates with the MySQL database and performs all the operations requested by the user, such as retrieving, adding, manipulating or processing data.
g.3 - Installation
Before we begin this case study, the LEMP server must be installed on our server. You can install it on a freshly created Vagrant machine of your choice, as it will be easier to recycle the environments afterwards. If it is not installed, we can install it by running the following command:
sudo apt-get install nginx mariadb-server php php-fpm php-curl php-mysql php-gd php-mbstring php-xml php-imagick php-zip php-xmlrpc -y
Once the LEMP server is installed, check the PHP version using the following command:
php -v
You will get the PHP version in the following output:
PHP 7.4.3 (cli) (built: Nov 2 2022 09:53:44) ( NTS )
Copyright (c) The PHP Group
Zend Engine v3.4.0, Copyright (c) Zend Technologies
with Zend OPcache v7.4.3, Copyright (c), by Zend Technologies
Next, let's modify the PHP configuration file and change some default settings:
sudo nano /etc/php/7.4/fpm/php.ini
Modify the following lines:
cgi.fix_pathinfo=0
upload_max_filesize = 128M
post_max_size = 128M
memory_limit = 512M
max_execution_time = 120
Once this is done, we can save and close the file.
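For these changes to take effect, the PHP-FPM service usually needs to be restarted (assuming PHP 7.4, as reported by php -v above):

sudo systemctl restart php7.4-fpm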
g.4 - Create a database for WordPress
Guided exercise:
WordPress uses a database to store its content.
Create a wordpress MariaDB database and a user called wordpress for our site.
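One possible solution, sketched for reference (the password matches the one used later in wp-config.php; adapt it to your own):

sudo mysql -u root
CREATE DATABASE wordpress;
CREATE USER 'wordpress'@'localhost' IDENTIFIED BY 'DatascientestWordpress@';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'localhost';
FLUSH PRIVILEGES;
EXIT;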
Once that's complete, we can move on to the next step.
g.5 - WordPress

WordPress is a free, open source content management system primarily used to publish blogs on the Internet. WordPress simplifies the creation and maintenance of websites and blogs. Due to its popularity, more than a third of all websites today are powered by WordPress. It is written in PHP and uses MariaDB and/or MySQL as its database.
First, let's access the Nginx web root directory and download the latest version of WordPress using the following command:
cd /var/www/html
sudo wget https://wordpress.org/latest.tar.gz
Output display:
--2022-11-17 11:17:02-- https://wordpress.org/latest.tar.gz
Resolving wordpress.org (wordpress.org)... 198.143.164.252
Connecting to wordpress.org (wordpress.org)|198.143.164.252|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 22751086 (22M) [application/octet-stream]
Saving to: 'latest.tar.gz'
latest.tar.gz 100%[===========================================================================================>] 21.70M 11.5MB/s in 1.9s
2022-11-17 11:17:04 (11.5 MB/s) - 'latest.tar.gz' saved [22751086/22751086]
Once WordPress is downloaded, let's extract the archive with the following command:
sudo tar -zxvf latest.tar.gz
Next, we'll rename the sample WordPress configuration file.
sudo mv /var/www/html/wordpress/wp-config-sample.php /var/www/html/wordpress/wp-config.php
Next, edit the WordPress configuration file and set the parameters for our database:
sudo nano /var/www/html/wordpress/wp-config.php
Set the parameters for our database as shown below:
define( 'DB_NAME', 'wordpress' ); // database name
/** Database username */
define( 'DB_USER', 'wordpress' ); // name of the user
/** Database password */
define( 'DB_PASSWORD', 'DatascientestWordpress@' ); // user's password
/** Database hostname */
define( 'DB_HOST', 'localhost' ); // the database runs on the same server as the website, hence 'localhost'
Save and close the file when you are done. Next, set the appropriate permission and ownership for the WordPress directory:
sudo chown -R www-data:www-data /var/www/html/wordpress
sudo chmod -R 755 /var/www/html/wordpress
g.6 - NGINX virtual hosts
Server blocks, often referred to as NGINX virtual hosts, are a feature of the NGINX web server that allows hosting multiple websites on a single server. Compared with setting up and configuring a separate server for each domain, hosting multiple websites on a single machine saves time and money.
The domains are isolated and independent, each having:
- A site document root directory
- A website security policy
- An SSL certificate
Guided exercise:
We need a subdomain for our website. In the following, we will use the subdomain wordpress.cours-datascientest.cloudns.ph. You will need to create your own for the rest of the course, in the format wordpress.yourdomain. You can do this from your cloudns.net dashboard.
To set this up for our practical case, we need to create an NGINX virtual host configuration file to serve WordPress over the Internet.
We first need to restore the /etc/nginx/nginx.conf file to its original state.
sudo rm -f /etc/nginx/nginx.conf # delete the nginx.conf file we created
sudo mv /etc/nginx/nginx.conf.backup /etc/nginx/nginx.conf # restore the original configuration from our backup
sudo nano /etc/nginx/conf.d/wordpress.conf # create a new file for the virtual host configuration
Let's add the following configuration, replacing the domain wordpress.cours-datascientest.cloudns.ph with yours:
server {
    listen 80;
    root /var/www/html/wordpress;
    index index.php index.html index.htm;
    server_name wordpress.cours-datascientest.cloudns.ph;
    client_max_body_size 500M;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
Let's save the file and then check the Nginx configuration:
sudo nginx -t
Output display:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Next, restart the Nginx and PHP-FPM services to apply the changes.
sudo systemctl restart nginx
sudo systemctl restart php7.4-fpm
We can also check the status of Nginx using the following command:
systemctl status nginx
Output display:
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2022-11-17 11:45:25 UTC; 31s ago
Docs: man:nginx(8)
Process: 82258 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 82259 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 82261 (nginx)
Tasks: 2 (limit: 4689)
Memory: 1.4M
CGroup: /system.slice/nginx.service
├─82261 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─82262 nginx: worker process
Nov 17 11:45:25 ip-172-31-24-148 systemd[1]: Starting A high performance web server and a reverse proxy server...
Nov 17 11:45:25 ip-172-31-24-148 systemd[1]: Started A high performance web server and a reverse proxy server.
g.7 - WordPress web installation
Now let's open our web browser and access the WordPress installation wizard using our URL. For this course, the URL will be http://wordpress.cours-datascientest.cloudns.ph. We will be redirected to the following page:

Let's select our language (French in our case) and click the Continue button. We should see the WordPress site configuration page, where we enter the various pieces of information requested. Please be sure to keep your password:

Let's specify our website name, admin username, password and email address, and click the Install WordPress button. Once WordPress is installed, we should see the following page:

Click the Login button. We should see the WordPress login page. Let's enter our admin username and password and click the Login button. This takes us to the WordPress home page.

g.8 - Enable HTTPS on WordPress with Let's Encrypt

Let's Encrypt is an organization that validates the identity of entities such as websites, email addresses, businesses or individuals, and binds them to cryptographic keys through the publication of electronic documents called digital certificates. Adopting the HTTPS protocol on the web is easier with the help of its certificates. Since its launch in late 2015, Let's Encrypt has become the world's largest certificate authority, with more currently valid certificates than all other browser-approved certificate authorities combined. As of January 2019, it had issued more than 538 million certificates for 223 million domain names.
To enable the HTTPS protocol on our site, we need to install the Certbot client from Let's Encrypt on our system. We can install it by running the following command:
sudo apt-get install python3-certbot-nginx -y
Once the Certbot client is installed, let's run the following command to enable HTTPS on our site:
sudo certbot --nginx -d wordpress.cours-datascientest.cloudns.ph
We are asked to provide a valid email address and agree to the terms of use as outlined below:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx
Enter email address (used for urgent renewal and security notices) (Enter 'c' to
cancel): fallewi@gmail.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.3-September-21-2022.pdf. You must
agree in order to register with the ACME server at
https://acme-v02.api.letsencrypt.org/directory
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(A)gree/(C)ancel: A
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let's Encrypt project and the non-profit
organization that develops Certbot? We'd like to send you email about our work
encrypting the web, EFF news, campaigns, and ways to support digital freedom.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: y
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for wordpress.cours-datascientest.cloudns.ph
Waiting for verification...
Cleaning up challenges
Deploying Certificate to VirtualHost /etc/nginx/conf.d/wordpress.conf
Next, we will choose to redirect HTTP traffic to HTTPS as below:
Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2
Redirecting all traffic on port 80 to ssl in /etc/nginx/conf.d/wordpress.conf
Select 2 to redirect http requests to https and complete the installation. You should see the following output:
- - - - - - - - - - - - - - - - - - - - - - - - - -
Congratulations! You have successfully enabled
https://wordpress.cours-datascientest.cloudns.ph
You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=wordpress.cours-datascientest.cloudns.ph
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/wordpress.cours-datascientest.cloudns.ph/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/wordpress.cours-datascientest.cloudns.ph/privkey.pem
Your cert will expire on 2023-02-15. To obtain a new or tweaked
version of this certificate in the future, simply run certbot again
with the "certonly" option. To non-interactively renew *all* of
your certificates, run "certbot renew"
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
We can log back into our WordPress instance to check that we are redirected to https://wordpress.cours-datascientest.cloudns.ph/wp-admin/. We can now see that access to our site is secure.

In this section we were able to deploy a secure WordPress site with free certificates provided by Let's Encrypt.
H - NGINX as a reverse proxy

h.1 - Presentation
A reverse proxy makes various servers and services appear as a single unit. It allows you to hide the presence of several separate servers behind the same name. When configured as a reverse proxy, NGINX sits between the client and one or more primary servers. The client sends requests to NGINX, and then NGINX forwards the request to the back-end.
Once the back-end server has finished processing the request, it sends the response to NGINX. In turn, NGINX sends the response back to the client.
During the entire process, the client has no idea who is actually processing the request. This seems like a difficult concept to grasp, but once we do it ourselves, we'll see how easy NGINX makes it.
h.2 - Benefits and features of reverse proxy
Benefits
- Improves security
- Load balancing
- Caching
- SSL encryption and more
Features of the Nginx reverse proxy
Load balancing - A reverse proxy server sits in front of several main servers and distributes client requests across them. This improves website speed and capacity utilization. If one server goes down, the load balancer redirects traffic to another server.
Security - A reverse proxy server provides an additional defense against security attacks by masking the identity of the primary servers.
Performance - A reverse proxy server can cache common content and compress inbound and outbound data, which will greatly improve the connection speed between the client and server.
h.3 - Configuration
Let's look at a very basic and impractical example of a reverse proxy. Let's add this content to the /etc/nginx/conf.d/reverseproxy.conf file on our Datascientest machine:
server {
    listen 80;
    listen [::]:80;
    server_name wordpress.cours-datascientest.cloudns.ph;

    location / {
        proxy_pass "http://192.168.10.11/";
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name serveurnginx.cours-datascientest.cloudns.ph;

    location / {
        proxy_pass "http://192.168.10.12/";
    }
}
The most important configuration step in an NGINX reverse proxy setup is the addition of a proxy_pass parameter that maps an incoming URL to a back-end server. The proxy_pass field is configured in the location section of any virtual host configuration file.
We can install Apache on the back-end machines to answer these proxied requests (Apache is free and open-source web server software widely used to host websites around the world; its official name is Apache HTTP Server, and it is maintained and developed by the Apache Software Foundation).
In our current configuration, if we go to the URL wordpress.cours-datascientest.cloudns.ph, the request will be forwarded to http://192.168.10.11/. Similarly, if we go to the URL serveurnginx.cours-datascientest.cloudns.ph, the request will be forwarded to http://192.168.10.12/.
To configure the proxy_pass parameter globally, we can modify the default file in NGINX's sites-available folder:
sudo nano /etc/nginx/sites-available/default
h.4 - Transmission of request headers
When Nginx transmits a request, it automatically sets two header fields in the proxied request, Host and Connection, and removes empty headers. Host is set to the $proxy_host variable and Connection is set to close.
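In other words, the default behavior is equivalent to writing the following explicitly (a sketch of the implicit defaults, reusing the back-end address from our earlier example; we do not need to add these lines ourselves):
location / {
    proxy_set_header Host $proxy_host; # default: the host taken from the proxy_pass directive
    proxy_set_header Connection close; # default: no keepalive connection to the back-end
    proxy_pass "http://192.168.10.11/";
}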
To adjust or set the headers for proxied connections, we can use the proxy_set_header directive, followed by the header value. A list of all available request headers and their allowed values can be found in the NGINX documentation. If we want to prevent a header from being passed to the server, we can set it to an empty string "".
We can change the value of the Host header field to $host and remove the Accept-Encoding header field by setting its value to an empty string.
location / {
    proxy_set_header Host $host;
    proxy_set_header Accept-Encoding "";
    proxy_pass "http://192.168.10.11/";
}
h.5 - Common Nginx reverse proxy options
Content delivery via HTTPS has become a standard these days. Below is an example of an Nginx reverse proxy configuration for such a setup, including recommended proxy settings and headers.
location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
}
- proxy_http_version 1.1 - Sets the HTTP protocol version for the proxy; by default it is set to 1.0. For WebSockets and keepalive connections, we must use version 1.1.
- proxy_cache_bypass $http_upgrade - Defines conditions under which the response will not be fetched from a cache.
- Upgrade $http_upgrade and Connection "upgrade" - These header fields are required if our application uses WebSockets.
- Host $host - The $host variable contains, in the following order of precedence: the hostname from the request line, or the hostname from the Host request header field, or the server name matching the request.
- X-Real-IP $remote_addr - Forwards the visitor's real remote IP address to the proxied server.
- X-Forwarded-For $proxy_add_x_forwarded_for - A list containing the IP addresses of every server the client has been proxied through.
- X-Forwarded-Proto $scheme - When used inside an HTTPS server block, each HTTP response from the proxied server is rewritten to HTTPS.
- X-Forwarded-Host $host - Sets the original host requested by the client.
- X-Forwarded-Port $server_port - Sets the original port requested by the client.
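For WebSocket applications specifically, the nginx documentation recommends deriving the Connection header from $http_upgrade with a map block, so that ordinary requests are not sent with Connection: "upgrade". A sketch, assuming a hypothetical application on 127.0.0.1:3000 (the map block lives at the http level, e.g. at the top of a conf.d file):
# choose the Connection header based on whether the client asked to upgrade
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    server_name ws.cours-datascientest.cloudns.ph; # hypothetical subdomain

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass http://127.0.0.1:3000;
    }
}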
I - NGINX as a load balancer

i.1 - Introduction
Given NGINX's reverse proxy design, it can be easily configured as a load balancer.
In a real-world scenario, load balancing may be required on large-scale projects spread across multiple servers. Load balancing is a great way to scale our application and increase its performance and redundancy. Nginx can be configured as a simple yet powerful load balancer to improve the availability and resource efficiency of your servers.
How does Nginx work? Nginx acts as a single point of entry to a distributed web application running on multiple separate servers. Beforehand, we must have at least two hosts with web server software installed and configured to take advantage of the load balancer's benefits.
i.2 - Configuring NGINX as a load balancer
Let's start configuring our Datascientest machine for load balancing. The google.fr and google.com websites will serve as the back-end servers (the servers to which requests arriving at the load balancer are redirected) in our configuration. Essentially, all we have to do is configure NGINX with instructions so that it knows what type of connections to listen for and where to redirect them. Let's create a new configuration file using the text editor of our choice:
sudo nano /etc/nginx/conf.d/load-balancer.conf
In the load-balancer.conf file, we need to define the following two segments: upstream backend and server.
# declaration of the group of servers to which requests received by the load balancer will be forwarded
upstream backend {
    server google.fr; # google.fr website address
    server google.com; # google.com website address
}

# declaration of the listening port and of the back-end server group to which requests arriving on that port will be forwarded
server {
    listen 80;
    server_name server-nginx.cours-datascientest.cloudns.ph;

    location / {
        proxy_pass http://backend;
    }
}
On Debian and Ubuntu systems, we will need to remove the default symbolic link from the sites-enabled folder:
sudo rm /etc/nginx/sites-enabled/default
Then use the following to restart NGINX.
sudo systemctl restart nginx
Let's verify that NGINX starts up correctly. If the restart fails, we'll need to take a look at the /etc/nginx/conf.d/load-balancer.conf file we just created to make sure there are no typos or missing semicolons.
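A quick way to catch such problems before restarting is nginx's built-in configuration check:
sudo nginx -t
If the syntax is valid, nginx reports that the configuration file test was successful; otherwise it points at the offending line.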
When we enter the load balancer's address, http://server-nginx.cours-datascientest.cloudns.ph, in our web browser, the load balancer takes turns serving us the pages of our google.fr and google.com websites.
i.3 - Load balancing methods
Load balancing with Nginx uses the round-robin algorithm by default if no other method is defined, as in the first case above. With the round-robin scheme, each server is selected in turn, in the order we defined in the load-balancer.conf file. This balances the number of requests equally, which suits short operations.
Load balancing based on least connections is another simple method. As the name implies, this method directs requests to the server with the fewest active connections at that time. This works more fairly than round-robin for applications where requests can sometimes take longer to complete.
To enable the least-connections balancing method, let's add the least_conn parameter to our upstream section, as shown in the example below.
upstream backend {
    least_conn;
    server google.fr; # google.fr website address
    server google.com; # google.com website address
}

server {
    listen 80;
    server_name server-nginx.cours-datascientest.cloudns.ph;

    location / {
        proxy_pass http://backend;
    }
}
The round-robin and least-connections algorithms are the two most commonly used. However, they cannot provide session persistence. If our web application requires users to be directed to the same back-end server they connected to previously, we should use the IP hashing method instead.
IP hashing uses the visitor's IP address as a key to determine which host should be selected to process the request. This allows visitors to be directed to the same server every time, provided the server is available and the visitor's IP address has not changed.
To use this method, let's add the ip_hash parameter to our upstream segment, as in the example below.
upstream backend {
    ip_hash;
    server google.fr; # google.fr website address
    server google.com; # google.com website address
}
In a server configuration where the resources available to the different hosts are not equal, it may be desirable to favor some servers over others. Setting server weights allows us to further refine load balancing with Nginx: the server with the highest weight in the load balancer is selected most often.
upstream backend {
    server google.fr weight=4; # google.fr website address
    server google.com weight=2; # google.com website address
}

server {
    listen 80;
    server_name server-nginx.cours-datascientest.cloudns.ph;

    location / {
        proxy_pass http://backend;
    }
}
In the above configuration, the first server is selected twice as often as the second: out of every six requests, four go to google.fr and two to google.com.
i.4 - Load balancing with HTTPS enabled
We will now enable HTTPS for our site; this is a great way to protect our visitors and their data. Using encryption with a load balancer is fairly easy. All we have to do is add another server section to our load balancer configuration file that listens for HTTPS traffic on port 443 with SSL, then configure a proxy_pass to our upstream segment as with HTTP in the previous example.
Let's open our configuration file again to modify it.
sudo nano /etc/nginx/conf.d/load-balancer.conf
Let's then add the following server segment to the end of the file.
server {
    listen 443 ssl;
    server_name server-nginx.cours-datascientest.cloudns.ph;

    ssl_certificate /etc/letsencrypt/live/domain_name/cert.pem; # path to the SSL/TLS certificate
    ssl_certificate_key /etc/letsencrypt/live/domain_name/privkey.pem; # path to the private key
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # TLS versions used

    location / {
        proxy_pass http://backend;
    }
}
Then let's save the file without forgetting to restart NGINX.
sudo systemctl restart nginx
Setting up encryption on our load balancer while using private network connections to our back-end offers some great benefits.
All connections to our load balancer are now served over an encrypted HTTPS connection. If we also redirect plain HTTP requests to HTTPS, visitors get a seamless transition to encryption, with nothing required on their side.
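A sketch of such a redirect: we would replace the proxy_pass in the plain-HTTP server segment of our load-balancer.conf with a permanent redirect to the HTTPS listener (the server name is the one from our earlier example):
server {
    listen 80;
    server_name server-nginx.cours-datascientest.cloudns.ph;

    # permanently redirect all plain-HTTP requests to HTTPS
    return 301 https://$host$request_uri;
}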
i.5 - Backend node health checks
In order to know which servers are available, Nginx's reverse proxy implementation includes passive server health checks. If a server does not respond to a request or responds with an error, Nginx will note that the server has failed and will try to avoid forwarding connections to it. This mechanism is called a health check.
The number of consecutive failed connection attempts within a certain period of time can be defined in the load balancer configuration file.
Let's set a max_fails parameter on the server lines. By default, when no max_fails is specified, this value is set to 1. Optionally, setting max_fails to 0 will disable health checks for that server.
If max_fails is set to a value greater than 1, subsequent failures must occur within a specific time period for the failures to be counted. This period is specified by the fail_timeout parameter, which also defines how long the server should be considered failed.
By default, fail_timeout is set to 10 seconds.
Once a server has been marked failed and the time set by fail_timeout has elapsed, Nginx will begin probing the server again. If the probes succeed, the server is marked active and included in the load balancing once more.
upstream backend {
    server google.fr weight=4; # google.fr website address
    server google.com weight=2 max_fails=3 fail_timeout=30s; # google.com website address
}

server {
    listen 80;
    server_name server-nginx.cours-datascientest.cloudns.ph;

    location / {
        proxy_pass http://backend;
    }
}
This mechanism allows us to adapt our server back-end to current demand by powering hosts on or off as needed. Starting additional servers during high traffic can easily increase the performance of our application, as the new resources automatically become available to our load balancer. This feature is called auto-scaling.
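On a smaller, manual scale, nginx's upstream module also lets us take servers in and out of rotation by hand. As a sketch, the down parameter removes a server from rotation, and backup marks a server that only receives traffic when the primary servers are unavailable (the backup host name here is hypothetical):
upstream backend {
    server google.fr weight=4;
    server google.com down;            # temporarily taken out of rotation
    server backup.example.com backup;  # hypothetical spare, used only if the others fail
}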
