{"id":25760,"date":"2022-12-02T10:00:00","date_gmt":"2022-12-02T18:00:00","guid":{"rendered":"https:\/\/coderpad.io\/?p=25760"},"modified":"2023-06-05T13:48:40","modified_gmt":"2023-06-05T20:48:40","slug":"how-to-configure-different-load-balancing-algorithms-on-nginx","status":"publish","type":"post","link":"https:\/\/coderpad.io\/blog\/development\/how-to-configure-different-load-balancing-algorithms-on-nginx\/","title":{"rendered":"How to Configure Different Load Balancing Algorithms on Nginx"},"content":{"rendered":"\n<p>Low latency, high uptime, and good performance are required in today&#8217;s world of keeping users engaged with your application.&nbsp;<\/p>\n\n\n\n<p>During times of high traffic, the overall performance of most web applications drops, the latency rises, and sometimes the request times out. This often happens when the server computing power is not enough to process the workload during this period of high traffic.<\/p>\n\n\n\n<p>As a prerequisite, you only need a good understanding of basic web terminology like HTTP, servers, requests etc. We\u2019ll start with learning about <a href=\"https:\/\/www.nginx.com\/\" target=\"_blank\" rel=\"noopener\">NGINX<\/a> as a load balancer and the different load-balancing algorithms. From there, you will learn how to configure the different algorithms to best fit your particular use case, and the pros and cons of load balancing.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What is load balancing?<\/h2>\n\n\n\n<p>Take this system setup, for example:<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/d2h1bfu6zrdxog.cloudfront.net\/wp-content\/uploads\/2022\/12\/img_63891905ad218.png\" alt=\"\"\/><figcaption class=\"wp-element-caption\">The server bottleneck.<\/figcaption><\/figure>\n\n\n\n<p>The system makes use of a single web server to process all web requests from the client. 
This single server can be overworked when it receives multiple concurrent requests beyond what it can process.<\/p>\n\n\n\n<p><strong>HTTP load balancing<\/strong> can be used to mitigate this. With HTTP load balancing, requests or workloads are distributed across multiple instances of a web server with the same or varying capacity profiles so that no single server is overworked. This optimizes resource utilization, provides fault tolerance, and improves the system&#8217;s overall performance.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/d2h1bfu6zrdxog.cloudfront.net\/wp-content\/uploads\/2022\/12\/img_638919074a163.png\" alt=\"\"\/><figcaption class=\"wp-element-caption\">A load balancer in action, acting as a proxy server that accepts requests from the client.<\/figcaption><\/figure>\n\n\n\n<p>The load balancer acts as a proxy server that accepts requests from clients. Each request is then distributed across the multiple servers in the fashion specified by the load balancing algorithm configured on the load balancer.\u00a0<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">NGINX as a load balancer<\/h2>\n\n\n\n<p>NGINX is software that can be used as a web server, reverse proxy, HTTP cache, mail proxy, and load balancer. It has been adopted by several of the busiest websites \u2014 like Adobe and WordPress \u2014 for fast request processing and response delivery.<\/p>\n\n\n\n<p>It is heavily used as a load balancer for high-traffic websites. If properly configured, it can serve <a href=\"https:\/\/en.wikipedia.org\/wiki\/C10k_problem\" target=\"_blank\" rel=\"noopener\">more than 10 thousand concurrent requests<\/a> with low memory usage.<\/p>\n\n\n\n<p>The behavior of NGINX depends on the <em>context<\/em> and <em>directives<\/em> specified in the Nginx configuration file. 
Depending on the mode of installation, this configuration file can be in any of the following directories:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><code>\/etc\/nginx\/nginx.conf<\/code><\/li>\n\n\n\n<li><code>\/usr\/local\/nginx\/conf\/nginx.conf<\/code><\/li>\n\n\n\n<li><code>\/usr\/local\/etc\/nginx\/nginx.conf<\/code><\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Contexts, directives, and blocks<\/h3>\n\n\n\n<p>The Nginx configuration file has a tree-like structure defined by a set of commands ( statements ) and braces ( <code>{ }<\/code> ). The statements are called <em>directives<\/em> which are either <em>block directives<\/em> or <em>simple directives<\/em>.<\/p>\n\n\n\n<p>The simple directives have a name and a list of space-separated parameters. They are terminated by a semicolon:<\/p>\n\n\n<pre class=\"wp-block-code\" aria-describedby=\"shcb-language-1\" data-shcb-language-name=\"Nginx\" data-shcb-language-slug=\"nginx\"><span><code class=\"hljs language-nginx shcb-wrap-lines\"><span class=\"hljs-attribute\">directive_name<\/span> parameter_1 parameter_n ;<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-1\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">Nginx<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">nginx<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n\n\n<p>The block directives end with a brace <code>{ }<\/code> instead of a semicolon, and they can contain inner directives.<\/p>\n\n\n<pre class=\"wp-block-code\" aria-describedby=\"shcb-language-2\" data-shcb-language-name=\"Nginx\" data-shcb-language-slug=\"nginx\"><span><code class=\"hljs language-nginx shcb-wrap-lines\"><span class=\"hljs-attribute\">directive_name<\/span> parameter_n{\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">inner_directive<\/span> parameters;\n}<\/code><\/span><small class=\"shcb-language\" 
id=\"shcb-language-2\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">Nginx<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">nginx<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n\n\n<p>These block directives with braces are called <code>context<\/code>. The inner directives are valid only within the context they are designed for.<\/p>\n\n\n\n<p>The following contexts are relatively important in the discourse of Nginx as a load balancer.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">The <em>Main c<\/em>ontext<\/h4>\n\n\n\n<p>The <em>main <\/em>context is a global context containing directives that affect the whole application. The directives defined in this context include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The number of worker processes<\/li>\n\n\n\n<li>The location of the log file<\/li>\n\n\n\n<li>The process ID<\/li>\n<\/ul>\n\n\n\n<p>Unlike other contexts, the main context doesn\u2019t define an explicit block using braces. All directives with a global scope are regarded to be in the main context.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">The <em>Events <\/em>context<\/h4>\n\n\n\n<p>Nginx uses an <a href=\"https:\/\/en.wikipedia.org\/wiki\/Nginx#cite_note-Welcome-21\" target=\"_blank\" rel=\"noopener\">asynchronous event-driven approach, rather than threads, to handle requests<\/a>. 
The <em>events <\/em>context contains the directives that define how Nginx processes requests.<\/p>\n\n\n\n<p>Some of the directives that are specified in this context include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The number of connections per worker process<\/li>\n\n\n\n<li>The <a href=\"http:\/\/nginx.org\/en\/docs\/events.html\" target=\"_blank\" rel=\"noopener\">connection processing<\/a> method to use<\/li>\n\n\n\n<li>Directive that decides whether a worker process will accept a new connection<\/li>\n<\/ul>\n\n\n\n<h4 class=\"wp-block-heading\">The <em>HTTP <\/em>context<\/h4>\n\n\n\n<p>This context contains inner context and directives that determine how Nginx handles HTTP and HTTPS connections. When Nginx is configured as a load balancer, this context contains most of the directives and inner contexts that allow Nginx to act as a load balancer. Some of the directives defined in this context include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The default content type<\/li>\n\n\n\n<li>The proxy headers<\/li>\n\n\n\n<li>The server and upstream inner context<\/li>\n<\/ul>\n\n\n\n<h4 class=\"wp-block-heading\">The <em>Server <\/em>context<\/h4>\n\n\n\n<p>This is an inner context in the HTTP context. It contains the directives for the virtual server that respond to a request. Some of the directives defined in this context include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The server name<\/li>\n\n\n\n<li>The server port to listen to<\/li>\n\n\n\n<li>The location inner context<\/li>\n<\/ul>\n\n\n\n<h4 class=\"wp-block-heading\">The <em>Location <\/em>context<\/h4>\n\n\n\n<p>This is an inner context in the <em>server context<\/em>. It defines how Nginx responds to HTTP\/HTTPS requests for a particular endpoint. 
You can specify custom headers, URL redirection, and request distribution to upstream servers.&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">The <em>Upstream <\/em>context<\/h4>\n\n\n\n<p>This is an inner context in the <em>HTTP context<\/em>. It defines a pool of servers that can be used for load balancing. When configured as a load balancer, Nginx accepts client requests and distributes them among the web servers specified in its <em>upstream context<\/em>. How requests are distributed among the upstream servers depends on the configured load-balancing algorithm.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Load balancing algorithms<\/h2>\n\n\n\n<p>A load balancing algorithm is a logical process configured on the load balancer that determines how it will distribute the client\u2019s requests among the upstream servers.<\/p>\n\n\n\n<p>Generally, load balancing algorithms can be classified into two types:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Static load balancing algorithms<\/li>\n\n\n\n<li>Dynamic load balancing algorithms<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Static load balancing algorithms<\/h3>\n\n\n\n<p>These algorithms<strong> do not<\/strong> take the current state of the servers \u2014 like the number of active connections, available resources, and computing power \u2014 into consideration while distributing the requests among the servers. They distribute the requests in a preset fashion.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Types of static load balancing algorithms<\/h4>\n\n\n\n<p>The following are the different types of static load balancing algorithms:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Round robin<\/li>\n\n\n\n<li>Weighted round robin<\/li>\n\n\n\n<li>IP hash<\/li>\n<\/ul>\n\n\n\n<p>1. 
<strong>Round robin<\/strong>: In this algorithm, the load balancer circularly distributes the load without any consideration for the processing capacity, number of active connections, or available resources of the servers. This is the default load-balancing algorithm of Nginx.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/d2h1bfu6zrdxog.cloudfront.net\/wp-content\/uploads\/2022\/12\/img_6389190815704.png\" alt=\"\"\/><figcaption class=\"wp-element-caption\">The load balancer circularly distributes the load from clients to a pool of servers. There are four servers, each assigned a client respectively.<\/figcaption><\/figure>\n\n\n\n<p>2.<strong> Weighted round robin<\/strong>: This algorithm is similar to the round robin. However, the administrator can assign weight to each server based on their chosen criteria. The loads are distributed while considering the weight assigned to each server in the pool of servers. This algorithm is suitable when the upstream servers have varying capacity profiles.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/d2h1bfu6zrdxog.cloudfront.net\/wp-content\/uploads\/2022\/12\/img_638919090e665.png\" alt=\"\"\/><figcaption class=\"wp-element-caption\">The load balancer distributes load based on the weight assigned to each server in the pool of servers. The first server is assigned a weight of 2, while the second and third servers are assigned a weight of one. This implies that the number of requests that will be distributed to the first server will be 2 times greater than those of the other two servers.<\/figcaption><\/figure>\n\n\n\n<p>3.<strong> IP hash<\/strong>: This algorithm hashes the IP address of the client sending the request with a hashing function and then sends the request to one of the servers for processing. 
Subsequent requests from the client\u2019s IP address are always sent to the same server.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Dynamic load balancing algorithms<\/h3>\n\n\n\n<p>Dynamic load balancing algorithms consider the state of the server \u2014 like available resources and the number of active connections \u2014 before distributing the client\u2019s request to the upstream servers. The server that will process the request is determined by the dynamic state of the servers.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Types of dynamic load balancing algorithms<\/h4>\n\n\n\n<p>Dynamic load balancing algorithms come in one of two types:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Least connection<\/li>\n\n\n\n<li>Least time<\/li>\n<\/ul>\n\n\n\n<p>1.<strong> Least connection<\/strong>: This algorithm distributes the client&#8217;s request to servers with the least active connections at a particular time. This will ensure that no one server is overworked while other servers have fewer active connections.<\/p>\n\n\n\n<p>2. <strong>Least time<\/strong>: This algorithm distributes requests to the servers based on the average response time of the servers and the number of active connections on the server. 
This load-balancing algorithm is only supported by <em>Nginx Plus<\/em>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Configurations<\/h2>\n\n\n\n<p>In this configuration exercise, we will:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Spin up three local servers running on ports 8000, 8001, and 8002<\/li>\n\n\n\n<li>Configure Nginx with each of the load-balancing algorithms<\/li>\n\n\n\n<li>Distribute requests to the servers using each algorithm we configure<\/li>\n\n\n\n<li>Spin up Nginx with Docker<\/li>\n<\/ul>\n\n\n\n<p>We will create the servers with Python\u2019s <a href=\"https:\/\/docs.python.org\/2\/library\/simplehttpserver.html\" target=\"_blank\" rel=\"noopener\">SimpleHTTPServer<\/a> library.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Creating the servers<\/h2>\n\n\n\n<p>When you launch a server with the <code>SimpleHTTPServer<\/code> library:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>It serves the <code>index.html<\/code> file in the directory, if one exists.<\/li>\n\n\n\n<li>If there is no <code>index.html<\/code> file, the server displays a listing of the current working directory.<\/li>\n<\/ul>\n\n\n\n<p>To show that the load balancing works, we will create three different <code>index.html<\/code> files with different contents indicating which server is serving the request. To do this, we will create different folders for these servers. 
Each server folder will include an `index.html` file.<\/p>\n\n\n\n<p>Create a directory and <code>cd<\/code> into it:<\/p>\n\n\n<pre class=\"wp-block-code\"><span><code class=\"hljs shcb-wrap-lines\">$ mkdir Nginx_Tuts\n\n$ cd Nginx_Tuts<\/code><\/span><\/pre>\n\n\n<p>Create three different folders in this directory:<\/p>\n\n\n<pre class=\"wp-block-code\"><span><code class=\"hljs shcb-wrap-lines\">$ mkdir server_1 server_2 server_3<\/code><\/span><\/pre>\n\n\n<p>Create an <code>index.html<\/code> file in each of these directories and add different contents in the html files.<\/p>\n\n\n\n<p>In <code>server_1<\/code> directory:<\/p>\n\n\n<pre class=\"wp-block-code\" aria-describedby=\"shcb-language-3\" data-shcb-language-name=\"HTML, XML\" data-shcb-language-slug=\"xml\"><span><code class=\"hljs language-xml shcb-wrap-lines\">$ cd server_1\n\n$ echo \u201c<span class=\"hljs-tag\">&lt;<span class=\"hljs-name\">h1<\/span>&gt;<\/span> Served with Server 1 <span class=\"hljs-tag\">&lt;\/<span class=\"hljs-name\">h1<\/span>&gt;<\/span>\u201d &gt;&gt; index.html<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-3\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">HTML, XML<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">xml<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n\n\n<p>In the second directory:<\/p>\n\n\n<pre class=\"wp-block-code\" aria-describedby=\"shcb-language-4\" data-shcb-language-name=\"HTML, XML\" data-shcb-language-slug=\"xml\"><span><code class=\"hljs language-xml shcb-wrap-lines\">$ cd server_2\n\n$ echo \u201c<span class=\"hljs-tag\">&lt;<span class=\"hljs-name\">h1<\/span>&gt;<\/span> Served with Server 2 <span class=\"hljs-tag\">&lt;\/<span class=\"hljs-name\">h1<\/span>&gt;<\/span>\u201d &gt;&gt; index.html<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-4\"><span class=\"shcb-language__label\">Code 
language:<\/span> <span class=\"shcb-language__name\">HTML, XML<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">xml<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n\n\n<p>In the third directory:<\/p>\n\n\n<pre class=\"wp-block-code\" aria-describedby=\"shcb-language-5\" data-shcb-language-name=\"HTML, XML\" data-shcb-language-slug=\"xml\"><span><code class=\"hljs language-xml shcb-wrap-lines\">$ cd server_3\n\n$ echo \u201c<span class=\"hljs-tag\">&lt;<span class=\"hljs-name\">h1<\/span>&gt;<\/span> Served with Server 3 <span class=\"hljs-tag\">&lt;\/<span class=\"hljs-name\">h1<\/span>&gt;<\/span>\u201d &gt;&gt; index.html<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-5\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">HTML, XML<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">xml<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n\n\n<p>You should have a file structure as shown below:<\/p>\n\n\n<pre class=\"wp-block-code\" aria-describedby=\"shcb-language-6\" data-shcb-language-name=\"CSS\" data-shcb-language-slug=\"css\"><span><code class=\"hljs language-css shcb-wrap-lines\"><span class=\"hljs-selector-tag\">Nginx_Tuts<\/span>\n\n\u251c\u2500\u2500 <span class=\"hljs-selector-tag\">server_1<\/span>\n\n\u2502 \u00a0 \u2514\u2500\u2500 <span class=\"hljs-selector-tag\">index<\/span><span class=\"hljs-selector-class\">.html<\/span>\n\n\u251c\u2500\u2500 <span class=\"hljs-selector-tag\">server_2<\/span>\n\n\u2502 \u00a0 \u2514\u2500\u2500 <span class=\"hljs-selector-tag\">index<\/span><span class=\"hljs-selector-class\">.html<\/span>\n\n\u2514\u2500\u2500 <span class=\"hljs-selector-tag\">server_3<\/span>\n\n\u00a0\u00a0\u00a0\u00a0\u2514\u2500\u2500 <span class=\"hljs-selector-tag\">index<\/span><span class=\"hljs-selector-class\">.html<\/span><\/code><\/span><small 
class=\"shcb-language\" id=\"shcb-language-6\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">CSS<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">css<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n\n\n<p><code>cd<\/code> into each of these directories and start up the server on different ports:<\/p>\n\n\n<pre class=\"wp-block-code\"><span><code class=\"hljs shcb-wrap-lines\">$ cd server_1\n\n$ python -m SimpleHTTPServer 8000<\/code><\/span><\/pre>\n\n\n<p>You will get the following output in the command line:<\/p>\n\n\n<pre class=\"wp-block-code\" aria-describedby=\"shcb-language-7\" data-shcb-language-name=\"CSS\" data-shcb-language-slug=\"css\"><span><code class=\"hljs language-css shcb-wrap-lines\"><span class=\"hljs-selector-tag\">Serving<\/span> <span class=\"hljs-selector-tag\">HTTP<\/span> <span class=\"hljs-selector-tag\">on<\/span> 0<span class=\"hljs-selector-class\">.0<\/span><span class=\"hljs-selector-class\">.0<\/span><span class=\"hljs-selector-class\">.0<\/span> <span class=\"hljs-selector-tag\">port<\/span> 8000 ...<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-7\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">CSS<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">css<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n\n\n<p>Do the same for the other two servers, using ports 8001 and 8002. Note that <code>SimpleHTTPServer<\/code> is a Python 2 module; on Python 3, the equivalent command is <code>python3 -m http.server 8000<\/code>.<\/p>\n\n\n\n<p>We have successfully spun up multiple local servers!<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Static load balancing algorithms<\/h2>\n\n\n\n<p>We will create the <code>nginx.conf<\/code> file for each algorithm from scratch and only specify the contexts and directives we need in the configuration file. 
We will build an Nginx docker image with the configuration file we created.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Round-robin configuration<\/h3>\n\n\n\n<p>As discussed earlier, The default Nginx load balancing algorithm is <em>round robin<\/em> and this algorithm distributes requests to the upstream servers in a circular fashion.<\/p>\n\n\n\n<p>Define the <code>http<\/code> and <code>events<\/code> context:<\/p>\n\n\n<pre class=\"wp-block-code\" aria-describedby=\"shcb-language-8\" data-shcb-language-name=\"Nginx\" data-shcb-language-slug=\"nginx\"><span><code class=\"hljs language-nginx shcb-wrap-lines\"><span class=\"hljs-section\">http<\/span> {\n\n}\n\n<span class=\"hljs-section\">events<\/span> {\n\n}<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-8\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">Nginx<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">nginx<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n\n\n<p>Define a <code>server<\/code> inner context in the <code>http<\/code> context and specify the port <code>8080<\/code> that Nginx will listen to:\u00a0<\/p>\n\n\n<pre class=\"wp-block-code\" aria-describedby=\"shcb-language-9\" data-shcb-language-name=\"Nginx\" data-shcb-language-slug=\"nginx\"><span><code class=\"hljs language-nginx shcb-wrap-lines\">http{\n\n\u00a0\u00a0\u00a0<span class=\"hljs-section\">server<\/span> {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">listen<\/span> <span class=\"hljs-number\">8080<\/span>;\n\n\u00a0\u00a0\u00a0}\n\n}\n\n<span class=\"hljs-section\">events<\/span> {\n\n}<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-9\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">Nginx<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">nginx<\/span><span 
class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n\n\n<p>Add an <em>upstream<\/em> context in the <code>http<\/code> context that specifies the list of servers that we created earlier.<\/p>\n\n\n\n<p>Name the upstream servers as <code>ourservers<\/code> so that we can identify this pool of servers with that name:<\/p>\n\n\n<pre class=\"wp-block-code\" aria-describedby=\"shcb-language-10\" data-shcb-language-name=\"Nginx\" data-shcb-language-slug=\"nginx\"><span><code class=\"hljs language-nginx shcb-wrap-lines\"><span class=\"hljs-section\">http<\/span> {\n\n\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">upstream<\/span> ourservers {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8000<\/span>;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8001<\/span>;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8002<\/span>;\n\n\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0<span class=\"hljs-section\">server<\/span> {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">listen<\/span> <span class=\"hljs-number\">8080<\/span>;\n\n\u00a0\u00a0\u00a0}\n\n}\n\n<span class=\"hljs-section\">events<\/span> {\n\n}<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-10\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">Nginx<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">nginx<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n\n\n<p>Then:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Add a <em>location<\/em> context in the server block that will process all requests sent to the base route <code>\/<\/code><\/li>\n\n\n\n<li>Add a <a 
href=\"http:\/\/nginx.org\/en\/docs\/http\/ngx_http_proxy_module.html#proxy_pass\" target=\"_blank\" rel=\"noopener\">proxy_pass<\/a> directive<\/li>\n\n\n\n<li>The <em>proxy_pass<\/em> directive will resolve and distribute requests sent to the base route location to the pool of upstream servers we added earlier.<\/li>\n<\/ul>\n\n\n<pre class=\"wp-block-code\" aria-describedby=\"shcb-language-11\" data-shcb-language-name=\"Nginx\" data-shcb-language-slug=\"nginx\"><span><code class=\"hljs language-nginx shcb-wrap-lines\"><span class=\"hljs-section\">http<\/span> {\n\n\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">upstream<\/span> ourservers {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8000<\/span>;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8001<\/span>;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8002<\/span>;\n\n\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0server {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">listen<\/span> <span class=\"hljs-number\">8080<\/span>;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">location<\/span> \/ {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">proxy_pass<\/span> http:\/\/ourservers\/;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0}\n\n}\n\n<span class=\"hljs-section\">events<\/span> {\n\n}<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-11\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">Nginx<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">nginx<\/span><span 
class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n\n\n<p>With this configuration, all the requests sent to the base route <code>\/<\/code> on\u00a0 <code>localhost<\/code> port <code>8080<\/code> will be proxied and passed to the server groups <code>ourservers<\/code> where the requests will be distributed in a round robin fashion among the servers that we specified in the upstream block.<\/p>\n\n\n\n<p>We don&#8217;t need to specify any directive in the <em>event<\/em> context for our case, however, the context must still be declared.&nbsp;<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Building the Nginx Docker image<\/h4>\n\n\n\n<p>Create a Dockerfile in the <code>Nginx_Tuts<\/code> directory and add the following:<\/p>\n\n\n<pre class=\"wp-block-code\"><span><code class=\"hljs shcb-wrap-lines\">FROM nginx:alpine\n\nCOPY nginx.conf \/etc\/nginx\/nginx.conf<\/code><\/span><\/pre>\n\n\n<p>This will:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Pull an <a href=\"https:\/\/hub.docker.com\/_\/nginx\" target=\"_blank\" rel=\"noopener\">alpine nginx docker image<\/a> from the docker hub.<\/li>\n\n\n\n<li>Replace the configuration in the <code>\/etc\/nginx\/nginx.conf<\/code> with the <code>nginx.conf<\/code> file we created<\/li>\n<\/ol>\n\n\n\n<p>Build a docker image from this docker file:<\/p>\n\n\n<pre class=\"wp-block-code\"><span><code class=\"hljs shcb-wrap-lines\">$ docker build -t loadbalancer .<\/code><\/span><\/pre>\n\n\n<p>You should get the following output after a successful build:<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/d2h1bfu6zrdxog.cloudfront.net\/wp-content\/uploads\/2022\/12\/img_6389190a2d335.jpg\" alt=\"\"\/><figcaption class=\"wp-element-caption\">The <code>loadBalancer<\/code> image is built with the docker command <code>docker build -t loadbalancer .<\/code><\/figcaption><\/figure>\n\n\n\n<p>Run the Nginx container from the Docker image we just built:<\/p>\n\n\n<pre 
class=\"wp-block-code\"><span><code class=\"hljs shcb-wrap-lines\">$ docker run --net=host loadbalancer<\/code><\/span><\/pre>\n\n\n<p>The <a href=\"https:\/\/docs.docker.com\/network\/host\/\" target=\"_blank\" rel=\"noopener\"><code>--net=host<\/code><\/a> argument makes the container\u2019s application available on port 8080 on the host\u2019s IP address.\u00a0<\/p>\n\n\n\n<p>You should get the following output from the command above:<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/d2h1bfu6zrdxog.cloudfront.net\/wp-content\/uploads\/2022\/12\/img_6389190aaafe8.png\" alt=\"\"\/><figcaption class=\"wp-element-caption\">The <code>loadbalancer<\/code> image is run using the docker run command: <code>docker run --net=host loadbalancer<\/code>. The container is spun with a <code>--net=host<\/code> argument to make the container&#8217;s application available on port 8080 on the host&#8217;s IP address.<\/figcaption><\/figure>\n\n\n\n<p>Open your browser and send a request to <a href=\"http:\/\/127.0.0.1:8080\"><code>http:\/\/127.0.0.1:8080<\/code><\/a>, and you should get the following output:<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video autoplay controls src=\"https:\/\/user-images.githubusercontent.com\/64500446\/199350654-4d352600-6f62-4542-89c0-3a361ca74281.webm\"><\/video><figcaption class=\"wp-element-caption\">Servers are picked circularly in a round-robin manner.<\/figcaption><\/figure>\n\n\n\n<p>As the video demonstration above shows, the servers are picked circularly.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Weighted round-robin<\/h3>\n\n\n\n<p>Weights can be assigned to the different servers that are configured in the upstream block directive, as discussed earlier. 
To configure this, we will simply assign a weight value to each of the servers in the pool of servers we had specified earlier:<\/p>\n\n\n<pre class=\"wp-block-code\" aria-describedby=\"shcb-language-12\" data-shcb-language-name=\"Nginx\" data-shcb-language-slug=\"nginx\"><span><code class=\"hljs language-nginx shcb-wrap-lines\">http{\n\n\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">upstream<\/span> ourservers {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8000<\/span> weight=<span class=\"hljs-number\">4<\/span>;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8001<\/span> weight=<span class=\"hljs-number\">2<\/span>;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8002<\/span> weight=<span class=\"hljs-number\">1<\/span>;\n\n\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0<span class=\"hljs-section\">server<\/span> {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">listen<\/span> <span class=\"hljs-number\">8080<\/span>;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">location<\/span> \/ {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">proxy_pass<\/span> http:\/\/ourservers\/;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0}\n\n}\n\n<span class=\"hljs-section\">events<\/span> {\n\n}<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-12\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">Nginx<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">nginx<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n\n\n<p>With this configuration, the 
first server will receive four times as many requests as the third server and twice as many as the second server; the second server, in turn, will receive twice as many requests as the third server.<\/p>\n\n\n\n<p>Next, we will create a new Dockerfile and build a new Nginx image with this configuration file.<\/p>\n\n\n\n<p>In the Dockerfile:<\/p>\n\n\n<pre class=\"wp-block-code\"><span><code class=\"hljs shcb-wrap-lines\">FROM nginx:alpine\n\nCOPY weighted-rr-nginx.conf \/etc\/nginx\/nginx.conf<\/code><\/span><\/pre>\n\n\n<p>Build an Nginx image from the Dockerfile:<\/p>\n\n\n<pre class=\"wp-block-code\"><span><code class=\"hljs shcb-wrap-lines\">$ docker build -t wrr-loadbalancer .<\/code><\/span><\/pre>\n\n\n<p>You should get the following output:<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/d2h1bfu6zrdxog.cloudfront.net\/wp-content\/uploads\/2022\/12\/img_6389190b33e28.png\" alt=\"\"\/><figcaption class=\"wp-element-caption\">A new <code>nginx<\/code> image is built from the Dockerfile with the command <code>docker build -t wrr-loadbalancer . 
<\/code><\/figcaption><\/figure>\n\n\n\n<p>Run a new container from the new Docker image you just created:<\/p>\n\n\n<pre class=\"wp-block-code\"><span><code class=\"hljs shcb-wrap-lines\">$ docker run --net=host wrr-loadbalancer<\/code><\/span><\/pre>\n\n\n<p>You should get the following output:<\/p>\n\n\n\n<figure class=\"wp-block-image is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/d2h1bfu6zrdxog.cloudfront.net\/wp-content\/uploads\/2022\/12\/img_6389190b7d3a1.png\" alt=\"\" width=\"840\" height=\"103\"\/><figcaption class=\"wp-element-caption\">The <code>wrr-loadbalancer<\/code> image is run using the docker run command: <code>docker run --net=host wrr-loadbalancer<\/code> The container is run with the <code>--net=host<\/code> argument to make the container&#8217;s application available on port 8080 on the host&#8217;s IP address.<\/figcaption><\/figure>\n\n\n\n<p>Open your browser and send a request to <code>http:\/\/127.0.0.1:8080<\/code>:<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video autoplay controls src=\"https:\/\/user-images.githubusercontent.com\/64500446\/199351433-6d9ca410-75a9-4990-b522-6de1e7eb14a6.webm\"><\/video><figcaption class=\"wp-element-caption\">The requests are distributed to the servers based on the weight assigned to each server.<\/figcaption><\/figure>\n\n\n\n<p>As you will notice in the video above, the requests are distributed to the servers based on the weight assigned to each server.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">IP-hash configuration<\/h3>\n\n\n\n<p>This algorithm hashes the IP address of the client and makes sure every request from this client is served by the same server. 
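<p>The mechanism can be sketched in a few lines of Python. This sketch uses MD5 over the full address purely for illustration; NGINX&#8217;s real ip_hash implementation uses its own hash function and, for IPv4, keys on the first three octets of the client address:<\/p>

```python
import hashlib

# Hypothetical server pool matching the upstream block.
servers = ["localhost:8000", "localhost:8001", "localhost:8002"]

def pick_server(client_ip: str) -> str:
    # Hash the client address and map the digest onto the pool;
    # the same address therefore always lands on the same server.
    digest = hashlib.md5(client_ip.encode()).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]

# Repeated requests from one client are routed to one server.
first = pick_server("203.0.113.7")
second = pick_server("203.0.113.7")
```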
When this server is unavailable, the request from this client will be served by another server.<\/p>\n\n\n\n<p>To configure this, add an <code>ip_hash<\/code> directive in the upstream context:<\/p>\n\n\n<pre class=\"wp-block-code\" aria-describedby=\"shcb-language-13\" data-shcb-language-name=\"Nginx\" data-shcb-language-slug=\"nginx\"><span><code class=\"hljs language-nginx shcb-wrap-lines\">http{\n\n\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">upstream<\/span> ourservers {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0ip_hash;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8000<\/span>;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8001<\/span>;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8002<\/span>;\n\n\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0<span class=\"hljs-section\">server<\/span> {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">listen<\/span> <span class=\"hljs-number\">8080<\/span>;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">location<\/span> \/ {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">proxy_pass<\/span> http:\/\/ourservers\/;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0}\u00a0\n\n}\n\n<span class=\"hljs-section\">events<\/span> {\n\n}<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-13\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">Nginx<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">nginx<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n\n\n<p>You can build and run a new Nginx image 
with this configuration.<\/p>\n\n\n\n<p>When we navigate to <code>127.0.0.1:8080<\/code> in the browser, the same server keeps serving the requests from our IP address.<\/p>\n\n\n\n<p>This is demonstrated in the video below:<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video controls src=\"https:\/\/user-images.githubusercontent.com\/64500446\/199352502-44df42e6-9bec-4437-8190-caf34ed235e4.webm\"><\/video><figcaption class=\"wp-element-caption\">The same server continues to respond to client requests from a defined IP address.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Dynamic load balancing algorithms<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Least connection configuration<\/h3>\n\n\n\n<p>In the <em>least connection<\/em> algorithm, the load balancer sends the client&#8217;s request to the server with the least number of active connections.<\/p>\n\n\n\n<p>This can be configured by specifying the <code>least_conn<\/code> directive in the upstream context:<\/p>\n\n\n<pre class=\"wp-block-code\" aria-describedby=\"shcb-language-14\" data-shcb-language-name=\"Nginx\" data-shcb-language-slug=\"nginx\"><span><code class=\"hljs language-nginx shcb-wrap-lines\">http{\n\n\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">upstream<\/span> ourservers {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0least_conn;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8000<\/span>;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8001<\/span>;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8002<\/span>;\n\n\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0<span class=\"hljs-section\">server<\/span> {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">listen<\/span> <span 
class=\"hljs-number\">8080<\/span>;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">location<\/span> \/ {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">proxy_pass<\/span> http:\/\/ourservers\/;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0}\n\n}\n\n<span class=\"hljs-section\">events<\/span> {\n\n}<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-14\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">Nginx<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">nginx<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n\n\n<p>You can build and run a new Nginx image with this configuration.<\/p>\n\n\n\n<p>The video illustration is shown below:<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video autoplay controls src=\"https:\/\/user-images.githubusercontent.com\/64500446\/199352585-32668481-87d5-42f8-a04c-0522c6801821.webm\"><\/video><figcaption class=\"wp-element-caption\">The load balancer sends the client&#8217;s request to the server with the least number of active connections.<\/figcaption><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">Least time configuration<\/h3>\n\n\n\n<p>With this configuration, Nginx distributes requests to the servers based on the average response time as well as the number of active connections. In addition, different weights can be assigned to these servers depending on the capacity profile of the servers. 
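<p>As a rough illustration of how these criteria combine, here is a hypothetical Python scoring sketch (this is not NGINX&#8217;s internal formula): a lower average response time, fewer active connections, and a higher weight all make a server more likely to be picked.<\/p>

```python
# Hypothetical per-server stats: (average response time in ms,
# active connections, configured weight).
stats = {
    "localhost:8000": (120.0, 3, 4),
    "localhost:8001": (80.0, 5, 2),
    "localhost:8002": (200.0, 1, 1),
}

def score(avg_ms: float, conns: int, weight: int) -> float:
    # Lower score wins: slow responses and many active connections
    # raise the score, while a larger weight discounts it.
    return avg_ms * (conns + 1) / weight

# Pick the server with the best (lowest) combined score.
best = min(stats, key=lambda name: score(*stats[name]))
```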
If a weight is assigned to each server, the weight parameter will be considered alongside the average response time and the number of active connections.<\/p>\n\n\n\n<p>This algorithm can be configured by adding a <code>least_time<\/code> directive to the <em>upstream<\/em> context. Note that <code>least_time<\/code> is available only in the commercial NGINX Plus distribution.<\/p>\n\n\n\n<p>The average response time of the servers is based on either the time to receive the response header or the time to receive the full response body. This is controlled by the <code>header<\/code> and <code>last_byte<\/code> parameters in the <code>least_time<\/code> directive. There is also an optional <code>inflight<\/code> parameter; when it is specified, incomplete (in-flight) requests are taken into account as well.<\/p>\n\n\n\n<p>The configuration below uses <code>least_time last_byte<\/code>, so the average response time is measured up to receipt of the full response body.&nbsp;<\/p>\n\n\n<pre class=\"wp-block-code\" aria-describedby=\"shcb-language-15\" data-shcb-language-name=\"Nginx\" data-shcb-language-slug=\"nginx\"><span><code class=\"hljs language-nginx shcb-wrap-lines\">http{\n\n\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">upstream<\/span> ourservers {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">least_time<\/span> last_byte;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8000<\/span>;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8001<\/span>;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">server<\/span> localhost:<span class=\"hljs-number\">8002<\/span>;\n\n\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0<span class=\"hljs-section\">server<\/span> {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">listen<\/span> <span 
class=\"hljs-number\">8080<\/span>;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">location<\/span> \/ {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0<span class=\"hljs-attribute\">proxy_pass<\/span> http:\/\/ourservers\/;\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0}\n\n}\n\n<span class=\"hljs-section\">events<\/span> {\n\n}<\/code><\/span><small class=\"shcb-language\" id=\"shcb-language-15\"><span class=\"shcb-language__label\">Code language:<\/span> <span class=\"shcb-language__name\">Nginx<\/span> <span class=\"shcb-language__paren\">(<\/span><span class=\"shcb-language__slug\">nginx<\/span><span class=\"shcb-language__paren\">)<\/span><\/small><\/pre>\n\n\n<p>The video demonstration below assumes all servers have the same number of active connections:<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video autoplay controls src=\"https:\/\/user-images.githubusercontent.com\/64500446\/199364470-606ed16a-0c3e-4090-bd81-2b23c6fbe716.webm\"><\/video><figcaption class=\"wp-element-caption\">The load balancer distributes requests to the servers based on the average response time as well as the number of active connections.<\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Pros and cons of load balancing<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Pros<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Load balancing provides the performance needed by a high-traffic website. It does this by distributing the load evenly among the pool of servers.<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>It also ensures high availability and fault tolerance in the system. The application can continue working normally in the event of failure of one of the upstream servers. 
Stand-by backup servers can be configured to replace failed servers.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Cons<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The load balancer can introduce a single point of failure to the system since it controls the client&#8217;s request delivery to the server. If it fails, it can bring down the whole system.<\/li>\n<\/ul>\n\n\n\n<ul class=\"wp-block-list\">\n<li>It is not guaranteed that a single server will process the client&#8217;s request every time. This could lead to a loss of session. Additional configuration is required to maintain a <a href=\"https:\/\/www.nginx.com\/resources\/glossary\/session-persistence\/\" target=\"_blank\" rel=\"noopener\">persistent session<\/a> between the client and server. Some of the load balancing algorithms \u2014 like IP Hash \u2014 solve this by ensuring that a particular server always processes a client&#8217;s request.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>In this tutorial, we discussed:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Nginx as a load balancer<\/li>\n\n\n\n<li>The static and dynamic load-balancing algorithms<\/li>\n\n\n\n<li>The Round Robin, Weighted Round Robin, and the IP Hash load balancing algorithms from the static type<\/li>\n\n\n\n<li>The Least Connection and Least Time load-balancing algorithms from the dynamic type<\/li>\n\n\n\n<li>How to configure each of these algorithms, with video illustrations<\/li>\n\n\n\n<li>Some of the pros and cons of load balancing<\/li>\n<\/ul>\n\n\n\n<p>The configuration and files used in this tutorial can be found in this <a href=\"https:\/\/github.com\/DrAnonymousNet\/Nginx-Configurations\" target=\"_blank\" rel=\"noopener\">GitHub repository<\/a>.<\/p>\n\n\n\n<p><em>Ahmad is a software developer and technical writer focusing on backend technologies. He has an interest in optimization and scalability techniques. 
When he is not writing software, he is writing about how to build them. You can reach out to him on <a href=\"https:\/\/www.linkedin.com\/in\/mustapha-ahmad-a2a497163\" target=\"_blank\" rel=\"noopener\">LinkedIn<\/a>.\u00a0<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>During times of high traffic, the overall performance of most web applications drops, the latency rises, and sometimes the request times out. This often happens when the server&#8217;s computing power is insufficient to process the workload during high traffic. This article teaches how to keep your server&#8217;s uptime high and maintain good performance using load-balancing algorithms.<\/p>\n","protected":false},"author":1,"featured_media":26319,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[9],"tags":[],"persona":[29],"blog-programming-language":[37],"keyword-cluster":[],"class_list":["post-25760","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-development"],"acf":[],"_links":{"self":[{"href":"https:\/\/coderpad.io\/wp-json\/wp\/v2\/posts\/25760","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/coderpad.io\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/coderpad.io\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/coderpad.io\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/coderpad.io\/wp-json\/wp\/v2\/comments?post=25760"}],"version-history":[{"count":81,"href":"https:\/\/coderpad.io\/wp-json\/wp\/v2\/posts\/25760\/revisions"}],"predecessor-version":[{"id":26435,"href":"https:\/\/coderpad.io\/wp-json\/wp\/v2\/posts\/25760\/revisions\/26435"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/coderpad.io\/wp-json\/wp\/v2\/media\/26319"}],"wp:attachment":[{"href":"https:\/\/coderpad.io\/wp-json\/wp\/v2\/media?parent=25760"}],"wp:term":[{"taxonomy"
:"category","embeddable":true,"href":"https:\/\/coderpad.io\/wp-json\/wp\/v2\/categories?post=25760"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/coderpad.io\/wp-json\/wp\/v2\/tags?post=25760"},{"taxonomy":"persona","embeddable":true,"href":"https:\/\/coderpad.io\/wp-json\/wp\/v2\/persona?post=25760"},{"taxonomy":"blog-programming-language","embeddable":true,"href":"https:\/\/coderpad.io\/wp-json\/wp\/v2\/blog-programming-language?post=25760"},{"taxonomy":"keyword-cluster","embeddable":true,"href":"https:\/\/coderpad.io\/wp-json\/wp\/v2\/keyword-cluster?post=25760"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}