The Comfortable Advantages of the Hosting Control Panel

[Image: Plesk hosting control panel]

Let’s be honest. We all have enough to think about without constant security worries, micro-managing customers, and recovering lost versions of work.

But as a web admin, a lot of time and effort goes into these repetitive tasks every single day. Decades of complex back-ends have left end customers dependent on expert coverage for their sites, leaving necessary but mundane responsibilities to the IT crowd. 

Well, web admins and developers rejoice! The web and server landscape is rapidly transforming to allow automated management. 

Let’s say you need multiple new websites, and they have to be secured. Let’s say you need them backed up regularly, maybe when you’re not even at your desk. Some old sites need to be migrated and your SEO rank is tanking. That’s a lot to handle, right? 

Enter: The Control Panel. Web and server management, taken care of in a few clicks. *Breathe a sigh of relief*. 

So what can you expect from Plesk Hosting Control Panel? Read on.



What you need is a functioning system. But what you want is simplicity. 

With a single interface that organises multiple sites, giving you clear updates on security, back-ups and running efficiency, you can have both. Instead of juggling tools and manually monitoring each and every site’s health, the control panel takes care of complex tasks in moments.

Features like one-click deployment, mail server support, and automated health alerts allow quick streamlining of multiple tasks. On potentially unlimited domains, wherever you are.

[Image: Plesk service provider view]

Multiple customer management. Easy monitoring. Backup and security tasks lined up neatly. You’re welcome.


Security is the #1 priority of any online business. Without it, customers won’t even be able to access your page. For that reason, web admins and hosters spend sleepless nights ensuring that their servers and customers are protected.

With a web manager, you can sleep easy. Besides necessities like SSL certificates, managed packages bundle firewalls, encryption, antivirus and more, holding the fort for you and your customers.

And guess what? Even your data is backed up and protected in the event of downtime or tricky updates, with a helpful advisor that keeps an eye on the status of your server at all times.


[Image: Plesk Advisor in the control panel]


Unboxing any new feature for your site, or indeed setting up the server and domain for the site itself, can be a time- and soul-consuming task. However, web managers – or control panels – come with a catalog of extensions that can be installed and launched in minutes, and certain features can become active in one. Single. Click.


Not to mention the inbuilt performance monitor, which shows how fast your site is running and flags any issues. So it keeps up the speed for your website visitors too.

Want to automate your speed optimization? The Speed Kit is just a click away.


[Image: Speed Kit extension in Plesk]


So you’re able to work fast, in total security, with ease. And it’s available on desktop and mobile for full control on-the-go. Just in case you needed further convincing. 

What else could you want?

Well there’s even more on offer if you leave traditional single-track hosting and web management behind:


[Image: Plesk extensions catalog]

🚀 SEO Toolkit to put your website on the map,

🚀 G Suite integration because, well, who doesn’t use G Suite, 

🚀 Ultra-streamlined Premium Email with calendar and task lists,

and many more practical tools for WordPress, file and server management, all in one place. In fact, enjoy 100+ add-ons besides what the Plesk interface already has built in. Just, you know, for luck.


The future of web management has never been so clear. Customers are no longer content with sourcing each element of the online process in separate locations. They need just one, managed, safe place to work on what they love. 

Conclusion? Hosting control panels are good for the soul.

Ready to take the leap? See options for building and managing your web presence here.



LiteSpeed vs NGINX


In a series of benchmarking tests, LiteSpeed’s HTTP/3 implementation demonstrated higher performance than that of the hybrid NGINX/Quiche.

Overall, LiteSpeed transferred resources faster, scaled more strongly, and used less CPU and memory in the process, beating NGINX in all of these metrics by a factor of two or more. NGINX was unable to complete certain tests, and in others it achieved just below TCP speed.

Below, we’ll delve deep into the setup process and the range of benchmarks used.

  1. LiteSpeed vs NGINX: Why it Matters
  2. What is Cloudflare’s Patch for Quiche?
  3. Benchmarking Setup
  4. NGINX vs LiteSpeed: Benchmark Testing
  5. LiteSpeed Found to be More Impressive than NGINX

LiteSpeed vs NGINX HTTP/3: Why it Matters

HTTP/3 is a new web protocol, succeeding Google QUIC and HTTP/2. With the IETF QUIC Working Group approaching a finalized version of the drafts, it’s clear that the nascent HTTP/3 implementations are becoming more mature and some are beginning to see production use.

LiteSpeed was the first to ship HTTP/3 support in LiteSpeed Web Server, which has supported QUIC since 2017 and seen steady improvements since. It’s on this foundation that its HTTP/3 support stands.

Recently, Cloudflare launched a special HTTP/3 NGINX patch and encouraged users to start experimenting.

What is Cloudflare’s Patch for Quiche?

Quiche is Cloudflare’s HTTP/3 and QUIC library, written in Rust (a newer, high-level language). The library provides a C API, which is how NGINX uses it.

Benchmarking Setup

The Platform

Both the servers and load tool run on the same VM. This is an Ubuntu-14 machine featuring a 20 core Intel Xeon E7-4870 and 32GB RAM. Netem was used to modify the bandwidth and RTT.

The Servers

We leveraged OpenLiteSpeed, the open-source version of LiteSpeed Web Server (specifically, version 1.6.4).

With regards to NGINX, we utilized 1.16.1 with Cloudflare’s Quiche patch.

OpenLiteSpeed and NGINX were configured to use a single worker process, and NGINX’s maximum requests setting was raised to 10,000. This enabled us to issue 1,000,000 requests over 100 connections (10,000 requests per connection).

# OpenLiteSpeed
httpdWorkers 1

# nginx
worker_processes 1;
http {
    server {
        access_log off;
        http3_max_requests 10000;
    }
}

The Website

This comprised a simple collection of static files, including a 163-byte index file alongside files of varied sizes (1MB, 10MB, 100MB, 1GB).

The Load Tool

Load was generated with h2load built with HTTP/3 support. This is easy to build using the supplied Dockerfile.

NGINX vs LiteSpeed: Benchmark Testing

We ran each test three times and took the median value to get each reported number (requests per second or resource fetch time).
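As a quick illustration of how each reported number was derived (the run values below are made up, not taken from the benchmark):

```python
from statistics import median

# Three runs of one benchmark configuration (requests/sec); values are illustrative.
runs = [6150, 6400, 6550]

print(median(runs))  # 6400
```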

LiteSpeed vs NGINX: small page fetching

This testing involved fetching the 163 byte index page in numerous ways, through multiple network conditions. Key h2load options:
  • -n: Total number of requests to send
  • -c: Number of connections
  • -m: Number of concurrent requests per connection
  • -t: Number of h2load threads
-n 10000 -c 100
                       LiteSpeed       NGINX
100 mbps, 100 ms RTT   925 reqs/sec    880 reqs/sec
100 mbps, 20 ms RTT    3910 reqs/sec   2900 reqs/sec
100 mbps, 10 ms RTT    6400 reqs/sec   4150 reqs/sec
-n 100000 -c 100 -t 10

This is a longer run: every connection now sends 1,000 requests.
                       LiteSpeed       NGINX
100 mbps, 100 ms RTT   995 reqs/sec    975 reqs/sec
100 mbps, 20 ms RTT    4675 reqs/sec   4510 reqs/sec
100 mbps, 10 ms RTT    8410 reqs/sec   7100 reqs/sec
At 100 ms RTT, LiteSpeed is quite a bit faster, and its lead increases substantially at 20 ms and 10 ms RTT.

-n 100000 -c 100 -m 10 -t 10
                       LiteSpeed        NGINX
100 mbps, 100 ms RTT   9100 reqs/sec    7380 reqs/sec
100 mbps, 20 ms RTT    24550 reqs/sec   5790 reqs/sec *
100 mbps, 10 ms RTT    25120 reqs/sec   6730 reqs/sec *
* High variance

It was fascinating to find in the initial test that NGINX used 100 percent CPU while OpenLiteSpeed used only around 45 percent. This is why NGINX’s numbers tend not to improve as the RTT drops. OpenLiteSpeed, by contrast, still doesn’t reach 100 percent CPU even at 20 and 10 ms RTTs.

-n 1000000 -c 100 -m 10 -t 10

We had to raise the NGINX http3_max_requests parameter from its default of 1000 to 10000 to make sure it could issue in excess of 1000 requests per connection.
                       LiteSpeed        NGINX
200 mbps, 10 ms RTT    29140 reqs/sec   7120 reqs/sec *
* High variance

At this point, LiteSpeed was using 100 percent CPU. NGINX allocated in excess of 1GB of memory during this test, while LiteSpeed remained below 28MB. That equates to more than four times the performance for approximately 1/37th of the memory footprint.

Single file fetching

In this situation, we fetched a single file under varying network conditions and measured the time required to download it.

10MB
                      LiteSpeed   NGINX
10 mbps, 100 ms RTT   9.6 sec     10.9 sec
10 mbps, 20 ms RTT    9.6 sec     10.5 sec
10 mbps, 10 ms RTT    9.4 sec     10.8 sec
It’s clear that NGINX is a little slower here. It also utilizes three to four times more CPU than LiteSpeed did in the above tests.

100MB
                       LiteSpeed   NGINX
100 mbps, 100 ms RTT   11.9 sec    39 sec *
100 mbps, 20 ms RTT    9.2 sec     38 sec *
100 mbps, 10 ms RTT    8.9 sec     29 sec
* High variance

Across all three of these NGINX vs LiteSpeed benchmarks, NGINX utilized 100 percent CPU, which is highly likely the cause of its weaker performance.

1GB

We attempted to download a 1GB file with NGINX at 1gbps, but gave up waiting for the download to complete. We believe the contrast in LiteSpeed’s and NGINX’s respective performance would have been pronounced at this speed.

NGINX vs LiteSpeed: Shallow Queue

NGINX has struggled with high bandwidth, so we decided to see how it would fare with low bandwidth instead. We created a shallow queue with netem’s limit parameter, setting it to seven packets.

Single 10MB file fetching
                    limit    LiteSpeed   NGINX
5 mbps, 20 ms RTT   1000 *   19.5 sec    23.1 sec
5 mbps, 20 ms RTT   7        29.5 sec    47.9 sec

* netem default

We found LiteSpeed’s download time increased by around 50 percent when we introduced a shallow queue on the path, while NGINX’s more than doubled. So LiteSpeed was substantially faster than NGINX in both situations.

Conclusion – LiteSpeed Found to be More Impressive than NGINX

In all, across numerous LiteSpeed vs NGINX benchmarking tests, LiteSpeed consistently outperformed NGINX.

LiteSpeed transferred files faster while using less CPU and memory. NGINX never reached TCP-level throughput at low bandwidth, and its throughput was a fraction of LiteSpeed’s at high bandwidth.

NGINX’s HTTP/3 implementation is not yet ready for production use: it delivers weak performance while consuming more memory and CPU.

We’re not surprised by this result, frankly. QUIC and HTTP/3 are complicated protocols, and new implementations will find it difficult to match LiteSpeed’s performance.

NGINX is likely to show improvement in years to come, and we’ll be interested in running further benchmark tests when that time comes. But LiteSpeed HTTP/3 can’t be beaten in the meantime.

NGINX Configuration Guide


NGINX is a web server designed for use cases involving high volumes of traffic. It’s a popular, lightweight, high-performance solution.

One of its many impressive features is that it can serve static content (media files, HTML) efficiently. NGINX utilizes an asynchronous event-driven model, delivering reliable performance under significant loads.

This web server hands dynamic content off to FastCGI, CGI, or alternative servers (including Apache), before it’s sent back to NGINX for delivery to clients.

In this guide on how to configure NGINX, we’ll explore the essentials of NGINX to help you understand how it works and what benefits it offers.

NGINX Configuration: Understanding Directives

Every NGINX configuration file is found in the /etc/nginx/ directory, with the main configuration file located at /etc/nginx/nginx.conf.

NGINX configuration options are known as “directives”: these are arranged into groups, known interchangeably as blocks or contexts.

Lines preceded by a # are comments, and NGINX won’t interpret them. Lines that contain directives should end with a semicolon (;); if not, NGINX will be unable to load the configuration properly and will report an error.
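For instance, a well-formed fragment obeying both rules might look like this (the value is illustrative):

```nginx
# This whole line is a comment; NGINX ignores it.
worker_processes 1;  # a directive, terminated by the required semicolon
```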

Below, we’ve included a shortened copy of the /etc/nginx/nginx.conf file included with installations from NGINX repositories. This file begins with four directives:

  • user

  • worker_processes

  • error_log

  • pid

These exist outside any particular context or block, and are said to be within the main context.

The events and http blocks house additional directives, and the blocks themselves also sit within the main context.

File: /etc/nginx/nginx.conf

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/;

events {
    . . .
}

http {
    . . .
}


What is the http Block?

The http block includes directives for web traffic handling, which are generally known as universal because they get passed on to every website configuration served by NGINX.

File: /etc/nginx/nginx.conf

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}


What are Server Blocks?

The http block shown above features an include directive. This informs NGINX where website configuration files can be found.

  • When installing from NGINX’s official repository, the line will read include /etc/nginx/conf.d/*.conf; just as you can see in the http block above. Every website hosted with NGINX should have its own configuration file in /etc/nginx/conf.d/, typically named after the domain; sites that have been disabled (not served by NGINX) should be renamed so that they no longer match the *.conf pattern.

  • When installing NGINX from the Ubuntu or Debian repositories, the line will read: include /etc/nginx/sites-enabled/*;. The ../sites-enabled/ folder holds symlinks to the site configuration files located in /etc/nginx/sites-available/. You can disable a site in sites-available by removing its symlink from sites-enabled.

  • Depending on the installation source, an illustrative configuration file can be found at /etc/nginx/conf.d/default.conf or /etc/nginx/sites-enabled/default.

No matter the installation source, though, server configuration files feature one or more server blocks for a site. As an example, let’s look at the below:

File: /etc/nginx/conf.d/

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/;
    index index.html;

    try_files $uri /index.html;
}


What are Listening Ports?

The listen directive informs NGINX of the hostname/IP and TCP port, so it recognizes where it must listen for HTTP connections.

The argument default_server means that this virtual host will answer requests on port 80 that don’t match the listen statement of any other virtual host. The second statement listens over IPv6 and behaves similarly.
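A minimal sketch of such a pair of listen statements (the server block is otherwise abbreviated):

```nginx
server {
    # Catch-all for port 80 requests no other virtual host claims,
    # over IPv4 and IPv6 respectively.
    listen 80 default_server;
    listen [::]:80 default_server;
}
```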

What is Name-based Virtual Hosting?

The server_name directive enables a number of domains to be served from just one IP address, and the server will determine which domain it will serve according to the request header received.

Generally, you should create one file for each site or domain you wish to host on your server. Let’s delve into some examples:

  1. Process requests for each domain name listed explicitly in the server_name directive.

  2. The server_name directive can also utilize wildcards: a leading * tells the server to process requests for all subdomains of a domain:

File: /etc/nginx/conf.d/

server_name *;


  3. Process requests for all domain names starting with example.:

File: /etc/nginx/conf.d/

server_name example.*;

With NGINX, you can define server names that are invalid domain names: it utilizes the name from the HTTP header to answer requests regardless of whether the domain name is valid or invalid.

You may find non-domain hostnames helpful if your server is on a LAN or you know all the clients likely to make requests on the server. This encompasses front-end proxy servers with /etc/hosts entries set up for the IP address NGINX is listening on.
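For example, a hypothetical internal hostname (the name is an assumption for illustration; clients would need a matching /etc/hosts entry pointing at the NGINX server):

```nginx
server {
    listen 80;
    # Not a registered domain; resolvable only on the LAN via /etc/hosts.
    server_name internal-dashboard;
}
```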

What are Location Blocks?

NGINX’s location setting helps you set up the way in which NGINX responds to requests for resources inside the server. Just as the server_name directive informs NGINX how it should process requests for the domain, location directives apply to requests for certain folders and files.

Let’s consider a few examples:

File: /etc/nginx/sites-available/

location / { }

location /images/ { }

location /blog/ { }

location /planet/ { }

location /planet/blog/ { }

These locations are literal string matches and match any part of an HTTP request following the host segment:


Returns: Let’s assume there’s a matching server_name entry. In this case, the location / directive determines what occurs with a request for the site root.

With NGINX, requests are always fulfilled with the most specific match possible:

Request: or

Returns: This will be fulfilled by the location /planet/blog/ directive, as it’s more specific, despite location /planet/ being a match too.

File: /etc/nginx/sites-available/

location ~ IndexPage\.php$ { }

location ~ ^/BlogPlanet(/|/index\.php)$ { }

When location directives are followed by a ~ (tilde), NGINX will perform a regular expression match, which is always case-sensitive.

For example, IndexPage.php would be a match with the first of the above examples, while indexpage.php wouldn’t.

In the second example, the regular expression ^/BlogPlanet(/|/index\.php)$ would match requests for /BlogPlanet/ and /BlogPlanet/index.php but not /BlogPlanet, /blogplanet/, or /blogplanet/index.php. NGINX utilizes Perl Compatible Regular Expressions (PCRE).

What if you prefer matches to be case-insensitive? Well, you should use a tilde followed by an asterisk: ~*. Such patterns are often used to process requests ending in a certain file extension: for instance, a case-insensitive extension match can make files ending in .pl, .PL, .cgi, .perl, .Perl, .prl, and .PrL (as well as others) all a match for the request.
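Since NGINX uses PCRE, the difference between ~ and ~* can be illustrated with equivalent patterns in Python's re module (whose syntax agrees with PCRE for simple patterns like these):

```python
import re

# location ~ IndexPage\.php$   -> case-sensitive match
sensitive = re.compile(r"IndexPage\.php$")
# location ~* IndexPage\.php$  -> case-insensitive match
insensitive = re.compile(r"IndexPage\.php$", re.IGNORECASE)

print(bool(sensitive.search("/blog/IndexPage.php")))    # True
print(bool(sensitive.search("/blog/indexpage.php")))    # False
print(bool(insensitive.search("/blog/indexpage.php")))  # True
```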

File: /etc/nginx/sites-available/

location ^~ /images/IndexPage/ { }

location ^~ /blog/BlogPlanet/ { }

When you add a caret and a tilde (^~) to location directives, you’re informing NGINX that, should it match a particular string, it should stop searching for more specific matches and utilize these directives here instead.

Beyond this, these directives function as the literal string matches do in the first group. Even if a more specific match comes along at a later point, the settings will be utilized if a request is a match for one of these directives.

Now, let’s look at additional details on location directive processing.

File: /etc/nginx/sites-available/

location = / { }

Finally, adding an equals symbol to the location setting forces an exact match with the requested path and ends searching for matches that are more specific.

So, for example, the last example will match the root path only, and no longer paths beneath it. Using exact matches can moderately speed up request processing, which can prove beneficial if certain requests are especially popular.

The processing of directives will follow this sequence flow:

  1. Exact string matches will be processed first: NGINX stops searching if a match is located and will fulfill the request.
  2. Any remaining literal string directives will be processed next. NGINX will stop and fulfill a request if it finds a match using the ^~ argument. If not, NGINX will continue processing of location directives.
  3. Each location directive with a regular expression (~ and ~*) will get processed next. If a regular expression is a match for the request, NGINX will end its search and fulfill the request.
  4. Finally, if there are no matching regular expressions, the most specific literal string match will be used.
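The four steps above can be sketched as a simplified model (this is not NGINX's actual implementation; nested and named locations, among other details, are ignored):

```python
import re

def pick_location(uri, exact, prefix_stop, regexes, prefixes):
    """Pick the location for a URI, mimicking NGINX's selection order.
    exact: '=' paths; prefix_stop: '^~' prefixes;
    regexes: (pattern, flags) pairs for ~ / ~*; prefixes: literal prefixes."""
    # 1. An exact (=) match wins immediately.
    if uri in exact:
        return ("=", uri)
    # Find the longest literal prefix match among both prefix kinds.
    best = max((p for p in prefix_stop + prefixes if uri.startswith(p)),
               key=len, default=None)
    # 2. If that prefix was declared with ^~, stop searching here.
    if best in prefix_stop:
        return ("^~", best)
    # 3. Try regex (~ / ~*) locations in order of appearance.
    for pattern, flags in regexes:
        if re.search(pattern, uri, flags):
            return ("~", pattern)
    # 4. Fall back to the most specific literal prefix match.
    return ("prefix", best) if best is not None else None

# The /planet/blog/ example from earlier: the most specific prefix wins.
print(pick_location("/planet/blog/post.html", [], [],
                    [], ["/", "/planet/", "/planet/blog/"]))
# ('prefix', '/planet/blog/')
```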

Be sure that every file and folder found under a domain is a match for one or more location directives.

Nested location blocks are not recommended and not supported.

How to Use Location Root and Index

The location directive is another setting with its own block of arguments.

When NGINX has identified the location directive that is the best match for a specific request, its response will be based on the associated location directive block’s contents. So, for instance:

File: /etc/nginx/sites-available/

location / {
    root html;
    index index.html index.htm;
}


We can see, in this example, that the document root is based in the html/ directory. Under the NGINX default installation prefix, the location’s full path is /etc/nginx/html/.


Returns: NGINX will try to serve the file found at /etc/nginx/html/blog/includes/style.css

Please note:

Absolute paths for the root directive can be used if you wish. The index variable informs NGINX which file it should serve when none is specified in the request.

So, for instance:


Returns: NGINX will try to serve the file found at /etc/nginx/html/index.html.

When a number of files are specified for the index directive, the list is processed in order and NGINX fulfills the request with the first file found to exist. If index.html can’t be located in the relevant directory, index.htm will be used instead. A 404 message will be delivered if neither exists.
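That fallback order can be sketched as follows (a toy model, with a set of file names standing in for the directory contents):

```python
def resolve_index(directory_files, index_files=("index.html", "index.htm")):
    """Return the first configured index file that exists, else signal a 404."""
    for name in index_files:
        if name in directory_files:
            return name
    return "404"

print(resolve_index({"index.htm", "style.css"}))  # index.htm
print(resolve_index({"style.css"}))               # 404
```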

Let’s consider a more complicated example that showcases a number of location directives for a server responding to the example domain:

File: /etc/nginx/sites-available/

location / {
    root /srv/www/;
    index index.html index.htm;
}

location ~ \.pl$ {
    gzip off;
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
    fastcgi_param SCRIPT_FILENAME /srv/www/$fastcgi_script_name;
}


Here, we can see that the second location block handles all requests for resources ending in a .pl extension, specifying a FastCGI handler for them. Otherwise, NGINX uses the first location directive.

Resources are found on the file system at /srv/www/. When no exact file name is defined in the request, NGINX will search for the index.html or index.htm file and serve it. A 404 error message will be returned if no index file is located.

Let’s consider what takes place during a number of example requests:


Returns: /srv/www/ if this exists. Otherwise, it will serve /srv/www/. If neither exists, a 404 error will be provided.


Returns: /srv/www/ if this exists. If that file can’t be found, /srv/www/ will be served instead. If neither exists, NGINX will return a 404 error.


Returns: NGINX will take advantage of the FastCGI handler to execute the file found at /srv/www/ and return the relevant result.


Returns: NGINX will utilize the FastCGI handler to execute the file found at /srv/www/ and return the relevant result.

This ends our guide on how to configure NGINX. We hope it helps you get started and set up in next to no time!