All About PostgreSQL Remote Access Under Plesk

Once you have installed the PostgreSQL database server, you may notice that remote access is disabled. This is the default setting, implemented for the sake of security. But you might prefer to enable remote access to the PostgreSQL database server so you can use it from different locations, such as your house or workplace. So, how do you do it? Read on to find out all the key information on Plesk PostgreSQL remote access.

Plesk: What it is and how it works

Plesk and PostgreSQL go together beautifully. You may have heard of Plesk: it’s one of the U.S.’s and Europe’s biggest paid hosting platforms. Different editions are available, and Plesk is designed to support Windows as well as various distributions of Linux, including CentOS, Debian, Ubuntu, CloudLinux and Red Hat Enterprise Linux.

Plesk requires database servers to store its own databases as well as those used by its various components (such as the webmail service). Database servers also host the databases created by hosting customers’ sites and APS apps (e.g. WordPress).

Plesk supports most of the popular database engines, including MySQL and, of course, PostgreSQL. It ships with the relevant tools for effective database management and can work with database servers located on the same machine or on a remote one.

We’ll take a closer look at connecting Plesk and PostgreSQL below.

PostgreSQL: What it is and how it works

This database system both utilizes and extends the SQL language. To do this, it leverages an object-relational model that stands apart from others. PostgreSQL is capable of handling highly demanding workloads, is designed to keep data stored safely, and affords outstanding scalability. PostgreSQL was created at the University of California, Berkeley, as part of its POSTGRES project in the mid-1980s. In the decades since, PostgreSQL has undergone considerable work and adjustment; the core has expanded consistently through rigorous ongoing development.

The open source PostgreSQL community is incredibly committed, which makes this database system one of the best. It enjoys a reputation for data integrity and extensibility, as well as strong out-of-the-box functionality, and it runs on the majority of the biggest operating systems in the world.

Another key facet of PostgreSQL is that it complies with ACID requirements, and has done so for almost two decades. Many solid add-ons can be used with PostgreSQL, too, such as PostGIS. You can use this extension to store and query geospatial data in your database.

With all this in mind, it’s no surprise that PostgreSQL is regarded as one of the open source community’s biggest relational databases. It’s the primary option for a vast range of companies, individuals and organizations.

Last but not least, PostgreSQL is simple to set up and get running. All you need to do is pick the app you’d prefer to make and rely on PostgreSQL to safeguard your data in a strong database.

Using a Plesk server to configure remote PostgreSQL access

PostgreSQL listens on “localhost” only by default: if you attempt to connect to the server from outside the machine, the connection will be refused.

So, to enable remote access to the PostgreSQL server:

Step 1: Connect to the server through SSH

Step 2: Execute the following command to get the location of the postgresql.conf file (such as /var/lib/pgsql/data/postgresql.conf): psql -U postgres -c 'SHOW config_file'

Step 3: Open the postgresql.conf file and add this line at the end: listen_addresses = '*'

Step 4: Get the location of the pg_hba.conf file:

grep pg_hba.conf /var/lib/pgsql/data/postgresql.conf

/var/lib/pgsql/data/pg_hba.conf

where /var/lib/pgsql/data/postgresql.conf is the file found in the second step; the command’s output (/var/lib/pgsql/data/pg_hba.conf in this example) is the file you need next

Step 5: Put this line at the end of the /var/lib/pgsql/data/pg_hba.conf file: host samerole all 203.0.113.2/32 md5

Some important points:

Connection is allowed only from this remote IP: 203.0.113.2/32. If you’re aiming to allow connections from any IP, specify 0.0.0.0/0 instead.

The authentication method is md5. With it, clients prove their identity using an MD5-hashed password: the stored hash is computed from the password and user name, and it is hashed again with a per-connection salt on the wire.

The samerole keyword means a user can only connect to a database if they are a member of a role with the same name as that database. For example, the user “john.doe” can access database example1 only if they belong to a role named example1.

For different methods of authentication, check the PostgreSQL documentation.
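As a rough illustration of the md5 scheme described above, PostgreSQL stores a hash of the password concatenated with the user name, prefixed with “md5”. The sketch below reproduces that stored format with purely hypothetical credentials:

```shell
# Hypothetical credentials for illustration only.
user="john.doe"
password="s3cret"

# PostgreSQL stores: "md5" + md5(password || username)
hash="md5$(printf '%s%s' "$password" "$user" | md5sum | cut -d' ' -f1)"
echo "$hash"
```

The wire protocol then hashes this stored value a second time together with a per-connection salt, which is why the method is often described as double hashing.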

To put the changes into effect, restart the PostgreSQL server through Plesk > Tools & Settings > Services.
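Putting Steps 3 and 5 together, the two appended lines look like this. The sketch works on copies in a scratch directory so nothing touches live files; on a real server, use the paths reported in Steps 2 and 4 instead:

```shell
# Scratch copies for illustration; on a real server use the paths
# reported in Steps 2 and 4 (e.g. /var/lib/pgsql/data/...).
PGDATA=$(mktemp -d)
touch "$PGDATA/postgresql.conf" "$PGDATA/pg_hba.conf"

# Step 3: listen on all interfaces instead of localhost only.
echo "listen_addresses = '*'" >> "$PGDATA/postgresql.conf"

# Step 5: allow md5-authenticated connections from one remote IP.
echo "host samerole all 203.0.113.2/32 md5" >> "$PGDATA/pg_hba.conf"

# Then restart PostgreSQL (via Plesk > Tools & Settings > Services).
cat "$PGDATA/pg_hba.conf"
```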

Virtual Infrastructure Management Guide – What it is and How to Use it

Long gone are the days when companies depended on massive physical infrastructure hardware like memory, network cards, chips, and processing and storage resources. Virtual infrastructure helps companies of all sizes leverage these resources at lower cost and with many other significant advantages.

This article looks into virtualization and how managing it correctly helps companies scale up significantly.

Virtualization

In a physical infrastructure, we dedicate every server to a specific purpose, but each server may not be used to its full capacity. With virtualization, we can add more functionality to a single server and use it more efficiently, which reduces the maintenance and electricity costs of additional servers. With virtualization, we can run multiple virtual machines on a single piece of hardware.

Now, how can we run multiple virtual machines on single hardware? The answer is the hypervisor: software that runs virtual machines either directly on top of the hardware or as a hosted application. Let’s find out more!

Hypervisor

When it comes to hypervisors, there are two different types we have to look at:

  1. Bare-metal Hypervisor. It runs directly on the hardware. These hypervisors act as their own operating system, and they are known for their stability and performance.
  2. Hosted Hypervisor. It runs inside an operating system, so it has an extra layer of software beneath it compared to bare-metal hypervisors. Hosted hypervisors perform well in small, restricted environments.

Hypervisors sit on top of a computer’s operating system or are installed directly onto the server. They allocate physical resources to the virtual machines as needed so that the machines can work efficiently. Whenever a user or program requests additional resources, the hypervisor passes the request on to the physical hardware, and the changes are saved locally.

A virtual machine can be treated as a data file: it can be moved from one computer to another and will work the same on both.

Types of virtualization:

  • Data Virtualization. It provides users and applications with data from various sources, regardless of the data’s format or origin.
  • Desktop Virtualization. It’s sometimes called Virtual Desktop Infrastructure (VDI). With VDI, users can access all their files and applications on any computer.
  • Server Virtualization. With server virtualization, physical hardware servers are divided into virtual servers, each of which can run its own operating system (OS).
  • Operating System Virtualization. We can run multiple OSs, such as Windows and Linux, side by side on virtual machines.
  • Network Virtualization. With network virtualization, we can combine multiple hardware networks into a single software-based network, and the reverse is also possible.

Physical vs. Virtual Infrastructure

Using virtual infrastructure, we can have multiple virtual machines running on a single physical device. Instead of allocating a single task to a machine, we can allocate several.

The same goes for operating systems: we can install different types of OS in virtual machines. For example, we can install a hosted hypervisor and run a Linux OS in a virtual machine.

In an enterprise setting, a virtual infrastructure still means you have numerous servers and physical resources in a server room storing your business data; the difference is that those resources are pooled and shared.

Benefits of Virtual Infrastructures

  • Efficiency. Virtual infrastructures make the most efficient use of the physical hardware, because each virtual machine draws on resources only when it needs them. Some machines are active at times while others are not, and this efficiency directly results in less wastage.
  • Development and Testing. With the ease of installing operating systems and applications on virtual machines, we can easily leverage them to improve both development and testing.
  • Scalable. Virtual servers adapt according to the company’s needs. They are built on a pay-for-what-you-use model.
  • Flexible. It allows for multiple server and networking configurations, unlike a hardwired physical infrastructure, which requires more capital and effort to change. Virtual machines are easily portable, so you can move them between servers without problems.
  • Secure. Virtual infrastructure provides us with double security. All the traffic has to go through the physical infrastructure first, and then there is additional security between the virtual machines. The security barrier that comes with the separation of virtual machines helps keep the system free of bugs and viruses.
  • Load balancing. The software-based server balances the load given to the devices. The load gets distributed evenly so that no machine carries more load than the others.
  • Backup and recovery. If there is a physical infrastructure failure, we have to wait until the system is revived and running again. Virtual machine backups assure us of quick and efficient recovery.

More About Virtual Infrastructure Management

As we have seen, virtual infrastructure has many benefits compared to physical infrastructure, which is why many companies are migrating to it. Now, let’s see how we can manage this infrastructure to get the maximum out of it.

Planning and Design

Many companies make the mistake of not planning well enough before migrating to virtualization. Every area of the company will be affected one way or another by the migration, yet often only the administrators and architects are held responsible for planning and design. This approach can lead to further roadblocks in the functioning of the company.

Critical members from every team must be involved in the design process, so everyone can see how the migration affects their team and contribute insights. This will help things run smoothly even after the migration to the virtual infrastructure.

Efficient planning of the infrastructure’s implementation is equally important. It may seem tempting to implement everything at once, since virtual infrastructure provides us with new OSs, virtual machines, etc., but that idea doesn’t work out. A step-by-step plan helps us implement the infrastructure efficiently and correctly.

Performance and Capacity

We no longer need to check the performance of the infrastructure manually. Automated tools can help us with performance management, application management, and predictive recommendations, so we can make decisions based on the insights these tools provide.

There are also many server management tools to monitor, track, model, and predict the CPU, memory, network, and storage needs of your virtual environments. We can decide on hardware resources based on this analysis.

Storage

Virtual infrastructure greatly reduces the cost of storage, but the opposite is also true if it is not managed correctly. Virtual infrastructure uses shared networked storage, and migrated data centers tend to accumulate on it: thousands of copies of data sitting idle in the repository.

Capacity needs should be planned correctly to avoid wasting storage resources, and deduplicating data helps greatly in reducing storage wastage.

Infrastructure Management 

Virtual sprawl is another major problem in virtual infrastructure management. Because servers can now be created so quickly, more and more virtual machines get spun up, and some are left idle and unattended. Each one still burdens the infrastructure in the form of permissions, backups, upgrades, patches, and monitoring. If not monitored correctly, this will lead to a stall.

The solution to this problem is a request-and-approval process. The management lifecycle should be monitored at all times: idle virtual machines must be decommissioned immediately, and storage must be allocated efficiently to the correct virtual machines so that the load stays balanced.

Backup and Disaster Recovery

With virtual machines, backups do not run on dedicated devices for each workload. Instead, the backup load lands on the shared physical resource, which backs up the data of all the virtual machines running on its hardware. The more machines there are on a single host, the greater the load on that hardware, which may lead to malfunctions or even application failures. This can be taken care of by carefully keeping up with the management lifecycle: observe the load carefully so that the backup doesn’t get choked. Otherwise, we could lose a great deal of data in a matter of minutes.

So, as we’ve seen in this article, virtualization has a lot of advantages that can be leveraged if managed correctly. We’ve also talked step by step about what virtualization is, its types, virtual vs. physical infrastructure, the benefits of virtual infrastructure, and how to manage it for maximum benefit.

Fancy giving virtual infrastructure management a try? You can find more information about our virtualization solutions here (Plesk VPS) and here (SolusIO). Drop us a line in the comment section below if you’d like to share your experience with virtualization with us. Until next time!

Announcing Plesk Onyx Support Policy Update

Calling all Plesk Onyx users: it’s time to say goodbye to your current software version. The dynamic hosting industry evolves very quickly, and our goal as a leading WebOps platform is to provide our customers with the best solutions. Plesk Obsidian entered the game so you can access the most complete tool on the market, with optimal usability, increased productivity, tougher default security, and many more key improvements.

With the launch of Obsidian last year, Plesk ended the era of upgrades and introduced the era of short releases. Switching to regular updates is imperative to always deliver a secure and stable version of Plesk. That is, with new features and improvements that partners and customers expect to get from an intelligent software solution. Find out more about Plesk Obsidian 18.0 mass update and new partner controls in this article. Also, keep in mind that only Obsidian gives you access to the full extensions catalog.

In order to fully accompany you and make the digital transformation easier, we provide you with the best support, which requires an update to the latest software version. In this regard, Plesk has an end-of-life support policy, which is essential in order to deliver innovative and cost-effective solutions.

Plesk Version Lifecycle

The table below describes when specific versions of Plesk will enter the extended support phase and when patches for critical issues will no longer be available. If the Plesk version in use is EoL (End of Life), Plesk strongly recommends upgrading to a supported Plesk version.

Product | Released | Extended Support* | End of Life**
Plesk Obsidian | June 4, 2019 *** | not applicable | not applicable
Plesk Onyx | October 11, 2016 | October 11, 2020 | April 20, 2021

* Plesk Onyx has a 4-year support period, after which the product is no longer available for new purchases and only receives patches for critical issues. The Extended Support period is six months, starting from October 2020.

**End of Life: Once the Extended Support period is over, the product will stop receiving further development (including critical patches), and technical support requests will no longer be accepted.

*** Starting from this date, Plesk began accepting technical support requests for Plesk Obsidian (General Availability version launched on October 22nd, 2019).

Benefits of Auto-updates

It’s worth noting that Plesk is committed to supporting only Plesk Obsidian (18.x), and no versions older than the two most recently released.

Here’s a good example. If a user installs Plesk 18.0.28 as a fresh instance but decides a few weeks later to install a new instance, its version will be 18.0.29 and not 18.0.28 anymore. Nonetheless, version 18.0.28 will still be supported until the following two newer versions are released; in this case, until version 18.0.30 is released.

This is why it’s very important to check the current version of your Plesk before asking for support. It’s possible that you’re no longer using a supported version – that is, the current version or the one before – and all you need to do is update your Plesk to get full performance. You can turn the update option on to automatically update versions and simplify your admin tasks – at the end of the day, this is what Plesk’s here for 🙂

Essential benefits of auto-updates are the following:

  1. No need to upgrade or migrate to a new major version each year.
  2. Immediate access to new features and improvements to existing ones.
  3. Constant patching of potential security vulnerabilities.
  4. Boosted speed and performance.
  5. Protection of users’ data.

You can find more information about the Plesk end-of-life support policy on our lifecycle policy and change log pages.

Got any questions about short releases and the Plesk Obsidian auto-updates? Drop us a line in the comment section below!

Next Level Ops Podcast: Modern Web Development Tools with Brian Richards

Hello Pleskians! This week we’re back with the tenth and final episode of the Official Plesk Podcast: Next Level Ops. We’re already at the close of the season and we’d like to thank every single one of our guests and listeners, as well as our host for being a part of Next Level Ops! In this installment, Superhost Joe chats with Brian Richards, Creator of WPSessions, about essential web development tools for modern web developers.

In This Episode: jQuery Turns 14, Brian’s Toolkit for Web Development, and Leveling Up

What coding tools are there for the everyday web developer? With the great number of web development tools out there, how do you decide which ones to have in your toolbox? How can you level up your skills and find new tools to use? All of this and more in this episode of Next Level Ops.

“Knowing which tools to look for is the entire battle. So, where do you find the tools that help make your job easier? How do you know that they actually work as advertised? Why should you trust them? When can you trust them?”

Brian Richards, Creator of WPsessions

Use Code Linting

First of all, you can start with some concepts to get familiar with. For example, code linting helps you find errors while you’re writing your code: it shows you where you’ve inserted a character that breaks your code, depending on the language you’re coding in.

Configure Your Code Editor

Second, Brian recommends that you find a code editor that you love. Moreover, you can configure the code editor of your choice to be more productive for you by changing shortcuts and adding code completion and formatting. A few changes like this will customize your code editor to be the best choice for you. Keep in mind that instead of looking for the next shiny product, you should use the tools that work for you and stick to them. Keep reading for recommended code editors and local development tools below.

Follow Coding Standards

Additionally, for coding it’s important to adopt some kind of coding standard and make sure that you follow it. Following standards should help you avoid running into bugs. Also learn about local development environments, which help you build projects for the web while offline. There are many tools specialized for the platform and languages you want to work with.

Love the Command Line

And last but not least, become familiar with and begin to love the command line. Read on for the key takeaways of recommended tools and strategies from Brian to orient your web development. This list is a must-have for web developers, so you’d better bookmark this page!

Key Takeaways

A List of Great Tools

  • Free and open-source code editor: VSCode
  • Code sniffers that can check your code for compliance with coding standards.
  • GitHub needs little introduction. Use it for testing, deploying and peer review.
  • Laravel Valet is a fast local development environment for Mac with minimal resource requirements.
  • Use Local by Flywheel for local WordPress development.
  • Lando is a local development dependency management and automation tool.
  • Know and love the command line.
  • Wait at least two years before adopting a new library. And if you’re picking up a code library, don’t forget to follow the coding standards set by the library.

Choose Your Learning Battles

…Alright Pleskians, it’s time to hit the play button if you want to hear all the details. If you’re interested in hearing more from Next Level Ops, check out the rest of our podcasts. This was the last installment this season, so keep checking in to find out our future plans!

The Official Plesk Podcast: Next Level Ops Featuring

Joe Casabona

Joe is a college-accredited course developer. He is the founder of Creator Courses.

Brian Richards

Brian is the Creator of WPsessions and an independent web developer.

Did you know we’re also on Spotify and Apple Podcasts? In fact, you can find us pretty much anywhere you get your daily dose of podcasts. As always, remember to update your daily podcast playlist with Next Level Ops.  Until next time, stay safe.

Plesk Requirements – Hardware & Software


Plesk Obsidian is the new generation of the very popular Plesk control panel for website hosts. Plesk Obsidian has numerous advanced features and includes support for the latest tech, including Git, AutoSSL and Docker.

Plesk Hardware Requirements

Like any other complex software solution, Plesk Obsidian is dependent on hardware resources.

Plesk Minimum Requirements

  • The minimum amount of RAM required for installing and running Plesk on Linux is 512 MB plus 1 GB of swap; on Windows, 2 GB of RAM.
  • The minimum amount of free disk space required for installing and running Plesk is 10 GB on Linux and 30 GB on Windows.

Plesk Recommended Requirements

For an ordinary shared hosting environment we recommend that you have at least 1 GB of RAM per 40-50 websites. So, 200 websites would imply a Plesk hardware requirement of a minimum of 4 GB. We base this recommendation on the following assumptions:

  • On average, about 10% of the websites on a shared hosting server are active; in other words, 10% of websites have a persistent level of traffic week in, week out
  • 128MB of RAM will handle most websites. For example:
    • 64MB for WordPress
    • 64MB for Joomla
    • 128MB for Drupal
    • 128MB for Symfony
  • A maximum of 1 to 3 simultaneous visitors for each website, with no more than 500 unique visitors per website on any given day
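The rule of thumb above can be expressed as a quick calculation (the 200-site figure is the example used in this article):

```shell
# Minimum RAM per the "1 GB per 40-50 websites" guideline above.
sites=200
ram_gb=$(( (sites + 49) / 50 ))   # round up at the conservative 50-site mark
echo "${ram_gb} GB"   # 200 sites -> 4 GB minimum
```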

Websites with higher traffic (for example, 5 to 10 concurrent visitors and a total of 1,000 to 30,000 visitors per day) will require more RAM, in the range of 500 MB to 1 GB for each website. Also note that Plesk hardware requirements mean you need enough disk space for memory swapping.

Amount of RAM on the server | Recommended free disk space for swapping
Less than 1 GB | 1 GB
1 GB or more | 1/2 × the amount of RAM
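The swap guideline in the table can be sketched as a small calculation (the 4 GB RAM figure is a hypothetical example):

```shell
# Recommended swap per the table: 1 GB if RAM < 1 GB, else half the RAM.
ram_gb=4
if [ "$ram_gb" -lt 1 ]; then swap_gb=1; else swap_gb=$((ram_gb / 2)); fi
echo "Swap: ${swap_gb} GB"   # 4 GB RAM -> 2 GB swap
```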

As for disk space, we recommend having this much disk space for hosting:

Type of hosting | Recommended free disk space for websites
Typical shared hosting (100-500 websites per server) | Between 2 and 2.5 GB per website
Dedicated VPS hosting (1-10 websites per server) | Between 4 and 12 GB per website

Plesk Software Requirements

Plesk Obsidian is dependent on the operating system and its software environment.

Supported Operating Systems

Linux

Plesk Obsidian for Linux can run on the following operating systems:

Operating system | SNI support | IPv6 support
Debian 9 64-bit ** | Yes | Yes
Debian 10 64-bit ** | Yes | Yes
Ubuntu 16.04 64-bit ** | Yes | Yes
Ubuntu 18.04 64-bit ** | Yes | Yes
Ubuntu 20.04 64-bit ** | Yes | Yes
CentOS 7.x 64-bit | Yes | Yes
CentOS 8.x 64-bit *** | Yes | Yes
Red Hat Enterprise Linux 6.x 64-bit * | Yes | Yes
Red Hat Enterprise Linux 7.x 64-bit * | Yes | Yes
Red Hat Enterprise Linux 8.x 64-bit * | Yes | Yes
CloudLinux 7.1 and later 64-bit | Yes | Yes
Virtuozzo Linux 7 64-bit | Yes | Yes

* – You need to enable the “Optional” channel to install Plesk Obsidian on Red Hat Enterprise Linux.

** – Plesk only supports Debian and Ubuntu servers running the “systemd” init system. Compatibility with “sysvinit” has not been tested and is not guaranteed.

Notes:

  1. Before you start a Plesk installation, ensure that the package manager repositories (apt/yum/zypper) are configured and can be accessed from the server.
  2. Plesk currently supports CloudLinux, CentOS, and Red Hat Enterprise Linux in all available minor versions for these three OSs.

Windows

Plesk Obsidian for Microsoft Windows can run on the following operating systems:

Operating system | SNI support | IPv6 support
Windows Server 2012 (64-bit, Standard, Foundation, Datacenter editions), including Server Core installations | Yes | Yes
Windows Server 2012 R2 (64-bit, Standard, Foundation, Datacenter editions), including Server Core installations | Yes | Yes
Windows Server 2016 (64-bit, Standard, Foundation, Datacenter editions), including Server Core installations | Yes | Yes
Windows Server 2019 (64-bit, Standard, Foundation, Datacenter editions), including Server Core installations | Yes | Yes

Plesk no longer supports Windows Server 2003; we recommend that you pick a more recent version of Windows Server according to the life cycle policy.

Note that the Plesk life cycle policy states that support for Windows Server 2008 ceased on January 13, 2017, and some Plesk features are not supported on Windows Server 2008. Plesk currently recommends that you use Windows Server 2012 R2 or later for running Plesk on Windows.

You must configure a static IP address on the OS before you install Plesk for Windows.

Plesk for Windows only supports NTFS; this is an essential element of the Plesk software requirements for Windows.

Support for ASP (active server pages) and FrontPage Server Extensions requires manual installation – you must install these components yourself.

To install on Windows Server 2008 you must first acquire and install Windows Installer 4.5, available from Microsoft.

Using Microsoft SQL Server in Plesk for Windows requires that Microsoft SQL Server be configured to use either standard security mode or mixed security mode. If Microsoft SQL Server is not already on your machine, you can install it while you install Plesk for Windows. It will be configured with the username “sa” and a randomly chosen password.

Plesk Installation Requirements

When installing Plesk, pay attention to the following installation requirements:

CloudLinux Support

Note that link traversal protection on CloudLinux can cause many different Plesk issues. To avoid them when link traversal protection is enabled, first disable the fs.protected_symlinks_create kernel option.
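A sketch of that change, assuming the standard sysctl configuration mechanism (the setting takes effect after running sysctl -p or rebooting):

```
# /etc/sysctl.conf (CloudLinux): disable link traversal protection
fs.protected_symlinks_create = 0
```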

Active Directory Domain Controllers Support

We recommend that Plesk not be installed on a server that also acts as either a backup or a primary domain controller. If you do, you may find that the server crashes when domains with some names are created.

AppArmor Support

Plesk Obsidian supports AppArmor on Ubuntu 14.04 and Ubuntu 16.04 only. Before installing Plesk Obsidian on Ubuntu 12.04 or any supported Debian version, disable AppArmor first.

Supported browsers

The following browsers are supported:

Desktop

  • Mozilla Firefox (latest) for Windows and Mac OS
  • Microsoft Internet Explorer® 11.x for Windows
  • Microsoft Edge® for Windows 10
  • Apple Safari (latest) for Mac OS
  • Google Chrome (latest) for Windows and Mac OS
  • Opera (latest) for Windows and Mac OS

Smartphones and Tablets

  • Chrome Mobile
  • Default browser (Safari) on iOS 8
  • Default browser on Android 4.x
  • Default browser (IE) on Windows Phone 8

Supported virtualization

The following virtualization platforms are supported:

  • VMware
  • XEN
  • Virtuozzo 7
  • OpenVZ
  • KVM
  • Hyper-V
  • LXC

Notes:

  1. Support includes ensuring that Plesk functions properly; it also includes discounted licenses for virtual servers.
  2. Note that your license key may restrict you to running Plesk on a specific platform only. Your license key may be considered invalid in a different environment, although some functions, such as Plesk repair, installation and others, will keep working.

Earlier Versions Supported for Upgrade

Plesk Obsidian supports upgrade from the following earlier versions:

  • Plesk Onyx 17.8 for Linux/Windows (x64 only)
  • Plesk Onyx 17.5 for Linux/Windows (x64 only)
  • Plesk Onyx 17.0 for Linux/Windows (x64 only)

Source Hosting Platforms Supported for Migration

Configuration and content from the following hosting platforms can be imported into Plesk Obsidian:

  • Plesk for Linux and Plesk for Windows: 8.6, 9.5, 10.4, 11.0, 11.5, 12.0, 12.5, and Plesk Onyx.
  • cPanel 11.5
  • Confixx 3.3
  • Helm 3.2
  • Plesk Expand 2.3.2
  • Parallels Pro Control Panel for Linux 10.3.6

Plesk Supported Components

Supplied Components

Linux

Plesk Obsidian for Linux distribution packages include the following components:

  • Plesk Premium Antivirus 6.0.2
  • Kaspersky Anti-Virus 8.5.1.102
  • ImunifyAV
  • AWStats 7.7
  • ProFTPD 1.3.6c
  • qmail 1.03
  • Courier-IMAP 5.0.8
  • Postfix 3.4.8 (for most OSes), 2.11.11 (CentOS 6, Red Hat Enterprise Linux 6, and CloudLinux 6)
  • Dovecot 2.3.10.1
  • Horde IMP 5 (requires PHP 5.3)
    • Horde 5.2.23
    • IMP 6.2.24.1
    • Ingo 3.2.16
    • Kronolith 4.2.29
    • Nag 4.2.19
    • Mnemo 4.2.14
    • Passwd 5.0.7
    • Turba 4.2.25
    • Pear 1.10.9
  • Roundcube 1.4.7
  • phpMyAdmin 5.0.2
  • nginx 1.18.0
  • OpenSSL 1.0.2r
  • OpenSSL used by nginx 1.1.1g
  • TLS 1.3 (in nginx for customers’ websites)
  • PHP 5.2.17, 5.3.29, 5.4.45, 5.5.38, 5.6.40, 7.0.33, 7.1.33, 7.2.33, 7.3.21, 7.4.9.
    Note: making changes to the /usr/local/psa/admin/conf/php.ini file may result in Plesk failing to operate properly.
  • Fail2ban 0.10.3.1
  • ModSecurity 2.9.3
  • ModSecurity Rule Set 2.2.9-30-g520a94b
  • Resource Controller (for CentOS 7, Debian 8, and Ubuntu 16 servers)
  • Node.js 4.6.1, 6.14.1, 7.0.0, 8.16.0, 9.0.0, 10.0.0, 12.0.0.
    Note: on CentOS 6, Debian 7.x, and Ubuntu 12.x, Node.js 12 is not supported.
  • Phusion Passenger 6.0.2
  • Ruby 2.1.10, 2.2.10, 2.3.8, 2.4.6, 2.5.5, 2.6.3.
    Note: on Debian 9, only Ruby 2.4.6 and later is supported.
  • Bundler 1.13.5
  • Rootkit Hunter 1.4.4

Windows

Plesk Obsidian for Microsoft Windows distribution packages include the following components:

  • Plesk Premium Antivirus 6.0.2
  • Kaspersky Anti-Virus 8.6.1.51
  • Microsoft SQL Server Express 2012 SP3
  • Microsoft SQL Server Express 2014 SP2
  • Microsoft SQL Server Express 2016 SP1
  • Microsoft SQL 2017 Express
  • MariaDB 10.3.23 (for Plesk database)
  • MariaDB 10.3.23 (for customer websites)
  • BIND DNS Server 9.16.4
  • MailEnable Standard 10.27
  • PHP 5.2.17, 5.3.29, 5.4.45, 5.5.38, 5.6.40, 7.0.33, 7.1.33, 7.2.33, 7.3.21, 7.4.9
  • ASP.NET Core 2.1.20, 3.1.6
  • .NET Core 3.1.3, 2.1.17
  • Webalizer V2.01-10-RB02 (Windows NT 5.2) English
  • Horde 5.2.23 and IMP 6.2.23
  • Microsoft Web Deploy 3.5 + WebMatrix 3.0
  • Microsoft Web Deploy 3.6
  • IIS URL Rewrite Module 7.2.1993
  • Node.js 4.6.1, 6.14.1, 8.16.1, 9.0.0, 10.21.0, 12.18.0
  • 7zip 18.05
  • Microsoft Visual C++ 2017
  • ionCube Loader 5.0.21
  • SpamAssassin 3.0-3.4.4
  • myLittleAdmin 3.8.20
  • phpMyAdmin 5.0.2
  • AWStats 7.7

Supported Third-Party Components

Linux

Web servers:

  • Apache 2.2, 2.4

Mail servers:

  • Postfix 2.11.3

DNS servers:

  • BIND 9.8–9.11

Web statistics:

  • Webalizer 2.x–3.x

Web scripting:

  • mod_perl 2.0.8
  • mod_python 3.3.1 *
  • PHP 5.2–7.3.6 **

Database servers and tools:

  • MySQL 5.1–5.7
  • MySQL community edition 5.5, 5.6, 5.7
  • PostgreSQL 8.4–10
  • MariaDB 5.5-10.3.17
  • MariaDB Connector 3.0.9

Anti-spam tools:

  • SpamAssassin 3.0–3.4

* – mod_python is not supported on Red Hat Enterprise Linux 7.x, CentOS 7.x, and CloudLinux 7.x.

** – the PHP used by Plesk for its webmail functionality (Roundcube, Horde) will be sourced from the repository supplied by your operating system vendor. Optionally, install PHP from a different repository by following the instructions of the repository vendor. Note that the package name must stay the same. If the package name is different (e.g. Webtatic or IUS) Plesk webmail will not work correctly. You will also risk issues with dependencies under future updates.

Windows

This list of third-party components is abridged: it does not include components that ship with the Plesk distribution, since their inclusion in the distribution already makes them supported.

Web servers

  • Microsoft Internet Information Services (IIS) 7.5, 8.0, 8.5, 10.0

Mail servers

  • MailEnable Standard / Professional / Enterprise / Enterprise Premium 6.91–10.27
  • SmarterMail 100, 16.3
  • IceWarp Mail Server 12.0.3.1

Webmail tools

  • MailEnable Web Client
  • SmarterMail Web Client
  • IceWarp (Merak) Mail Server Web Client

Spam filtering tools

  • SmarterMail Spamfilter
  • IceWarp (Merak) Mail Server Anti-Spam

Antivirus tools

  • SmarterMail Anti-Virus
  • IceWarp (Merak) Mail Server Anti-Virus

DNS servers

  • Microsoft DNS Server
  • Simple DNS Plus 6.0.115

Web statistics

  • SmarterStats 11.1

Web scripting

  • ASP
  • ASP.NET 2.0-4.x
  • ASP.NET Core 2.1, 2.2.2
  • Python 2.7.17

Database servers

  • Microsoft SQL Server 2005–2016
  • MySQL community edition 5.5, 5.6, 5.7
  • MySQL ODBC connector 5.3.14

Plesk on Cloud Platforms

Plesk Obsidian is available on, and compatible with, the following cloud platforms:
Platform AWS Azure Google Alibaba Lightsail DigitalOcean Linode
CentOS 7 (WebHost) 18.0 18.0 18.0 17.8.11
CentOS 7 (BYOL*) 18.0 18.0 18.0 17.8.11 18.0
Ubuntu 16.04 (WebHost) 17.8.11
Ubuntu 16.04 (BYOL*) 17.8.11 18.0
Ubuntu 18.04 (WebHost) 18.0 18.0 18.0
Ubuntu 18.04 (BYOL*) 18.0 18.0 18.0 18.0
Windows 2012 R2 (WebHost) 17.8.11
Windows 2012 R2 (BYOL*) 17.8.11
Windows 2019 (WebHost) 18.0 18.0 18.0
Windows 2019 (BYOL*) 18.0 18.0 18.0
Plesk WordPress Edition – CentOS 7 18.0 18.0 18.0
Plesk Business & Collaboration Edition – CentOS 7 18.0 18.0 18.0
Web Admin SE – CentOS 7 18.0
Web Admin SE – Ubuntu** 18.0 17.8.11 18.0 18.0
  • * – “BYOL” stands for “Bring Your Own License”. Once Plesk is deployed, you can either run it on a 14-day trial license or buy your own Plesk license
  • ** – Ubuntu 16.04 for Alibaba and Lightsail; Ubuntu 18.04 for DigitalOcean

Tech Skills for a Changing World: The 5 Most Popular Plesk University Courses

The world is ever-changing, especially when it comes to technology. Gaining tech skills strengthens your resume and teaches you expertise in your chosen area of specialization. Online courses can help orient you by providing knowledge and expertise in specific technology fields. These courses also bring you close to real-world scenarios you can encounter so you get a practical understanding of how to troubleshoot problems. There are many reasons why training in technology skills works for your benefit. Let’s take a look at these reasons.

Why Should You Specialize in Tech Skills?

Specializing in tech skills proves that you are striving for both personal and professional development. It can help you irrespective of your current career level. If you’re just starting out, then you can undergo training to show your passion for learning and your interest in a particular tech field. If you’re more experienced, then advanced training will help you move forward in a changing world.

Tech training programs can help broaden your network as you will be in contact with both young aspirants as well as experienced professionals. Exposure to a broad network will open new opportunities for you.

Organizations always look for individuals who bring value to their company. Successful companies appreciate a continuous training mindset. Enrolling in tech training, taking tests or earning any certificates shows companies that you’re highly trained in a particular area. Your track record of learning new skills will help you establish professional credibility.

Plesk University offers many courses that can give you the ins and outs of different Plesk products. With some courses, you can also earn certifications. Check out our leaderboard of companies that have Plesk certified professionals. Did you know that access to all courses and exams in Plesk University is free? So, set yourself apart from your peers, choose your area of expertise, and join one of our most popular Plesk University courses. If you’re uncertain of what to choose, we’ve prepared a guide into 5 of our top courses and how they can benefit you.

The Top 5 Plesk University Courses

1. The Plesk Professional Course

In the Plesk Professional interactive course, you learn how to install Plesk Obsidian and use it to provide hosting services to your customers. You may well ask what value this will add to your life, business, or resume. But don’t be dismissive – there are many reasons why knowing how to work with server management platforms is important, as a whopping 1,767 course completions attest.

First of all, technology has completely changed the way businesses work today. However, if you have a small business or are a solopreneur, you can’t always afford to have a separate IT department to manage all your digital processes. Who upgrades and monitors the servers and who troubleshoots problems as they arise? To ease these processes, you can either go to server management professionals who help in setting up and monitoring these crucial processes or you can learn the skills on your own. 

Let’s look at how server management is useful in more detail. You can use server management platforms to cut down costs: instead of hiring people separately to manage servers, monitoring and management can be done by the platform. Hiring is also a challenge for small businesses, as finding the right staff always takes time. Depending on the pricing package, using a server management platform can mean comparatively lower staffing costs. The best server management platforms also often provide you with the best support – on-call support with fast response times is usually included in the packages offered.

Additionally, as your company grows larger or if you’re planning to scale, the need for more servers grows. More staff is required to maintain and monitor server activities. Server management platforms are specifically dedicated to maintaining a greater number of servers. Knowing how to work with server management platforms saves you time, cost, and increases efficiency. An optimized server management platform automates regular administrative tasks, which can be time-consuming. This helps free your time up to perform other essential tasks.

2. The Plesk Associate Course

In the Plesk Associate course, you learn how to bundle infrastructure, Plesk, extensions, and services to create a managed WordPress hosting solution. Let’s take a look at how Managed WordPress solutions are a crucial add-on to the portfolio of services you can offer your customers. Or maybe you want to host your own WordPress. Either way, this is the course that can teach you the skills to do so and 1,461 course graduates agree.

Why is it useful to have Managed WordPress solutions? You need Managed WordPress to use WordPress efficiently, as these solutions have the resources and technology required to maintain and update WordPress websites. WordPress powers around 35% of all websites in the world, so companies big and small want a management team that specializes in that Content Management System (CMS). Managed WordPress offers you many benefits, the most important of which are security, performance, and expertise on the platform.

Ever hosted your own WordPress and woken up one day to find all those warnings and notifications for updates? Expertise is a crucial attribute when using WordPress. You may face sophisticated security threats and downtime, which will ultimately affect your online presence and website performance. Managed WordPress acts as your hosting expert and takes care of these problems in the shortest possible time so that any losses are minimized. Wouldn’t it be great if you could gain this expertise?

The above points bring us to the next top Plesk University course:

3. The WordPress Toolkit Course

In the Plesk WordPress Toolkit course, you learn how easy it is to deploy, secure, and maintain a WordPress website with the WordPress Toolkit extension for Plesk. 

When talking about WordPress hosting, one of the first critical issues to address is security. A good Managed WordPress solution will provide you with the best security. Regular security helps in removing any malware present on your website. Hackers, bots, and other threats are tackled with the help of a specialized environment. The WordPress Toolkit is configured to do all of this for you with very little action needed from you. This is also a benefit when you want to offer Managed WordPress to your customers.

Additionally, if your business is growing or your customers are rapidly scaling, website traffic is going to fluctuate rapidly as well. Most likely you will require resources to keep the website running smoothly and avoid downtime if you’re scaling. Managed WordPress solutions take care of the smooth running of websites in such conditions. Crashing and downtime of the site can result in the loss of money and reputation of your brand and you want to avoid that at all costs. 

Last but not least is the issue of regular WordPress updates. WordPress releases updates on a regular basis. Keeping up with these updates is crucial for the high performance of your website. Excellent hosting management will help your site to cope with the updates and monitor how each update is impacting the website through automated tests. You’ll know if any issue is detected and have some action steps to follow to avoid downtime or security threats. Regular software updates also ensure high security for your website.

So, if you go for the WordPress Toolkit course, you’ll learn the tools to deal with the three most important issues – security, avoiding downtime, and regular updates – both for yourself and your business or your clients and customers.

4. The Plesk Obsidian: What’s New Course

The fourth course on our list is the Plesk Obsidian: What’s New course. This course is regularly updated and it showcases all the new features and changes in Plesk Obsidian. 

If you’re using Plesk every day and it’s a big part of your role, then this is the course for you. And if you haven’t kept up to date with Plesk Obsidian, now’s your chance. If you want to take a look at all the new features in Plesk Obsidian before signing up for the course, you can check out this guide.

And so, on to the last course.

5. The SEO Toolkit Course

The fifth and final course on our list is the SEO Toolkit course. In this course, you learn how to use the SEO Toolkit extension to make your websites more visible by improving their search engine optimization (SEO).

Why is SEO important for your online presence? SEO helps in creating more visibility for your website or business. Good SEO management can drive more traffic to your website and a higher rank on search engines. SEO experts study and observe patterns that lead to higher rankings and the correct SEO tool can give you insights about developing better SEO strategies. Higher rankings lead to increased brand awareness, generating more leads, and ultimately increasing your sales revenue. With SEO tools, you also observe your site analytics, helping you to know your customers better and aligning your offers with their needs. You can also get to know where your SEO game is lacking.

So, there you have it. A wrap-up of the top 5 Plesk University courses for you to take this year. Build your skills and add more pizazz to your resume to take you to the next level! If you’ve taken any Plesk University course, let us know in the comments below. 

Until next time, arrivederci.

NGINX vs Apache – Which Is the Best Web Server in 2020?

NGINX vs Apache – which server is superior? NGINX and Apache are two of the biggest open source web servers worldwide, handling more than half of the internet’s total traffic. They’re both designed to handle different workloads and to complement various types of software, creating a comprehensive web stack.

But which is best for you? They may be similar in many ways, but they’re not identical. Each has its own advantages and disadvantages, so it’s crucial that you know when one is a better solution for your goals than the other.

In this in-depth guide, we explore how these servers compare in multiple crucial ways, from connection handling architecture to modules and beyond.

First, though, let’s look at the basics of both Nginx and Apache before we take a deeper dive.

NGINX - NGINX vs Apache - Plesk

NGINX Outline

NGINX came about because of a grueling test: a server having to sustain 10,000 simultaneous client connections. It uses an asynchronous, event-driven architecture to cope with this prodigious load. That design means it takes high loads, and wildly varying loads, in its stride, keeping RAM usage, CPU usage, and latency predictable even under pressure.

NGINX is the brainchild of Igor Sysoev, who began work on it in 2002 as a solution to the C10K problem – the difficulty web servers had handling thousands of connections at the same time. He released it publicly in 2004, and this early iteration achieved its objective through an asynchronous, event-driven architecture.

Since its public release, NGINX has remained a popular choice, thanks to its lightweight resource utilization and its ability to scale easily even on minimal hardware. As fans will testify, NGINX excels at serving static content with speed and efficiency, because it is designed to pass dynamic requests on to other software better suited to that purpose.

Administrators tend to choose NGINX because of such resource efficiency and responsiveness.

Apache - NGINX vs Apache - Plesk

Apache Outline

Robert McCool is credited with creating the Apache HTTP Server back in 1995, but since 1999 it has been managed and maintained by the Apache Software Foundation. Apache HTTP Server is generally known simply as “Apache”, the HTTP web server being the foundation’s initial — and most popular — project.

Since 1996, Apache has been recognized as the internet’s most popular server, which has led to Apache receiving considerable integrated support and documentation from subsequent software projects. Administrators usually select Apache because of its power, wide-ranging support, and considerable flexibility.

It can be extended through its dynamically loadable module system, and is capable of processing various interpreted languages with no need to hand off to external software.

Apache vs NGINX – Handling Connections

One of the most significant contrasts between Nginx and Apache is their respective connection- and traffic-handling capabilities.

As NGINX was released after Apache, the team behind it had greater awareness of the concurrency issues plaguing sites at scale. This knowledge let them build NGINX from scratch around a non-blocking, asynchronous, event-driven connection-handling algorithm. NGINX spawns worker processes, each capable of handling many connections – even thousands – courtesy of a fast event loop that continuously checks for and processes events. Because actual work is decoupled from connections, each worker attends to a connection only when a new event fires on it.

Every connection handled by the workers is situated in the event loop, alongside numerous others. Events inside the loop undergo asynchronous processing, so that work is handled in a non-blocking way. And whenever each connection closes, it will be taken out of the loop. NGINX can scale extremely far even with limited resources, thanks to this form of connection processing. As the single-threaded server doesn’t spawn processes to handle every new connection, CPU and memory utilization remains fairly consistent during periods of heavy load.

Apache offers a number of multi-processing modules. These are also known as MPMs, and are responsible for determining how to handle client requests. This enables administrators to switch its connection handling architecture simply, quickly, and conveniently.

So, what are these modules?

mpm_prefork

This Apache module creates processes with a single thread each to handle requests, and every child process can accommodate one connection at a time. Provided the volume of requests remains lower than the number of processes, this module delivers extremely fast performance.

But it can demonstrate a serious drop in performance when the number of requests exceeds the number of processes, so this module isn’t always the right option.

Every process with this module has a major effect on RAM consumption too, which makes effective scaling hard to achieve. It can still be a solid choice, however, alongside components built without thread safety in mind. For example, as PHP lacks thread safety, this module can be the safest way to work with mod_php (Apache’s module for processing PHP files).

mpm_worker

Apache’s mpm_worker module is designed to spawn processes capable of managing numerous threads each, with each of those handling one connection. Threads prove more efficient than processes, so this MPM offers stronger scaling than the module discussed above.

As threads outnumber processes, fresh connections can immediately take up a free thread rather than waiting for a suitable process to come along.

mpm_event

Apache’s third module behaves like the aforementioned mpm_worker module in the majority of situations, though it’s been optimised to accommodate keep-alive connections. Under the worker module, a connection holds its thread for the full period it remains alive, whether or not requests are actively being made; mpm_event instead sets aside dedicated threads to manage keep-alive connections, passing requests to worker threads only when there is actual work to do.

It’s clear that Apache’s connection handling architecture offers considerable flexibility when selecting various connections and request-handling algorithms. Options provided are primarily a result of the server’s continued advancement, as well as the growing demand for concurrency as the internet has changed so dramatically.
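As a sketch of how switching and tuning an MPM looks in practice, an administrator on a Debian-style system might configure mpm_event as below. The directive names are standard Apache ones, but the values are illustrative only, not recommendations:

```apacheconf
# /etc/apache2/mods-available/mpm_event.conf (illustrative values)
<IfModule mpm_event_module>
    StartServers             2     # child processes launched at startup
    MinSpareThreads         25     # keep at least this many idle threads
    MaxSpareThreads         75
    ThreadsPerChild         25     # worker threads per child process
    MaxRequestWorkers      150     # cap on simultaneously served requests
    MaxConnectionsPerChild   0     # 0 = never recycle child processes
</IfModule>
```

On Debian and Ubuntu the active MPM is switched with `a2dismod mpm_prefork` followed by `a2enmod mpm_event` and a restart; only one MPM can be loaded at a time.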

Apache vs NGINX – Handling Static and Dynamic Content

When pitting Nginx vs Apache, their ability to handle static and dynamic content requests is a common point of comparison. Let’s take a closer look.

NGINX is not designed for native processing of dynamic content: it has to pass to an external processor to handle PHP and other dynamic content requests. It will wait for content to be returned when it has been rendered, before relaying the results back to the client.

Communication has to be set up between NGINX and a processor across a protocol which NGINX can accommodate (e.g. FastCGI, HTTP, etc.). This can make things a little more complicated than administrators may prefer, particularly when attempting to anticipate the volume of connections to be allowed — an extra connection will be necessary for every call to the relevant processor.

Still, there are some benefits to using this method. As the dynamic interpreter isn’t integrated within the worker process, the overhead applies to just dynamic content. On the other hand, static content may be served in a simpler process, during which the interpreter is only contacted when considered necessary.

Apache servers’ traditional file-based methods mean they’re capable of handling static content, and their performance is primarily a function of those MPM methods covered earlier.

But Apache is designed to process dynamic content too, by integrating a processor of suitable languages into every worker instance. As a result, Apache can accommodate dynamic content in the server itself, with no need to depend on any external components. These can be activated courtesy of the dynamically-loadable modules.

Apache’s internal handling of dynamic content allows it to be configured more easily, and there’s no need to coordinate communication with other software. Modules may be swapped out if and when requirements for content shift.

NGINX or Apache – How Does Directory-level Configuration Work?

Another of the most prominent differences administrators discuss when discussing Apache vs NGINX relates to directory-level configuration, and whether it’s allowed in their content directories. Let’s explore what this means, starting with Apache.

With Apache, additional configuration is permitted on a per-directory level through hidden files placed within content directories, whose directives Apache interprets and applies. These files are called .htaccess.

As .htaccess files are located inside content directories, Apache checks every component on the route to files requested, applying those directives inside. Essentially, this allows the web server to be configured in a decentralized manner, typically utilized for the implementation of rewritten URLs, accessing restrictions, authentication and authorization, as well as caching policies.

Though the same directives could be placed in Apache’s primary configuration file, .htaccess files have some key advantages. First and foremost, they’re applied instantly without the server needing to reload, as they’re interpreted whenever they’re found on a request path.

Secondly, .htaccess files enable non-privileged users to take control of specific elements of their web content without granting them complete control over the full configuration file.

This creates a simple way for certain software, such as content management systems, to configure environments without giving entry to central configuration files. It’s used by shared hosting providers for maintaining control of primary configurations, even while they offer clients their own directory control.
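By way of illustration, a typical .htaccess file dropped into a content directory might combine several of the uses mentioned above – URL rewriting, access restriction, and a caching policy. The file names and values here are hypothetical:

```apacheconf
# .htaccess — applies only to this directory and its children
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [L]              # route pretty URLs to index.php

<Files "config.php">
    Require all denied                   # block direct access to this file
</Files>

<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png "access plus 1 month"  # simple caching policy
</IfModule>
```

Note that directives like RewriteRule only take effect if the server’s AllowOverride setting for that directory permits them.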

With NGINX, interpretation of .htaccess files is out of the question. It also lacks a way to assess per-directory configuration beyond the primary configuration file. As a result, it could be said to offer less flexibility than Apache, though it has a number of benefits too.

Specifically, improved performance is one of the main advantages compared to the .htaccess directory-level configuration system. In the case of standard Apache setups that accommodate .htaccess in any one directory, the server assesses the files in every parent directory leading to the file requested, whenever a request is made. Any .htaccess files found throughout this search will be read before being interpreted.

So, NGINX can serve requests in less time, due to its single-directory searches and file-reads for every request. Of course, this is based on files being located in a directory with a conventional structure.

Another benefit NGINX offers with directory-level configuration relates to security. Distributing access also leads to a distribution of security responsibility to single users, and they might not all be trustworthy. When administrators retain control of the whole server, there’s less risk of security-related problems which grant access to people who can’t be relied upon.

How does File and URI-based Interpretation Work with NGINX and Apache?

When discussing Nginx vs Apache, it’s important to remember the way in which the web server interprets requests, and maps them to system resources, is another vital issue.

When NGINX was built, it was designed to function as both a web and proxy server. The architecture required to fulfil both roles means NGINX works mainly with URIs, translating to the filesystem only when required. This is evident in a number of ways in which its configuration files function.

NGINX has no means of determining filesystem directory configuration. As a result, it’s designed to parse the URI. NGINX’s main configuration blocks are location and server blocks: the former matches parts of the URI which come after the host and port, while the latter interprets hosts requested. Requests are interpreted as a URI, rather than one of the filesystem’s locations.

In the case of static files, requests are eventually mapped to a filesystem location. NGINX chooses the location and server blocks for handling the specific request, before combining the document root with the URI. It also adapts whatever’s required, based on the configuration specified.

With NGINX designed to parse requests as URIs rather than filesystem positions, functionality is simpler across its various server roles: web, proxy, and mail. NGINX is configured by laying out appropriate responses to different request patterns, and it only checks the filesystem when it’s ready to serve the request. This is why it doesn’t implement .htaccess files.

Apache, by contrast, can interpret requests as physical resources on a filesystem. It may also interpret requests as URI locations, which demands a less specific assessment. Generally, Apache utilizes <Directory> or <Files> blocks for the former, and <Location> blocks for resources that are more abstract.

As Apache was conceived as a web server, its standard behaviour is to interpret requests as traditional filesystem resources. The process starts at the document root and appends the part of the request that comes after the host and port as it attempts to locate an actual file. On the web, the filesystem hierarchy thus appears as the available document tree.

Apache offers various alternatives for when requests fail to match the underlying filesystem. For example, Alias directives can map alternative locations, while <Location> blocks are a way to work with the URI rather than the filesystem. A number of regular-expression variants can also be used to apply configuration across the filesystem with greater flexibility.

As Apache is capable of functioning on the webspace and underlying filesystems, it has a heavier focus on filesystem methods. This is evident in a number of the design choices, such as the presence of .htaccess files in per-directory configuration. Apache documentation advises not to utilize URI-based blocks for inhibiting access when requests match those underlying filesystems.

NGINX vs Apache: How Do Modules Work?

When considering Apache vs NGINX, bear in mind that they can be extended with module systems, though they work in significantly different ways.

NGINX modules traditionally have to be chosen and compiled into the core software, as they could not be loaded dynamically (dynamic module loading only arrived in NGINX 1.9.11, and many third-party modules still need to be compiled in). Some NGINX users feel it’s less flexible as a result. This may be particularly true for those unhappy about managing compiled software that sits outside their distribution’s conventional packaging system.

Even though packages typically include the commonly used modules, you need to build the server from source if you want a non-standard one. Still, this makes NGINX remarkably lean, allowing users to dictate what they want from their server by including only the functionality they plan to utilize.

For many people, NGINX seems to offer greater security as a result: arbitrary components cannot be hooked into the server. That said, if a server is in a position where an attacker could load arbitrary modules, it has likely been compromised already.

Furthermore, NGINX modules offer rate limiting, geolocation, proxying support, rewriting, encryption, mail functionality, compression, and more.

With Apache, the module system provides users with the option to load or unload modules dynamically based on your individual needs. Modules may be switched on and off even though the Apache core remains present at all times, so you can add or take extra functionality away and hook into the main server.

With Apache, this functionality is utilized for a wide range of tasks, and as this platform is so mature, users can choose from a large assortment of modules. Each of these may adjust the server’s core functionality in various ways, e.g. mod_php embeds a PHP interpreter into all of the running workers.

However, modules aren’t restricted to processing dynamic content: some of their functions include client authentication, URL rewriting, caching, proxying, encryption, compression, and more. With dynamic modules, users can expand core functionality significantly, with no need for extensive extra work.

NGINX or Apache: How do Support, Documentation, and Other Key Elements Work?

When trying to decide between Apache and NGINX, another important factor to bear in mind is how easy each is to set up and how well it is supported by other software.

The level of support for NGINX is growing, as a greater number of users continue to implement it. However, it still has some way to go to catch up with Apache in certain areas.

Once upon a time, it was hard to gather detailed documentation for NGINX (in English), as the majority of its early documentation was in Russian. However, documentation has expanded since interest in NGINX has grown, so there’s a wealth of administration resources on the official NGINX website and third parties.

On the topic of third-party applications, documentation and support are becoming easier to find. Package maintainers are starting to offer a choice between auto-configuring for NGINX or Apache. Even without such support, it’s easy to configure NGINX to complement other software, as long as the project in question documents clear requirements (such as headers, permissions, etc.).

Support for Apache is fairly easy to find, as it’s been such a popular server for such a long time. An extensive library of first- and third-party documentation is on offer out there, for the core server and task-based situations that require Apache to be hooked up with additional software.

As well as documentation, numerous online projects and tools include tooling to bootstrap themselves within an Apache setting, whether in the projects themselves or in the packages maintained by the team responsible for the distribution’s packaging.

Apache receives decent support from external projects mainly due to its market share and the sheer number of years it’s been operating. Administrators are also more likely to have experience with Apache, not just because it’s so prevalent but because many of them begin in shared-hosting scenarios, which rely on Apache for its .htaccess distributed management capabilities.

NGINX vs Apache: Working with Both

Now that we’ve explored the advantages and disadvantages of NGINX and Apache, you should be in a better position to judge which is best for you. But a lot of users discover they can leverage both servers’ benefits by using them together.

The traditional configuration for using NGINX and Apache in unison is to position NGINX in front of Apache as a reverse proxy, so that it handles every client request. Why is this important? Because it takes advantage of NGINX’s fast processing and its ability to handle a large number of connections at the same time.

In the case of static content, NGINX is a fantastic server, as files are served to the client directly and quickly. With dynamic content, NGINX proxies requests to Apache for processing. Apache then brings rendered pages back, and NGINX passes the content on to clients.

Plenty of people find this is the ideal setup, as it enables NGINX to perform as a sorting machine, handling all requests and passing on only those it has no native capability to serve. If you reduce the level of requests reaching Apache, you reduce the blocking that follows when Apache threads or processes are occupied.

With this configuration, users can scale out through the addition of extra backend servers as required. NGINX may be configured to pass to a number of servers with ease, boosting the configuration’s performance and its resistance to failure.

Apache vs NGINX – Final Thoughts

It’s fair to say that NGINX and Apache offer quality performance — they’re flexible, they’re capable, and they’re powerful. Choosing which server works best for your needs depends largely on assessing your individual requirements and testing with those patterns you believe you’re likely to see.

A number of differences between these projects have a tangible effect on capabilities, performance, and the time required to implement each solution effectively. But these tend to be the result of numerous trade-offs that shouldn’t be dismissed easily. When all is said and done, there’s no web server that meets everyone’s needs every single time, so it’s best to utilize the solution that suits your objectives best.

Linux Server Security – Best Practices for 2020


Linux server security is at a sufficient level from the moment you install the OS. And that’s great to know because… hackers never sleep! They’re kind of like digital vandals, taking pleasure – and sometimes money too – as they inflict misery on random strangers all over the planet.

Anyone who looks after their own server appreciates the fact that Linux is highly secure right out of the box. Naturally, it isn’t completely watertight. But it does do a better job of keeping you safe than most other operating systems.

Still, there are plenty of ways you can improve it further. So here are some practical ways to keep the evil hordes from the gates. It will probably help if you’ve tinkered under the hood of a web server before. But don’t think that you have to be a tech guru or anything like that.

Deactivate network ports when not in use


Leave a network port open and you might as well put out the welcome mat for hackers. To maintain web host security, you can use the “netstat” command to see which network ports are currently open and which services are making use of them. Closing any ports you don’t need shuts off another avenue of attack for hackers.

You also might want to set up “iptables” to deactivate open ports, or simply use the “chkconfig” command to shut down services you won’t need. Firewalls like CSF let you automate the iptables rules, so you could just do that. If you use the Plesk platform as your hosting management software, please pay attention to this article about Plesk ports.
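To see what’s currently listening, a quick check looks like this (the “ss” tool from iproute2 is the modern stand-in for netstat on most distributions):

```shell
# List listening TCP and UDP sockets plus the processes using them.
# The classic equivalent is "netstat -tulpn".
ss -tulpn
```

Any port in that list that you can’t account for is a candidate for shutting down.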

The SSH port is usually 22, and that’s where hackers will expect to find it. To enhance Linux server security, change it to some other port number you’re not already using for another service. This way, you’ll be making it harder for the bad guys to inject malware into your server. To make the change, just go to /etc/ssh/sshd_config and enter the appropriate number.
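The relevant directive looks like this; 2222 is purely an illustrative port number, so pick any unused one:

```
# /etc/ssh/sshd_config
Port 2222
```

After saving, restart the SSH daemon (for example with systemctl restart sshd) and confirm you can log in on the new port before closing your current session.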

Update Linux Software and Kernel


Half of the Linux security battle is keeping everything up to date, because updates frequently add extra security features. Linux offers all the tools you need to do this, and upgrading between versions is simple too. Every time a new security update becomes available, review it and install it as soon as you can. You can use a package manager such as yum (RPM-based) or apt-get/dpkg (Debian-based) to handle this.

# yum update

OR

# apt-get update && apt-get upgrade

It’s possible to set up RedHat / CentOS / Fedora Linux so that you get yum package update notifications sent to your email. This is great for Linux security, and you can also apply all security updates using a cron job. Apticron can be used to send security update notifications under Debian / Ubuntu Linux. You can also use the apt-get command/apt command to configure unattended-upgrades for your Debian/Ubuntu Linux server:

$ sudo apt-get install unattended-upgrades apt-listchanges bsd-mailx

Reduce Redundant Software to Increase Linux Security

For greater Linux server security hardening, it’s worth doing a spring clean (at any time of the year) on your installed web services. It’s easy for surplus apps to accumulate and you’ll probably find that you don’t need half of them. In the future, for better Linux server security, try not to install software that you don’t need. It’s a simple and effective way to reduce potential security holes. Use a package manager like yum (RPM-based) or apt-get/dpkg (Debian-based) to go through your installed software and remove anything you no longer need.

# yum list installed
# yum list packageName
# yum remove packageName

OR

# dpkg --list
# dpkg --info packageName
# apt-get remove packageName

Turn off IPv6 to boost Linux server security


IPv6 is better than IPv4, but you probably aren’t getting much out of it – because neither is anyone else. Hackers get something from it though – because they use it to send malicious traffic. So shutting down IPv6 will close the door in their faces. Edit /etc/sysconfig/network and change the settings to read NETWORKING_IPV6=no and IPV6INIT=no. Simple as that.
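Note that /etc/sysconfig/network is specific to RedHat-family systems. A more distribution-neutral sketch uses sysctl settings like these:

```
# /etc/sysctl.conf -- disable IPv6 on all interfaces
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```

Apply the change with sysctl -p.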

Turn off root logins to improve Linux server security

Linux servers the world over allow the use of “root” as a username. Knowing this, hackers will often try subverting web host security to discover your password before slithering inside. It’s because of this that you should not sign in as the root user. In fact, you really ought to remove it as an option, creating one more level of difficulty for hackers and stopping them from getting past your security with just a lucky guess.

So, all it takes is for you to create a separate username, then use the “sudo” special access command to execute root-level commands. Sudo is great because you can grant it to any user you want to have admin capabilities, without giving them root access. Because you don’t want to compromise security by giving them both.

Before you deactivate the root account, check that you’ve created and authorized your new user. Next, open /etc/ssh/sshd_config in nano or vi, locate the “PermitRootLogin” parameter, change the default setting of “yes” to “no”, and save your changes.
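A minimal sketch of the whole sequence, where “deploy” is a placeholder username rather than anything from this article:

```shell
adduser deploy                  # create the replacement admin user
usermod -aG sudo deploy         # Debian/Ubuntu; the group is "wheel" on RHEL/CentOS

# In /etc/ssh/sshd_config, set:
#   PermitRootLogin no
# then reload the SSH daemon:
systemctl reload sshd
```

Keep an open session while you test the new user’s login, so a mistake can’t lock you out.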

GnuPG encryption for web host security


When data is on the move across your network, hackers will frequently attempt to compromise Linux server security by intercepting it. Always make sure anything going to and from your server has password encryption, certificates and keys. One way to do this is with an encryption tool like GnuPG. It uses a system of keys to ensure nobody can snoop on your info when in transit.
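A minimal sketch of passphrase-based (symmetric) encryption with GnuPG; the file names and passphrase are placeholders, and the --pinentry-mode loopback flag assumes GnuPG 2.1 or newer:

```shell
# Encrypt a file with a passphrase, then decrypt it to verify the round trip.
echo "secret data" > demo.txt
gpg --batch --yes --pinentry-mode loopback --passphrase "example-pass" \
    --symmetric --cipher-algo AES256 -o demo.txt.gpg demo.txt
gpg --batch --yes --pinentry-mode loopback --passphrase "example-pass" \
    -o demo-decrypted.txt --decrypt demo.txt.gpg
diff demo.txt demo-decrypted.txt
```

For traffic between machines you would normally use a public/private key pair (gpg --gen-key) instead of a shared passphrase.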

Change/boot to read-only

All files related to the kernel on a Linux server are in the “/boot” directory. The standard access level for the directory is “read-write”, but it’s a good idea to change it to “read-only”. This stops anyone from modifying your extremely important boot files.

Just edit the /etc/fstab file and add “LABEL=/boot /boot ext2 defaults,ro 1 2” to the bottom. It’s completely reversible, so you can make future changes to the kernel by switching back to “read-write” mode. Then, once you’re done, you can revert to “read-only”.
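The resulting fstab entry would look like this, assuming an ext2 /boot partition as in the example above:

```
# /etc/fstab
LABEL=/boot  /boot  ext2  defaults,ro  1 2
```

When you need to update the kernel, mount -o remount,rw /boot makes the partition writable again, and mount -o remount,ro /boot locks it back down afterwards.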

A better password policy enhances Web Host Security


Passwords are always a security problem because humans are. People can’t be bothered to come up with a lot of different passwords – or they can’t remember them. So what happens? They use the same ones in different places. Or worse yet – combinations that are easy to remember, like “password” or “abcde”. Basically, a gift to hackers.

Make it a requirement for passwords to contain a mix of upper AND lower case letters, numbers, and symbols. You can enable password ageing to make users discard previous passwords at fixed intervals. Also think about banning old passwords, so once people use one, it’s gone forever. The “faillog” command lets you put a limit on the number of failed login attempts allowed and lock user accounts. This is ideal for preventing brute-force attacks.
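On most distributions the system-wide ageing defaults live in /etc/login.defs; the values below are examples, not recommendations for every site:

```
# /etc/login.defs
PASS_MAX_DAYS   90   # force a password change every 90 days
PASS_MIN_DAYS   1    # stop users changing straight back
PASS_WARN_AGE   7    # warn users a week before expiry
```

For an existing account, the equivalent per-user command is chage -M 90 -m 1 -W 7 userName.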

So just use a strong password all the time

Passwords are your first line of defense, so make sure they’re strong. Many people don’t really know what a good password looks like: it needs to be complex, but also long enough to make it the strongest it can be.

At admin level, you can help users by securing Plesk Obsidian and enforcing the use of strong passwords which expire after a fixed period. Users may not like it, but you need to make them understand that it saves them a lot of possible heartache.

So what are the ‘best practices’ when setting up passwords?

  1. Use passwords that are as long as you can manage
  2. Avoid words that appear in the dictionary (like “blue grapes”)
  3. Steer clear of number replacements that are easy to guess (like “h3ll0”)
  4. Don’t reference pop culture (such as “TARDIS”)
  5. Never use a password in more than one place
  6. Change your password regularly and use a different one for every website
  7. Don’t write passwords down, and don’t share them. Not with anybody. Ever!

The passwords you choose should increase Web Host Security by being obscure and not easy to work out. You’ll also help your security efforts if you give your root (Linux) or RDP (Windows) login its own unique password.

Linux server security needs a firewall


A firewall is a must-have for web host security, because it’s your first line of defense against attackers, and you’re spoiled for choice. NetFilter is built into the Linux kernel; combined with iptables, you can use it to resist DDoS attacks.

TCPWrapper is a host-based access control list (ACL) system that filters network access for different programs. It has host name verification, standardized logging and protection from spoofing. Firewalls like CSF and APF are also widely used, and they also come with plugins for popular panels like cPanel and Plesk.
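As an illustrative fragment rather than a complete policy, iptables can rate-limit new connections to blunt a brute-force flood; the port and thresholds here are examples only:

```shell
# Track new connections to port 22 and drop any source address that
# opens more than 4 new connections inside 60 seconds.
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
         -m recent --set --name ssh-limit
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
         -m recent --update --seconds 60 --hitcount 4 \
         --name ssh-limit -j DROP
```

Tools like CSF generate rule sets of this kind for you, which is usually less error-prone than maintaining them by hand.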

Locking User Accounts After Unsuccessful Logins

For Linux security, the faillog command shows unsuccessful login attempts and can assign limits to how many times a user can get their login credentials wrong before the account is locked. faillog formats the contents of the failure log from the /var/log/faillog database/log file. To view unsuccessful login attempts, enter:

faillog

To open up an account locked in this way, run:

faillog -r -u userName

With Linux security in mind, be aware that you can use the passwd command to lock and unlock accounts.

To lock a Linux account:

passwd -l userName

To unlock a Linux account:

passwd -u userName

Try disk partitions for better Web host security


If you partition your disks then you’ll be separating OS files from user files, tmp files and programs. Try disabling SUID/SGID access (nosuid) and direct binary execution (noexec) on the non-OS partitions.
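For example, a dedicated /tmp mount with those options would look like this in /etc/fstab (an illustrative entry; adapt it to your own layout):

```
# /etc/fstab
tmpfs  /tmp  tmpfs  defaults,nosuid,noexec,nodev  0 0
```

With this in place, a script dropped into /tmp by an attacker cannot be executed directly from there.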

Avoid Using Telnet, FTP, and Rlogin / Rsh Services

With the majority of network configurations, anyone on the same network with a packet sniffer can intercept FTP, telnet, or rsh commands, usernames, passwords, and transferred files. To avoid compromising Linux server security, use OpenSSH, SFTP, or FTPS (FTP over SSL), which gives FTP the benefit of SSL or TLS encryption. To remove outdated services like NIS or rsh, enter this yum command:

# yum erase xinetd ypserv tftp-server telnet-server rsh-server

For Debian/Ubuntu Linux server security, give the apt-get command/apt command a try to get rid of non-secure services:

$ sudo apt-get --purge remove xinetd nis yp-tools tftpd atftpd tftpd-hpa telnetd rsh-server rsh-redone-server

Use an Intrusion Detection System

Network intrusion detection systems (NIDS) keep watch for malevolent activity against Linux server security, such as DoS attacks, port scans, and intrusion attempts.

For greater Linux server security hardening, it’s recommended that you use integrity-checking software before you take a system into a production environment online. If possible, install AIDE before connecting the system to a network. AIDE is a host-based intrusion detection system (HIDS) which monitors and analyses a computing system’s internals. You would be wise to use rkhunter rootkit detection software as well.

Logs and Audits

You can’t manage what you don’t measure, so if you want to stop hackers, your system needs to log every single time that intruders try to find a way in. Syslog stores its data in the /var/log/ directory by default, and those logs can also help you identify the surreptitious routes inside that misconfigured software can present.

Secure Apache/PHP/NGINX server

Edit httpd.conf file and add:

ServerTokens Prod
ServerSignature Off
TraceEnable Off
Options all -Indexes
Header always unset X-Powered-By

Restart the httpd/apache2 server on Linux, run:

$ sudo systemctl restart apache2.service

OR

$ sudo systemctl restart httpd.service

Activate CMS auto-updates


CMSs are quite complex, so hackers are always trying to exploit their security loopholes. Joomla!, Drupal and WordPress are all hugely popular platforms, so developers are constantly working on new security fixes. This means updates are important and should be applied straight away. The best way to ensure this happens is to activate auto-updates, so you won’t even have to think about it. Your host isn’t responsible for the content of your website, so it’s up to you to ensure you update it regularly. And it won’t hurt to back it up once in a while either.

Backup regularly


Regular and thorough backups are probably your most important security measure. Backups can help you recover from a security disaster. Typical UNIX backup programs are dump and restore, and we recommend them. For maximum Linux security, you need to back up to external storage with encryption, which means something like a NAS server or a cloud-based service.
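As a minimal sketch (all file names here are placeholders), a dated archive plus a checksum lets you verify a backup before and after it travels to external storage:

```shell
# Create some demo data, archive it, and record an integrity checksum.
mkdir -p demo-src
echo "payload" > demo-src/file.txt
tar -czf backup.tar.gz demo-src
sha256sum backup.tar.gz > backup.tar.gz.sha256

# Later, or on the remote side, verify the archive before restoring it.
sha256sum -c backup.tar.gz.sha256
tar -xzf backup.tar.gz -C /tmp
```

In practice you would also encrypt the archive before it leaves the machine, and test a restore periodically; a backup you have never restored is only a hope.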

Protect Email Directories and Files

These Linux security tips wouldn’t be complete without telling you that Linux has some great ways to protect data against unauthorized access. File permissions and mandatory access control (MAC) are great at stopping intruders from getting at your data, but all the Linux permissions in the world don’t count for anything if they can be circumvented, for instance by transplanting a hard drive to another machine. In such a case you need to protect Linux files and partitions with these tools:

  • For password-protected file encryption and decryption, use the gpg command.
  • Both Linux and UNIX can add password protection to files using openssl and other tools.
  • The majority of Linux distributions support full disk encryption. Make sure that swap is encrypted too, and only allow bootloader editing via a password.
  • Make sure root mail is forwarded to an account that you check.
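A sketch of the openssl route mentioned above; the file names and passphrase are placeholders, and the -pbkdf2 flag assumes OpenSSL 1.1.1 or newer:

```shell
# Password-protect a file with AES-256, then decrypt it to confirm.
echo "mail secret" > msg.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:example-pass \
        -in msg.txt -out msg.enc
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:example-pass \
        -in msg.enc -out msg.dec
diff msg.txt msg.dec
```

In real use you would supply the passphrase interactively or from a protected file rather than on the command line, where it can appear in the process list.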

System Accounting with auditd

Auditd is used for system audits. Its job is to write audit records to the disk. This daemon reads the rules in /etc/audit/audit.rules at start-up. You have various options for amending that file, such as setting the location of the audit log file. Auditd will help you gain insight into these common events:

  • Events at system startup and shutdown (reboot/halt).
  • The date and time an event happened.
  • The user who triggered the event (for example, an attempt to access the /path/to/topsecret.dat file).
  • The type of event (edit, access, delete, write, update file, and commands).
  • Whether the event succeeded or failed.
  • Events that modify the time and date.
  • Who modified network settings.
  • Actions that change user or group information.
  • Who changed a particular file, and so on.
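In audit.rules syntax, watches for the kinds of events listed above look like this (the paths and key names are illustrative):

```
# Watch identity files for writes and attribute changes
-w /etc/passwd -p wa -k identity
-w /etc/group -p wa -k identity
# Log changes to the system clock (64-bit syscalls)
-a always,exit -F arch=b64 -S settimeofday -S clock_settime -k time-change
# Watch network configuration files
-w /etc/hosts -p wa -k network-change
```

Matching records can then be pulled out of the audit log with ausearch -k identity, for example.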

Use Kerberos

Kerberos is a third-party service offering authentication that aids Linux security hardening. It uses shared secret cryptography and assumes that packets moving on a non-secure network are readable and writable. Kerberos is based on symmetric-key cryptography and so needs a key distribution center. Kerberos lets you make remote login, remote copy, secure inter-system file copying, and other risky actions safer and it also gives you more control over them. Kerberos authentication prevents unauthorized users from spying on network traffic and grabbing passwords.

Linux Server Security Summary

That’s a lot of tips, but you need to keep your Linux server security updated in a world of thieves and vandals. These despicable beings are hard at work all the time, always looking to exploit any chink in a website’s armor. If you give them the slimmest opportunity to disrupt your business, they will happily take advantage of it. Since there’s such a huge army of them, you need to make sure that your castle has extremely strong defenses.

Let us know how many of these tips you have implemented, or if you have any questions in the comments below.

Next Level Ops Podcast: Working with Self-hosting Email with Christian Mollekopf

Hello Pleskians! This week we’re back with the ninth episode of the Official Plesk Podcast: Next Level Ops. Only one more to go and we’re already at the close of Season 1! In this installment, Superhost Joe and Christian Mollekopf from Apheleia IT talk about working with self-hosting email.

In This Episode: Choosing An Email Hosting Provider, Reputation Management and Taking Back Control

What should you consider when choosing an email hosting provider? What are some of the options users have when searching for good email providers, especially if you also want to look at enterprise options? Is it good enough to opt for what your web host offers or to use a service like GSuite? What are some of the things you should think about when going the self-hosting route? In this episode, Joe and Christian discuss how to address options and issues surrounding email hosting. 

“I think usually it [email] is something that you are going to use for quite a long time. It’s like a very central part of your infrastructure typically. So, I think it’s definitely worth considering a couple of options,” says Christian. When choosing the right hosting provider, it’s worth considering things like which features you require, whether it’s simply email or also calendars and tasks, whether you need shared folders and calendars, and which type of client you want. Another factor to consider is vendor lock-in: if you ever want to transfer to another hosting provider, how easy will it be to migrate your data to another system?

If vendor lock-in is a concern for you, the question arises whether you can self-host your email. What happens when you do that? Some common issues to watch out for: making sure that other servers can distinguish genuine email coming from your server from spam that merely pretends to come from it, ensuring that your server doesn’t send spam, and managing the reputation of your domain. To read some of the best practices of self-hosting email, go here.

Key Takeaways

  • What should someone consider when choosing an email hosting provider? Your email is probably going to be a central part of your infrastructure and you’ll use it for a long time, so start out by keeping this in mind. The second thing is to consider the features you need, such as a calendar, for example. Do consider your email’s interoperability and vendor lock-in. You should be able to migrate away if you want to.
  • What are the benefits of self-hosting over using a service like Gmail? One word: Control. You maintain control over your solution. If you self-host, you have more control over your email.
  • As a hosting provider, what are some of the pitfalls of hosting email? The biggest pitfall is reputation management. Other services that receive email have to fight a lot of spam. Track the reputation of domains and IP addresses.
  • What features in Plesk help with email hosting? SPF, DMARC, and DKIM are built-in. There are also UIs for important measures like rate and message size limits, and the Plesk Email Security extension adds anti-spam. Find out more about the features here.

…Alright Pleskians, it’s time to hit the play button if you want to hear the rest. If you’re interested in hearing more from Next Level Ops, check out the rest of our podcasts. We’ll be back soon with our last installment.

The Official Plesk Podcast: Next Level Ops Featuring

Joe Casabona

Joe is a college-accredited course developer. He is the founder of Creator Courses.

Christian Mollekopf

Christian is a Senior Software Engineer at Apheleia IT.

Did you know we’re also on Spotify and Apple Podcasts? In fact, you can find us pretty much anywhere you get your daily dose of podcasts. As always, remember to update your daily podcast playlist with Next Level Ops.  And stay on the lookout for our next episode!