Performance and Security Enhancements in WordPress 5.5 Release

More than 800 volunteers helped to make the release of WordPress 5.5 “Eckstine” possible. Performance and security enhancements take the top spot in this release. In this blog post, we want to highlight some of the long-awaited features:

The New Sitemap

WordPress 5.5 can create an XML sitemap without the need for a plugin. This new feature should help improve your website’s SEO by providing search engines with a list of all your URLs. You can find it under: https://<your-domainname>/wp-sitemap.xml.
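To verify that the sitemap is being generated, you can fetch it from the command line; example.com below stands in for your own domain:

$ curl -s https://example.com/wp-sitemap.xml | head -n 20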

Enhanced Speed

When it comes to page load times, lazy loading is a handy tool. It defers the loading of images until your visitors scroll them into view, reducing the initial load time of your page without compromising on your image selection.

Automated Plugin and Theme Updates

While automated updates have long been common for WordPress core, WordPress 5.5 makes this feature available for themes and plugins too. It is opt-in, which means you need to manually activate it per plugin or theme in your WordPress Dashboard.
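If you manage many sites and would rather switch everything on at once, WordPress core also exposes the auto_update_plugin and auto_update_theme filters. A minimal sketch, written as a must-use plugin created from the shell; the file name is just an example:

$ mkdir -p wp-content/mu-plugins
$ cat > wp-content/mu-plugins/enable-auto-updates.php <<'EOF'
<?php
// Opt every plugin and theme into automatic background updates.
add_filter( 'auto_update_plugin', '__return_true' );
add_filter( 'auto_update_theme', '__return_true' );
EOF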

Block Patterns

Following tradition, the new WordPress release also includes the latest changes to the Gutenberg Editor. Newly introduced is the block patterns feature, which lets you easily reuse a combination and layout of various Gutenberg blocks. With this change, the Gutenberg Editor catches up with site-builder plugins like Elementor.

Goodbye jQuery Migrate

Ever wondered why there is always that message about jQuery Migrate in your browser console? Following the recommendation of the jQuery team, the jQuery Migrate plugin has been removed from WordPress core. Upcoming WordPress releases will update jQuery and jQuery UI as well.

Keep In Mind Before You Update

Benefit from all the features of the new WordPress 5.5 release, but don’t simply hit the update button inside your WordPress instance. The removal of the jQuery Migrate code in particular might lead to unwanted issues on your website. Our recommendation (roughly sketched in the example after the list) is to:

  1. Create a 1-to-1 staging environment of your website.
  2. Do the update on the stage.
  3. Check for errors and issues.
  4. Fix if needed.
  5. Repeat steps 1–4 as needed.
  6. Update your live website with the fix.
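As a rough sketch of steps 1 and 2, cloning a site into a staging copy with rsync and WP-CLI might look like this; all paths and URLs are examples, and the staging copy is assumed to have its own wp-config.php pointing at a separate database:

# Copy the live document root to the staging document root.
rsync -a /var/www/vhosts/example.com/httpdocs/ /var/www/vhosts/staging.example.com/httpdocs/

# Export the live database and import it into the staging database.
wp db export /tmp/live.sql --path=/var/www/vhosts/example.com/httpdocs
wp db import /tmp/live.sql --path=/var/www/vhosts/staging.example.com/httpdocs

# Point the staging copy at its own URL, then run the update there.
wp search-replace 'https://example.com' 'https://staging.example.com' --path=/var/www/vhosts/staging.example.com/httpdocs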

Or, use Smart Updates and let Plesk automate your update process. Learn more about how you can optimize processes and your revenue with Smart Updates in this article.

Already running the latest WordPress release? Let us know your experience in the comments below!

Next Level Ops Podcast: Modern Web Development Tools with Brian Richards

Hello Pleskians! This week we’re back with the tenth and final episode of the Official Plesk Podcast: Next Level Ops. We’re already at the close of the season and we’d like to thank every single one of our guests and listeners, as well as our host for being a part of Next Level Ops! In this installment, Superhost Joe chats with Brian Richards, Creator of WPSessions, about essential web development tools for modern web developers.

In This Episode: jQuery Turns 14, Brian’s Toolkit for Web Development, and Leveling Up

What coding tools are there for the everyday web developer? With so many web development tools out there, how do you decide which ones to keep in your toolbox? How can you level up your skills and find new tools to use? All of this and more in this episode of Next Level Ops.

“Knowing which tools to look for is the entire battle. So, where do you find the tools that help make your job easier? How do you know that they actually work as advertised? Why should you trust them? When can you trust them?”

Brian Richards, Creator of WPSessions

Use Code Linting

First of all, there are some concepts to get familiar with. For example, code linting helps you find errors while you’re writing your code. It flags the spot where you’ve inserted a character that breaks your code, depending on the language you’re coding in.

Configure Your Code Editor

Second, Brian recommends that you find a code editor that you love. You can then configure the editor of your choice to make it more productive for you by customizing shortcuts and adding code completion and formatting. A few changes like this will tailor your code editor into the best choice for you. Keep in mind that instead of looking for the next shiny product, you should use the tools that work for you and stick to them. Keep reading for recommended code editors and local development tools below.

Follow Coding Standards

Additionally, when coding it’s important to adopt some kind of coding standard and make sure that you follow it. Following standards should help you avoid running into bugs. Learn about local development environments that help you build projects for the web while offline. There are many tools specialized for the platform and languages you want to work with.

Love the Command Line

And last but not least, become familiar with, and begin to love, the command line. Read on for the key takeaways of recommended tools and strategies from Brian to orient your web development. This list is a must-have for web developers, so you’d better bookmark this page!

Key Takeaways

A List of Great Tools

  • Free and open-source code editor: VSCode
  • Code sniffers that can check your code for compliance with coding standards.
  • GitHub needs little introduction. Use it for testing, deployment, and peer review.
  • Laravel Valet is a fast local development environment for Mac with minimal resource requirements.
  • Use Local by Flywheel for local WordPress development.
  • Lando is a local development dependency management and automation tool.
  • Know and love the command line.
  • Wait at least two years before adopting a new library. And if you’re picking up a code library, don’t forget to follow the coding standards set by the library.

Choose Your Learning Battles

…Alright Pleskians, it’s time to hit the play button if you want to hear all the details. If you’re interested in hearing more from Next Level Ops, check out the rest of our podcasts. This was the last installment this season, so keep checking in to find out our future plans!

The Official Plesk Podcast: Next Level Ops Featuring

Joe Casabona

Joe is a college-accredited course developer. He is the founder of Creator Courses.

Brian Richards

Brian is the Creator of WPSessions and an independent web developer.

Did you know we’re also on Spotify and Apple Podcasts? In fact, you can find us pretty much anywhere you get your daily dose of podcasts. As always, remember to update your daily podcast playlist with Next Level Ops. Until next time, stay safe.

Plesk Requirements – Hardware & Software

Plesk Obsidian is the new generation of the very popular Plesk control panel for website hosts. Plesk Obsidian has numerous advanced features and includes support for the latest tech, including Git, AutoSSL and Docker.

Plesk Hardware Requirements

Like any other complex software solution, Plesk Obsidian depends on hardware resources.

Plesk Minimum Requirements

  • The minimum amount of RAM required for installing and running Plesk on Linux is 512 MB + 1 GB swap. On Windows – 2 GB of RAM.
  • The minimum amount of free disk space required for installing and running Plesk is 10 GB on Linux and 30 GB on Windows.

Plesk Recommended Requirements

For an ordinary shared hosting environment we recommend at least 1 GB of RAM per 40–50 websites, so 200 websites would imply Plesk hardware requirements of a minimum of 4 GB. We base this recommendation on the following assumptions:

  • On average, shared hosting servers see that about 10% of websites are active; in other words, 10% of websites have a persistent level of traffic week in, week out.
  • 128 MB of RAM will handle most websites. For example:
    • 64 MB for WordPress
    • 64 MB for Joomla
    • 128 MB for Drupal
    • 128 MB for Symfony
  • A maximum of 1 to 3 simultaneous visitors for each website, with no more than 500 unique visitors per website on any given day.

Websites with higher traffic (for example, 5 to 10 concurrent visitors and a total of 1,000 to 30,000 visitors per day) will require more RAM, in the range of 500 MB to 1 GB for each website. Also note that you need enough free disk space for memory swapping.

Amount of RAM on the server    Recommended free disk space for swapping
Less than 1 GB                 1 GB
1 GB or more                   1/2 * the amount of RAM

As for disk space, we recommend having this much disk space for hosting:

Type of hosting                                        Recommended free disk space for websites
Typical shared hosting (100–500 websites per server)   Between 2 and 2.5 GB per website
Dedicated VPS hosting (1–10 websites per server)       Between 4 and 12 GB per website

Plesk Software Requirements

Plesk Obsidian depends on the operating system and its software environment.

Supported Operating Systems

Linux

Plesk Obsidian for Linux can run on the following operating systems:

Operating system SNI support IPv6 support
Debian 9 64-bit ** Yes Yes
Debian 10 64-bit ** Yes Yes
Ubuntu 16.04 64-bit ** Yes Yes
Ubuntu 18.04 64-bit ** Yes Yes
Ubuntu 20.04 64-bit ** Yes Yes
CentOS 7.x 64-bit Yes Yes
CentOS 8.x 64-bit *** Yes Yes
Red Hat Enterprise Linux 6.x 64-bit * Yes Yes
Red Hat Enterprise Linux 7.x 64-bit * Yes Yes
Red Hat Enterprise Linux 8.x 64-bit * Yes Yes
CloudLinux 7.1 and later 64-bit Yes Yes
Virtuozzo Linux 7 64-bit Yes Yes

* – You need to enable the “Optional” channel to install Plesk Obsidian on Red Hat Enterprise Linux.

** – Plesk only supports Debian and Ubuntu servers running the “systemd” init system. Compatibility with “sysvinit” has been neither tested nor guaranteed.

Notes:

  1. Before you start a Plesk installation, ensure that the package manager repositories (apt/yum/zypper) are configured and can be accessed from the server.
  2. Currently Plesk supports CloudLinux, CentOS, and Red Hat Enterprise Linux. Plesk supports all available minor versions of these three OSs.

Windows

Plesk Obsidian for Microsoft Windows can run on the following operating systems:

Operating system SNI support IPv6 support
Windows Server 2012 (64-bit, Standard, Foundation, Datacenter editions), including Server Core installations Yes Yes
Windows Server 2012 R2 (64-bit, Standard, Foundation, Datacenter editions ), including Server Core installations Yes Yes
Windows Server 2016 (64-bit, Standard, Foundation, Datacenter editions ), including Server Core installations Yes Yes
Windows Server 2019 (64-bit, Standard, Foundation, Datacenter editions), including Server Core installations Yes Yes

Plesk no longer supports Windows Server 2003; we recommend that you pick a more recent version of Windows Server according to the life cycle policy.

Note that, per the Plesk life cycle policy, support for Windows Server 2008 ceased on January 13, 2017, and some Plesk features are not supported on Windows Server 2008. Plesk currently recommends that you use Windows Server 2012 R2 or later for running Plesk on Windows.

You must configure a static IP address on the OS before you install Plesk for Windows.

Plesk for Windows only supports the NTFS file system; this is an essential element of the Plesk software requirements for Windows.

Support for ASP (active server pages) and FrontPage Server Extensions requires manual installation – you must install these components yourself.

To install on Windows Server 2008 you must first acquire and install Windows Installer 4.5, available from Microsoft.

Using Microsoft SQL Server in Plesk for Windows requires that Microsoft SQL Server is configured to use either standard security mode or mixed security mode. If Microsoft SQL Server is not already on your machine, you can install it while you install Plesk for Windows. It will be configured with the username “sa” and a randomly generated password.

Plesk Installation Requirements

When installing Plesk, pay attention to the following installation requirements:

CloudLinux Support

Note that link traversal protection on CloudLinux can cause many different Plesk issues. To avoid them when link traversal protection is enabled, disable the fs.protected_symlinks_create kernel option first.
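A minimal sketch of disabling it, assuming a CloudLinux kernel that exposes the option via sysctl:

# Disable link traversal protection for the running kernel.
sysctl -w fs.protected_symlinks_create=0

# Persist the setting across reboots.
echo "fs.protected_symlinks_create = 0" >> /etc/sysctl.conf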

Active Directory Domain Controllers Support

We recommend that Plesk is not installed on a server that also acts as a primary or backup domain controller. If you do install it on one, you may find that the server crashes when domains with certain names are created.

AppArmor Support

Plesk Obsidian supports AppArmor on Ubuntu 14.04 and Ubuntu 16.04 only. Before installing Plesk Obsidian on Ubuntu 12.04 or any supported Debian version, disable AppArmor.
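On a systemd-based server, disabling AppArmor before installation might look like this (run as root):

systemctl stop apparmor
systemctl disable apparmor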

Supported browsers

The following browsers are supported:

Desktop

  • Mozilla Firefox (latest) for Windows and Mac OS
  • Microsoft Internet Explorer® 11.x for Windows
  • Microsoft Edge® for Windows 10
  • Apple Safari (latest) for Mac OS
  • Google Chrome (latest) for Windows and Mac OS
  • Opera (latest) for Windows and Mac OS

Smartphones and Tablets

  • Chrome Mobile
  • Default browser (Safari) on iOS 8
  • Default browser on Android 4.x
  • Default browser (IE) on Windows Phone 8

Supported virtualization

The following virtualization platforms are supported:

  • VMware
  • XEN
  • Virtuozzo 7
  • OpenVZ
  • KVM
  • Hyper-V
  • LXC

Notes:

  1. Support includes ensuring that Plesk functions properly; it also includes discounted licenses for “virtual servers”.
  2. Note that your license key may restrict you to running Plesk on a specific platform only. Your license key may be considered invalid in a different environment; however, some functions, such as Plesk repair and installation, will keep working.

Earlier Versions Supported for Upgrade

Plesk Obsidian supports upgrade from the following earlier versions:

  • Plesk Onyx 17.8 for Linux/Windows (x64 only)
  • Plesk Onyx 17.5 for Linux/Windows (x64 only)
  • Plesk Onyx 17.0 for Linux/Windows (x64 only)

Source Hosting Platforms Supported for Migration

Configuration and content from the following hosting platforms can be imported into Plesk Obsidian:

  • Plesk for Linux and Plesk for Windows: 8.6, 9.5, 10.4, 11.0, 11.5, 12.0, 12.5, and Plesk Onyx.
  • cPanel 11.5
  • Confixx 3.3
  • Helm 3.2
  • Plesk Expand 2.3.2
  • Parallels Pro Control Panel for Linux 10.3.6

Plesk Supported Components

Supplied Components

Linux

Plesk Obsidian for Linux distribution packages include the following components:

  • Plesk Premium Antivirus 6.0.2
  • Kaspersky Anti-Virus 8.5.1.102
  • ImunifyAV
  • AWStats 7.7
  • ProFTPD 1.3.6c
  • qmail 1.03
  • Courier-IMAP 5.0.8
  • Postfix 3.4.8 (for most OSes), 2.11.11 (CentOS 6, Red Hat Enterprise Linux 6, and CloudLinux 6)
  • Dovecot 2.3.10.1
  • Horde IMP 5 (requires PHP 5.3)
    • Horde 5.2.23
    • IMP 6.2.24.1
    • Ingo 3.2.16
    • Kronolith 4.2.29
    • Nag 4.2.19
    • Mnemo 4.2.14
    • Passwd 5.0.7
    • Turba 4.2.25
    • Pear 1.10.9
  • Roundcube 1.4.7
  • phpMyAdmin 5.0.2
  • nginx 1.18.0
  • OpenSSL 1.0.2r
  • OpenSSL 1.1.1g (used by nginx)
  • TLS 1.3 (in nginx for customers’ websites)
  • PHP 5.2.17, 5.3.29, 5.4.45, 5.5.38, 5.6.40, 7.0.33, 7.1.33, 7.2.33, 7.3.21, 7.4.9.
    Note: making changes to the /usr/local/psa/admin/conf/php.ini file may result in Plesk failing to operate properly.
  • Fail2ban 0.10.3.1
  • ModSecurity 2.9.3
  • ModSecurity Rule Set 2.2.9-30-g520a94b
  • Resource Controller (for CentOS 7, Debian 8, and Ubuntu 16 servers)
  • Node.js 4.6.1, 6.14.1, 7.0.0, 8.16.0, 9.0.0, 10.0.0, 12.0.0.
    Note: on CentOS 6, Debian 7.x, and Ubuntu 12.x, Node.js 12 is not supported.
  • Phusion Passenger 6.0.2
  • Ruby 2.1.10, 2.2.10, 2.3.8, 2.4.6, 2.5.5, 2.6.3.
    Note: on Debian 9, only Ruby 2.4.6 and later is supported.
  • Bundler 1.13.5
  • Rootkit Hunter 1.4.4

Windows

Plesk Obsidian for Microsoft Windows distribution packages include the following components:

  • Plesk Premium Antivirus 6.0.2
  • Kaspersky Anti-Virus 8.6.1.51
  • Microsoft SQL Server Express 2012 SP3
  • Microsoft SQL Server Express 2014 SP2
  • Microsoft SQL Server Express 2016 SP1
  • Microsoft SQL 2017 Express
  • MariaDB 10.3.23 (for Plesk database)
  • MariaDB 10.3.23 (for customer websites)
  • BIND DNS Server 9.16.4
  • MailEnable Standard 10.27
  • PHP 5.2.17, 5.3.29, 5.4.45, 5.5.38, 5.6.40, 7.0.33, 7.1.33, 7.2.33, 7.3.21, 7.4.9
  • ASP.NET Core 2.1.20, 3.1.6
  • .NET Core 3.1.3, 2.1.17
  • Webalizer V2.01-10-RB02 (Windows NT 5.2) English
  • Horde 5.2.23 and IMP 6.2.23
  • Microsoft Web Deploy 3.5 + WebMatrix 3.0
  • Microsoft Web Deploy 3.6
  • IIS URL Rewrite Module 7.2.1993
  • Node.js 4.6.1, 6.14.1, 8.16.1, 9.0.0, 10.21.0, 12.18.0
  • 7zip 18.05
  • Microsoft Visual C++ 2017
  • ionCube Loader 5.0.21
  • SpamAssassin 3.0-3.4.4
  • myLittleAdmin 3.8.20
  • phpMyAdmin 5.0.2
  • AWStats 7.7

Supported Third-Party Components

Linux

Web servers:

  • Apache 2.2, 2.4

Mail servers:

  • Postfix 2.11.3

DNS servers:

  • BIND 9.8–9.11

Web statistics:

  • Webalizer 2.x–3.x

Web scripting:

  • mod_perl 2.0.8
  • mod_python 3.3.1 *
  • PHP 5.2–7.3.6 **

Database servers and tools:

  • MySQL 5.1–5.7
  • MySQL community edition 5.5, 5.6, 5.7
  • PostgreSQL 8.4–10
  • MariaDB 5.5-10.3.17
  • MariaDB Connector 3.0.9

Anti-spam tools:

  • SpamAssassin 3.0–3.4

* – mod_python is not supported on Red Hat Enterprise Linux 7.x, CentOS 7.x, and CloudLinux 7.x.

** – the PHP used by Plesk for its webmail functionality (Roundcube, Horde) will be sourced from the repository supplied by your operating system vendor. Optionally, install PHP from a different repository by following the instructions of the repository vendor. Note that the package name must stay the same. If the package name is different (e.g. Webtatic or IUS) Plesk webmail will not work correctly. You will also risk issues with dependencies under future updates.

Windows

This list of third-party components is abridged. It does not include components which come with the Plesk distribution – these are already stipulated as supported because they are included in the distribution.

Web servers

  • Microsoft Internet Information Services (IIS) 7.5, 8.0, 8.5, 10.0

Mail servers

  • MailEnable Standard / Professional / Enterprise / Enterprise Premium 6.91–10.27
  • SmarterMail 100, 16.3
  • IceWarp Mail Server 12.0.3.1

Webmail tools

  • MailEnable Web Client
  • SmarterMail Web Client
  • IceWarp (Merak) Mail Server Web Client

Spam filtering tools

  • SmarterMail Spamfilter
  • IceWarp (Merak) Mail Server Anti-Spam

Antivirus tools

  • SmarterMail Anti-Virus
  • IceWarp (Merak) Mail Server Anti-Virus

DNS servers

  • Microsoft DNS Server
  • Simple DNS Plus 6.0.115

Web statistics

  • SmarterStats 11.1

Web scripting

  • ASP
  • ASP.NET 2.0-4.x
  • ASP.NET Core 2.1, 2.2.2
  • Python 2.7.17

Database servers

  • Microsoft SQL Server 2005–2016
  • MySQL community edition 5.5, 5.6, 5.7
  • MySQL ODBC connector 5.3.14

Plesk on Cloud Platforms

Plesk Obsidian is available on, and compatible with, the following cloud platforms:
Platform AWS Azure Google Alibaba Lightsail DigitalOcean Linode
CentOS 7 (WebHost) 18.0 18.0 18.0 17.8.11
CentOS 7 (BYOL*) 18.0 18.0 18.0 17.8.11 18.0
Ubuntu 16.04 (WebHost) 17.8.11
Ubuntu 16.04 (BYOL*) 17.8.11 18.0
Ubuntu 18.04 (WebHost) 18.0 18.0 18.0
Ubuntu 18.04 (BYOL*) 18.0 18.0 18.0 18.0
Windows 2012 R2 (WebHost) 17.8.11
Windows 2012 R2 (BYOL*) 17.8.11
Windows 2019 (WebHost) 18.0 18.0 18.0
Windows 2019 (BYOL*) 18.0 18.0 18.0
Plesk WordPress Edition – CentOS 7 18.0 18.0 18.0
Plesk Business & Collaboration Edition – CentOS 7 18.0 18.0 18.0
Web Admin SE – CentOS 7 18.0
Web Admin SE – Ubuntu** 18.0 17.8.11 18.0 18.0
  • * – “BYOL” stands for “Bring Your Own License”. As soon as Plesk is deployed, you can either use it with a 14-day trial license or buy your own Plesk license.
  • ** – Ubuntu 16.04 for Alibaba and Lightsail; Ubuntu 18.04 for DigitalOcean

Tech Skills for a Changing World: The 5 Most Popular Plesk University Courses

The world is ever-changing, especially when it comes to technology. Gaining tech skills strengthens your resume and teaches you expertise in your chosen area of specialization. Online courses can help orient you by providing knowledge and expertise in specific technology fields. These courses also bring you close to real-world scenarios you can encounter so you get a practical understanding of how to troubleshoot problems. There are many reasons why training in technology skills works for your benefit. Let’s take a look at these reasons.

Why Should You Specialize in Tech Skills?

Specializing in tech skills proves that you are striving for both personal and professional development. It can help you irrespective of your current career level. If you’re just starting out, then you can undergo training to show your passion for learning and your interest in a particular tech field. If you’re more experienced, then advanced training will help you move forward in a changing world.

Tech training programs can help broaden your network as you will be in contact with both young aspirants as well as experienced professionals. Exposure to a broad network will open new opportunities for you.

Organizations always look for individuals who bring value to their company. Successful companies appreciate a continuous training mindset. Enrolling in tech training, taking tests or earning any certificates shows companies that you’re highly trained in a particular area. Your track record of learning new skills will help you establish professional credibility.

Plesk University offers many courses that can give you the ins and outs of different Plesk products. With some courses, you can also earn certifications. Check out our leaderboard of companies that have Plesk certified professionals. Did you know that access to all courses and exams in Plesk University is free? So, set yourself apart from your peers, choose your area of expertise, and join one of our most popular Plesk University courses. If you’re uncertain of what to choose, we’ve prepared a guide to 5 of our top courses and how they can benefit you.

The Top 5 Plesk University Courses

1. The Plesk Professional Course

In the Plesk Professional interactive course, you learn how to install Plesk Obsidian and use it to provide hosting services to your customers. You may well ask what value this will add to your life, business, or resume. But don’t be dismissive – there are many reasons why knowing how to work with server management platforms is important. And it’s proven by a whopping 1,767 course completions.

First of all, technology has completely changed the way businesses work today. However, if you have a small business or are a solopreneur, you can’t always afford to have a separate IT department to manage all your digital processes. Who upgrades and monitors the servers and who troubleshoots problems as they arise? To ease these processes, you can either go to server management professionals who help in setting up and monitoring these crucial processes or you can learn the skills on your own. 

Let’s look at how server management is useful in more detail. You can use server management platforms to cut down costs: instead of hiring people separately to manage servers, monitoring and management can be done by the platform. Hiring can also be a challenging task for small businesses, as finding the right staff always takes time. Depending on the pricing package, using a server management platform can result in comparatively lower staffing costs. The best server management platforms also often provide you with the best support: on-call support with fast response times is usually included in the packages offered.

Additionally, as your company grows larger or if you’re planning to scale, the need for more servers grows, and more staff is required to maintain and monitor server activities. Server management platforms are specifically dedicated to maintaining a greater number of servers. Knowing how to work with them saves you time and money, and increases efficiency. An optimized server management platform automates regular administrative tasks, which can be time-consuming, freeing up your time for other essential tasks.

2. The Plesk Associate Course

In the Plesk Associate course, you learn how to bundle infrastructure, Plesk, extensions, and services to create a managed WordPress hosting solution. Let’s take a look at how Managed WordPress solutions are a crucial add-on to the portfolio of services you can offer your customers. Or maybe you want to host your own WordPress. Either way, this is the course that can teach you the skills to do so and 1,461 course graduates agree.

Why is it useful to have Managed WordPress solutions? You need Managed WordPress to use WordPress efficiently, as these solutions have the required resources and technology to maintain and update WordPress websites. WordPress holds a 35% market share of all websites in the world, so companies big and small want a management team that specializes in that Content Management System (CMS). Managed WordPress offers you many benefits, the most important of which are security, performance, and expertise on the platform.

Ever hosted your own WordPress and woken up one day to find all those warnings and notifications for updates? Expertise is a crucial attribute when using WordPress. You may face extremely sophisticated security threats and downtime, which will ultimately affect your online presence and website performance. Managed WordPress acts as your hosting expert and takes care of these problems for you in the least possible time so that any losses are minimized. Wouldn’t it be great if you could gain this expertise?

The above points bring us to the next top Plesk University course:

3. The WordPress Toolkit Course

In the Plesk WordPress Toolkit course, you learn how easy it is to deploy, secure, and maintain a WordPress website with the WordPress Toolkit extension for Plesk. 

When talking about WordPress hosting, one of the first critical issues to address is security. A good Managed WordPress solution will provide you with the best security. Regular security scans help remove any malware present on your website, while hackers, bots, and other threats are tackled with the help of a specialized environment. The WordPress Toolkit is configured to do all of this for you with very little action needed on your part. This is also a benefit when you want to offer Managed WordPress to your customers.

Additionally, if your business is growing or your customers are rapidly scaling, website traffic is going to fluctuate rapidly as well. Most likely you will require resources to keep the website running smoothly and avoid downtime if you’re scaling. Managed WordPress solutions take care of the smooth running of websites in such conditions. Crashing and downtime of the site can result in the loss of money and reputation of your brand and you want to avoid that at all costs. 

Last but not least is the issue of regular WordPress updates. WordPress releases updates on a regular basis. Keeping up with these updates is crucial for the high performance of your website. Excellent hosting management will help your site to cope with the updates and monitor how each update is impacting the website through automated tests. You’ll know if any issue is detected and have some action steps to follow to avoid downtime or security threats. Regular software updates also ensure high security for your website.

So, if you go for the WordPress Toolkit course, you’ll learn the tools to deal with the three most important issues – security, avoiding downtime, and regular updates – both for yourself and your business or your clients and customers.

4. The Plesk Obsidian: What’s New Course

The fourth course on our list is the Plesk Obsidian: What’s New course. This course is regularly updated and it showcases all the new features and changes in Plesk Obsidian. 

If you’re using Plesk every day and it’s a big part of your role, then this is the course for you. And if you haven’t kept up to date with Plesk Obsidian, now’s your chance. If you want to take a look at all the new features of Plesk Obsidian before signing up for the course, you can check out this guide.

And so, on to the last course.

5. The SEO Toolkit Course

The fifth and final course on our list is the SEO Toolkit course. In this course, you learn how to use the SEO Toolkit extension to make your websites more visible by improving their search engine optimization (SEO).

Why is SEO important for your online presence? SEO helps in creating more visibility for your website or business. Good SEO management can drive more traffic to your website and a higher rank on search engines. SEO experts study and observe patterns that lead to higher rankings and the correct SEO tool can give you insights about developing better SEO strategies. Higher rankings lead to increased brand awareness, generating more leads, and ultimately increasing your sales revenue. With SEO tools, you also observe your site analytics, helping you to know your customers better and aligning your offers with their needs. You can also get to know where your SEO game is lacking.

So, there you have it. A wrap-up of the top 5 Plesk University courses for you to take this year. Build your skills and add more pizazz to your resume to take you to the next level! If you’ve taken any Plesk University course, let us know in the comments below. 

Until next time, arrivederci.

NGINX vs Apache – Which Is the Best Web Server in 2020?

NGINX vs Apache – which server is superior? NGINX and Apache are two of the biggest open source web servers worldwide, handling more than half of the internet’s total traffic. They’re both designed to handle different workloads and to complement various types of software, creating a comprehensive web stack.

But which is best for you? They may be similar in many ways, but they’re not identical. Each has its own advantages and disadvantages, so it’s crucial that you know when one is a better solution for your goals than the other.

In this in-depth guide, we explore how these servers compare in multiple crucial ways, from connection handling architecture to modules and beyond.

First, though, let’s look at the basics of both Nginx and Apache before we take a deeper dive.

NGINX Outline

NGINX came about because of a grueling test: getting a server to handle 10,000 client connections all at the same time. It uses an asynchronous, event-driven architecture to cope with this prodigious load. And its design means that it takes high loads, and wildly varying loads, in its stride, with predictable RAM usage, CPU usage, and latency.

NGINX is the brainchild of Igor Sysoev, who began work on it in 2002. He saw it as a solution to the C10K problem: web servers struggling to handle thousands of connections at the same time. He released it initially in 2004, and this early iteration achieved its objective through an event-driven, asynchronous architecture.

Since its public release, NGINX has remained a popular choice, thanks to its lightweight use of resources and its ability to scale simply, even on minimal equipment. As fans will testify, NGINX is excellent at serving static content quickly and efficiently, because it is designed to pass dynamic requests on to other software better suited to that purpose.

Administrators tend to choose NGINX because of such resource efficiency and responsiveness.

Apache Outline

Robert McCool is credited with creating the Apache HTTP Server back in 1995, and since 1999 it has been managed and maintained by the Apache Software Foundation. Apache HTTP Server is generally known simply as “Apache”, the HTTP web server being the foundation’s initial, and most popular, project.

Since 1996, Apache has been recognized as the internet’s most popular server, which has led to Apache receiving considerable integrated support and documentation from subsequent software projects. Administrators usually select Apache because of its power, wide-ranging support, and considerable flexibility.

It can be extended through its dynamically loadable module system, and it can process various interpreted languages with no need to connect to external software.

Apache vs NGINX – Handling Connections

One of the most significant contrasts between Nginx and Apache is their respective connection- and traffic-handling capabilities.

As NGINX was released after Apache, its team had greater awareness of the concurrency issues plaguing sites at scale, and they built NGINX from scratch around a non-blocking, asynchronous, event-driven algorithm for handling connections. NGINX spawns worker processes, each capable of handling thousands of connections. Every worker runs a fast event loop that continuously checks for and processes events; because actual work is decoupled from the connections themselves, a worker only attends to a connection when a new event occurs on it.

Every connection handled by a worker sits in that event loop alongside numerous others. Events inside the loop are processed asynchronously, so work is handled in a non-blocking way, and whenever a connection closes it is taken out of the loop. This style of connection processing lets NGINX scale extremely far on limited resources: since the server doesn’t spawn a new process or thread for every connection, CPU and memory utilization stay fairly consistent even during periods of heavy load.
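This model is exposed through a couple of directives in nginx.conf; a minimal sketch, with illustrative values:

worker_processes auto;        # one single-threaded worker per CPU core
events {
    worker_connections 10240; # connections each worker's event loop may hold
}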

Apache, by contrast, offers a number of multi-processing modules (MPMs), which are responsible for determining how client requests are handled. This enables administrators to switch its connection-handling architecture simply, quickly, and conveniently.

So, what are these modules?

mpm-prefork

This Apache module creates processes, each with a single thread, to handle requests; every child process can accommodate one connection at a time. Provided the number of requests remains lower than the number of processes, this module delivers extremely fast performance.

But performance can drop seriously once requests outnumber processes, so this module isn’t always the right option.

Every process with this module has a major effect on RAM consumption, too, which makes it hard to achieve effective scaling. However, it could still be a solid choice when used alongside components built without consideration of threads. For example, as PHP lacks thread safety, this module could be the best way to work safely with mod_php, Apache’s module for processing PHP files.

mpm_worker

Apache’s mpm_worker module spawns processes which each manage numerous threads, with each thread handling one connection. Threads prove more efficient than processes, so this MPM offers stronger scaling than the prefork module discussed above.

Since threads outnumber processes, fresh connections can immediately take up a free thread rather than waiting for a suitable process to come along.

mpm_event

Apache’s third module can be considered similar to the mpm_worker module in the majority of situations, but it’s been optimized to accommodate keep-alive connections. With the worker module, a connection holds on to a thread for as long as it remains alive, whether or not requests are actively being made; mpm_event avoids this by handing keep-alive connections off to dedicated threads.

It’s clear that Apache’s connection-handling architecture offers considerable flexibility in selecting connection- and request-handling algorithms. The options on offer are primarily a result of the server’s continued evolution and the growing demand for concurrency as the internet has changed so dramatically.
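On Debian-family systems you can see which MPM is active and swap it with the stock helper scripts; a sketch, assuming Apache 2.4 from distribution packages (note that mpm_event cannot be combined with mod_php):

# Show the MPM compiled into the running binary.
apachectl -V | grep -i mpm

# Switch from prefork to event and restart.
a2dismod mpm_prefork
a2enmod mpm_event
systemctl restart apache2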

Apache vs NGINX – Handling Static and Dynamic Content

When pitting Nginx vs Apache, their ability to handle static and dynamic content requests is a common point of comparison. Let’s take a closer look.

NGINX is not designed for native processing of dynamic content: it has to pass PHP and other dynamic content requests to an external processor. It waits for the rendered content to be returned, then relays the results back to the client.

Communication has to be set up between NGINX and the processor over a protocol which NGINX can accommodate (e.g. FastCGI, HTTP, etc.). This can make things a little more complicated than administrators may prefer, particularly when estimating how many connections to allow, since an extra connection is needed for every call to the relevant processor.

Still, there are some benefits to this method. As the dynamic interpreter isn’t embedded in the worker process, its overhead applies only to dynamic content, while static content can be served by a simpler process in which the interpreter is contacted only when necessary.
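For example, handing PHP requests off to a PHP-FPM pool over FastCGI typically takes a location block like the following; the socket path is an assumption and varies by distribution:

location ~ \.php$ {
    include fastcgi_params;                      # standard FastCGI variables
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php7.4-fpm.sock;  # the external PHP processor
}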

Apache servers handle static content with traditional file-based methods, and their performance here is primarily a function of the MPM methods covered earlier.

But Apache is designed to process dynamic content too, by integrating a processor of suitable languages into every worker instance. As a result, Apache can accommodate dynamic content in the server itself, with no need to depend on any external components. These can be activated courtesy of the dynamically-loadable modules.

Apache’s internal handling of dynamic content allows it to be configured more easily, and there’s no need to coordinate communication with other software. Modules may be swapped out if and when requirements for content shift.

NGINX or Apache – How Does Directory-level Configuration Work?

Another prominent difference administrators bring up when discussing Apache vs NGINX relates to directory-level configuration, and whether it’s allowed in their content directories. Let’s explore what this means, starting with Apache.

With Apache, additional configuration is permitted on a per-directory level, through the inspection of hidden files within content directories and the interpretation of their directives. These files are referred to as .htaccess.

As .htaccess files are located inside content directories, Apache checks every component on the route to files requested, applying those directives inside. Essentially, this allows the web server to be configured in a decentralized manner, typically utilized for the implementation of rewritten URLs, accessing restrictions, authentication and authorization, as well as caching policies.

Though the same settings can be made in Apache’s primary configuration file, .htaccess files hold some key advantages. First and foremost, they’re applied instantly, without the server needing to reload, because they’re interpreted whenever they’re found on a request path.

Secondly, .htaccess files enable non-privileged users to take control of specific elements of their web content without granting them complete control over the full configuration file.

This creates a simple way for certain software, such as content management systems, to configure environments without giving entry to central configuration files. It’s used by shared hosting providers for maintaining control of primary configurations, even while they offer clients their own directory control.
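A small, typical .htaccess file combines exactly these kinds of directives; a sketch using standard mod_rewrite and mod_authz_core syntax:

# Redirect an old path without touching the main server configuration.
RewriteEngine On
RewriteRule ^old-page$ /new-page [R=301,L]

# Deny direct access to a sensitive file.
<Files "wp-config.php">
    Require all denied
</Files>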

With NGINX, interpretation of .htaccess files is out of the question. It also lacks a way to evaluate per-directory configuration outside the primary configuration file. As a result, it could be said to offer less flexibility than Apache, though this brings a number of benefits too.

Specifically, improved performance is one of the main advantages over the .htaccess directory-level configuration system. In a standard Apache setup that allows .htaccess in any directory, the server checks for these files in every parent directory leading to the requested file, on every request, and any .htaccess files found along the way are read and interpreted.

So, NGINX can serve requests in less time, due to its single-directory searches and file-reads for every request. Of course, this is based on files being located in a directory with a conventional structure.

Another benefit NGINX offers with directory-level configuration relates to security. Distributing configuration access also distributes security responsibility to individual users, who might not all be trustworthy. When administrators retain control of the whole server, there’s less risk of security problems that grant access to people who can’t be relied upon.

How does File and URI-based Interpretation Work with NGINX and Apache?

When discussing Nginx vs Apache, it’s important to remember the way in which the web server interprets requests, and maps them to system resources, is another vital issue.

When NGINX was built, it was designed to function as both a web and a proxy server. The architecture required to fulfil both roles means NGINX works primarily with URIs, translating to the filesystem as required. This is evident in several ways in which its configuration files function.

NGINX has no means of determining filesystem directory configuration, so it’s designed to parse the URI. NGINX’s main configuration blocks are server and location blocks: the server block interprets the host being requested, while the location block matches the part of the URI that comes after the host and port. Requests are interpreted as URIs, rather than as locations on the filesystem.

In the case of static files, requests are eventually mapped to a filesystem location. NGINX chooses the location and server blocks for handling the specific request, before combining the document root with the URI. It also adapts whatever’s required, based on the configuration specified.

Because NGINX parses requests as URIs rather than filesystem positions, it can function more simply in various areas – specifically, in the web, proxy, and mail server roles. NGINX is configured by laying out appropriate responses to varied request patterns, and it only checks the filesystem when it’s ready to serve the request. This is why it doesn’t implement .htaccess files.

Apache, by contrast, can interpret requests as physical resources on a filesystem, or as URI locations which demand a more abstract evaluation. Generally, Apache uses <Directory> or <Files> blocks for the former, and <Location> blocks for resources that are more abstract.

As Apache was conceived as a server for the web, its standard behaviour is to interpret requests as traditional filesystem resources. The process starts at the document root, to which Apache appends the portion of the request that follows the host and port number as it attempts to locate an actual file. So, on the web, the filesystem’s hierarchy appears as the available document tree.

Apache provides various alternatives for when requests fail to match the underlying filesystem. For example, an Alias directive may be used to map an alternative location, while <Location> blocks are a way to work with the URI rather than the filesystem. Several expression-based variants may also be used to apply configuration throughout the filesystem with greater flexibility.

As Apache can operate on the webspace and the underlying filesystem alike, it leans more heavily on filesystem methods. This is evident in several of its design choices, such as the use of .htaccess files for per-directory configuration. The Apache documentation itself advises against using URI-based blocks to restrict access when a request matches the underlying filesystem.

NGINX vs Apache: How Do Modules Work?

When considering Apache vs NGINX, bear in mind that they can be extended with module systems, though they work in significantly different ways.

NGINX modules have to be chosen and compiled into the core software, as they cannot be dynamically loaded. Some NGINX users feel it’s less flexible as a result. This may be particularly true for those who are unhappy managing compiled software that sits outside their distribution’s conventional packaging system.

Even though packages typically include the most commonly used modules, you would need to build the server from source if you need a non-standard module. Still, this makes NGINX incredibly lean, letting users dictate what they want from their server by including only the functionality they plan to utilize.

For many people, NGINX seems to offer greater security as a result of this, since arbitrary components cannot be connected to the server. However, if the server is ever in a scenario where this appears likely, it may have been compromised already.

Furthermore, NGINX modules offer rate limiting, geolocation, proxying support, rewriting, encryption, mail functionality, compression, and more.

With Apache, the module system lets users load or unload modules dynamically based on their individual needs. Modules may be switched on and off while the Apache core remains present at all times, so you can add or remove extra functionality and hook into the main server.

With Apache, this functionality is utilized for a wide range of tasks, and as this platform is so mature, users can choose from a large assortment of modules. Each of these may adjust the server’s core functionality in various ways, e.g. mod_php embeds a PHP interpreter into all of the running workers.

However, modules aren’t restricted to processing dynamic content: their functions include client authentication, URL rewriting, caching, proxying, encryption, compression, and more. With dynamic modules, users can expand core functionality significantly, with no need for extensive extra work.

NGINX or Apache: How do Support, Documentation, and Other Key Elements Work?

When trying to decide between Apache and NGINX, another important factor to bear in mind is how easy it is to actually get set up, and the level of support available alongside other software.

The level of support for NGINX is growing, as a greater number of users continue to implement it. However, it still has some way to go to catch up with Apache in certain areas.

Once upon a time, it was hard to gather detailed documentation for NGINX (in English), as the majority of its early documentation was in Russian. However, documentation has expanded since interest in NGINX has grown, so there’s a wealth of administration resources on the official NGINX website and third parties.

On the topic of third-party applications, documentation and support are easier to find. Package maintainers are increasingly offering a choice between auto-configuring for NGINX or Apache. It’s easy to configure NGINX to complement other software without any support, as long as the specific project documents its requirements clearly (such as headers, permissions, etc.).

Support for Apache is fairly easy to find, as it’s been such a popular server for such a long time. An extensive library of first- and third-party documentation is on offer out there, for the core server and task-based situations that require Apache to be hooked up with additional software.

As well as documentation, numerous online projects and tools include the means to be bootstrapped within an Apache setting, whether in the projects themselves or in the packages managed by the team responsible for the distribution’s packaging.

Apache receives decent support from external projects mainly due to its market share and the sheer number of years it’s been operating. Administrators are also more likely to have experience with Apache, not just because it’s so prevalent, but because a lot of them begin in shared-hosting scenarios which rely on Apache for its .htaccess-based distributed management capabilities.

NGINX vs Apache: Working with Both

Now that we’ve explored the advantages and disadvantages of NGINX and Apache, you should be in a better position to judge whether Apache or NGINX is best for you. But a lot of users discover they can leverage both servers’ benefits by using them together.

The traditional configuration for using NGINX and Apache in unison is to position NGINX in front of Apache as a reverse proxy, so that it accommodates every client request. Why is this important? Because it takes advantage of NGINX’s quick processing speed and its ability to handle a lot of connections at the same time.

For static content, NGINX is a fantastic server: files are served directly and quickly to the client. For dynamic content, NGINX proxies requests to Apache to be processed; Apache returns the rendered pages, and NGINX then sends the content back to the clients.

Plenty of people find this the ideal setup, as it lets NGINX perform as a sorting machine, handling all requests itself and passing on only those it has no native ability to serve. Reducing the volume of requests that reach Apache reduces the blocking that occurs when Apache threads or processes are occupied.

With this configuration, users can scale out by adding extra backend servers as required. NGINX can easily be configured to pass requests to a pool of servers, boosting the configuration’s performance and its resistance to failure.
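In configuration terms, the setup boils down to a proxy_pass pointing at the Apache backend; a minimal sketch, with the backend address and paths as assumptions:

server {
    listen 80;
    server_name example.com;

    # Serve static assets straight from disk.
    location ~* \.(css|js|png|jpg|gif|ico)$ {
        root /var/www/example.com;
    }

    # Hand everything else to Apache listening on another port.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}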

Apache vs NGINX – Final Thoughts

It’s fair to say that NGINX and Apache offer quality performance — they’re flexible, they’re capable, and they’re powerful. Choosing which server works best for your needs depends largely on assessing your individual requirements and testing with those patterns you believe you’re likely to see.

A number of differences between these projects have a tangible effect on capabilities, performance, and the time required to implement each solution effectively. But these tend to be the result of numerous trade-offs that shouldn’t be dismissed easily. When all is said and done, there’s no web server that meets everyone’s needs every single time, so it’s best to utilize the solution that suits your objectives best.

Linux Server Security – Best Practices for 2020

Linux Server Security

Linux server security is at a decent level from the moment you install the OS. And that’s great to know because… hackers never sleep! They’re kind of like digital vandals, taking pleasure – and sometimes money too – as they inflict misery on random strangers all over the planet.

Anyone who looks after their own server appreciates the fact that Linux is highly secure right out the box. Naturally, it isn’t completely watertight. But it does do a better job of keeping you safe than most other operating systems.

Still, there are plenty of ways you can improve it further. So here are some practical ways to keep the evil hordes from the gates. It will probably help if you’ve tinkered under the hood of a web server before, but don’t think you have to be a tech guru or anything like that.

Deactivate network ports when not in use

Leave a network port open and you might as well put out the welcome mat for hackers. To maintain web host security, use the “netstat” command to see which network ports are currently open and which services are making use of them. Closing off unused ports shuts down another avenue of attack.

You might also want to set up “iptables” to deactivate open ports, or simply use the “chkconfig” command to shut down services you won’t need. Firewalls like CSF let you automate the iptables rules, so you could just do that. If you use the Plesk platform as your hosting management software, pay attention to this article about Plesk ports.

The SSH port is usually 22, and that’s where hackers will expect to find it. To enhance Linux server security, change it to some other port number you’re not already using for another service. This way, you’ll make it harder for the bad guys to inject malware into your server. To make the change, just edit /etc/ssh/sshd_config and enter the appropriate number.
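Put together, the check and the change might look like this; the new port number is just an example, and the service may be called “ssh” rather than “sshd” on Debian-family systems:

# List open ports and the services using them.
netstat -tulpn

# Move SSH to a non-default port and reload the daemon.
sed -i 's/^#\?Port 22/Port 2222/' /etc/ssh/sshd_config
systemctl restart sshd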

Update Linux Software and Kernel

Half of the Linux security battle is keeping everything up to date, because updates frequently add extra security features. Linux offers all the tools you need to do this, and upgrading between versions is simple too. Every time a new security update becomes available, review it and install it as soon as you can. You can use a package manager such as yum, apt-get, or dpkg to handle this.

# yum update

OR

# apt-get update && apt-get upgrade

It’s possible to set up RedHat / CentOS / Fedora Linux so that yum package update notifications are sent to your email. This is great for Linux security, and you can also apply all security updates via a cron job. Apticron can be used to send security update notifications under Debian / Ubuntu Linux. You can also use the apt-get command/apt command to configure unattended-upgrades for your Debian/Ubuntu Linux server:

$ sudo apt-get install unattended-upgrades apt-listchanges bsd-mailx
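After installing those packages, switching the periodic upgrades on is a single command on Debian/Ubuntu:

$ sudo dpkg-reconfigure --priority=low unattended-upgrades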

Reduce Redundant Software to Increase Linux Security

For greater Linux server security hardening, it’s worth doing a spring clean (at any time of the year) on your installed web services. It’s easy for surplus apps to accumulate, and you will probably find that you don’t need half of them. In the future, try not to install software that you don’t need – it’s a simple and effective way to reduce potential security holes. Use a package manager such as yum, apt-get, or dpkg to go through your installed software and remove anything you no longer need.

# yum list installed
# yum list packageName
# yum remove packageName

OR

# dpkg --list
# dpkg --info packageName
# apt-get remove packageName

Turn off IPv6 to boost Linux server security

IPv6 is better than IPv4, but you probably aren’t getting much out of it – because neither is anyone else. Hackers do get something from it, though, because they use it to send malicious traffic. So shutting down IPv6 will close the door in their faces. Edit /etc/sysconfig/network and change the settings to read NETWORKING_IPV6=no and IPV6INIT=no. Simple as that.
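The resulting file on a RedHat-family server would contain lines like these (restart networking afterwards):

# /etc/sysconfig/network
NETWORKING_IPV6=no
IPV6INIT=no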

Turn off root logins to improve Linux server security

Linux servers the world over allow the use of “root” as a username, and knowing this, hackers will often try subverting web host security to discover your password before slithering inside. It’s because of this that you should not sign in as the root user. In fact, you really ought to remove it as an option, creating one more level of difficulty for hackers and stopping them from getting past your security with just a lucky guess.

All it takes is for you to create a separate username, then use the “sudo” special access command to execute root-level commands. Sudo is great because you can grant it to any users who need admin commands but not root access – you don’t want to compromise security by giving them both.

Before you deactivate the root account, check that you’ve created and authorized your new user. Next, open /etc/ssh/sshd_config in nano or vi, locate the “PermitRootLogin” parameter, change the default setting of “yes” to “no”, and save your changes.
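The whole sequence, sketched for a systemd-based server; the username is an example, and on RedHat-family systems the admin group is “wheel” rather than “sudo”:

# Create the replacement admin user and give it sudo rights.
adduser admin
usermod -aG sudo admin

# Forbid direct root logins over SSH, then reload the daemon.
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
systemctl restart sshd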

GnuPG encryption for web host security

When data is on the move across your network, hackers will frequently attempt to compromise Linux server security by intercepting it. Always make sure anything going to and from your server is protected with encryption, certificates, and keys. One way to do this is with an encryption tool like GnuPG, which uses a system of keys to ensure nobody can snoop on your info while it’s in transit.
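A minimal GnuPG round trip looks like this; the recipient address and file name are placeholders:

# Generate a key pair interactively (use --gen-key on older GnuPG versions).
gpg --full-generate-key

# Encrypt a file for a recipient whose public key you have imported.
gpg --encrypt --recipient admin@example.com backup.tar.gz

# Decrypt it again on the receiving end.
gpg --decrypt backup.tar.gz.gpg > backup.tar.gz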

Change /boot to read-only

All files related to the kernel on a Linux server are in the “/boot” directory. The standard access level for the directory is “read-write”, but it’s a good idea to change it to “read-only”. This stops anyone from modifying your extremely important boot files.

Just edit the /etc/fstab file and add the line LABEL=/boot /boot ext2 defaults,ro 1 2 at the bottom. This is completely reversible, so you can make future changes to the kernel by switching back to “read-write” mode. Then, once you’re done, you can revert to “read-only”.
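The corresponding fstab entry, plus a way to toggle the mode on a live system, might look like this (assuming /boot is an ext2 partition, as in the example above):

# /etc/fstab entry mounting /boot read-only at every boot.
LABEL=/boot  /boot  ext2  defaults,ro  1 2

# Temporarily switch to read-write before a kernel update...
mount -o remount,rw /boot
# ...and back to read-only once you're done.
mount -o remount,ro /boot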

A better password policy enhances Web Host Security

Passwords are always a security problem because humans are. People can’t be bothered to come up with a lot of different passwords – or can’t remember them. So what happens? They use the same ones in different places. Or worse yet – combinations that are easy to remember, like “password” or “abcde”. Basically, a gift to hackers.

Make it a requirement for passwords to contain a mix of upper AND lower case letters, numbers, and symbols. You can enable password ageing to make users discard previous passwords at fixed intervals, and you can also think about banning old passwords, so once people use one, it’s gone forever. The “faillog” command lets you put a limit on the number of failed login attempts allowed and lock user accounts – ideal for preventing brute-force attacks.
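
Password ageing itself is handled by the chage command. A minimal sketch, where the 90-day maximum and 7-day warning are example values:

$ sudo chage -M 90 -W 7 userName
$ sudo chage -l userName

The first command forces a password change every 90 days with a week’s warning; the second lets you review the current ageing policy for that user.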

So just use a strong password all the time

Passwords are your first line of defense, so make sure they’re strong. Many people don’t really know what a good password looks like: it needs to be complex, but also long enough to make it the strongest it can be.

At admin level, you can help users by securing Plesk Obsidian and enforcing the use of strong passwords which expire after a fixed period. Users may not like it, but you need to make them understand that it saves them a lot of possible heartache.

So what are the ‘best practices’ when setting up passwords?

  1. Use passwords that are as long as you can manage
  2. Avoid words that appear in the dictionary (like “blue grapes”)
  3. Steer clear of number replacements that are easy to guess (like “h3ll0”)
  4. Don’t reference pop culture (such as “TARDIS”)
  5. Never use a password in more than one place
  6. Change your password regularly and use a different one for every website
  7. Don’t write passwords down, and don’t share them. Not with anybody. Ever!

The passwords you choose should increase Web Host Security by being obscure and not easy to work out. You’ll also help your security efforts if you give your root (Linux) or RDP (Windows) login its own unique password.

Linux server security needs a firewall

A firewall is a must-have for web host security because it’s your first line of defense against attackers, and you are spoiled for choice. NetFilter is built into the Linux kernel. Combined with iptables, you can use it to resist DDoS attacks.

TCPWrapper is a host-based access control list (ACL) system that filters network access for different programs. It has host name verification, standardized logging and protection from spoofing. Firewalls like CSF and APF are also widely used, and they also come with plugins for popular panels like cPanel and Plesk.
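
As a minimal iptables sketch (the open ports here are examples – adjust them to the services you actually run), a restrictive default policy that still allows SSH and web traffic looks like this:

$ sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
$ sudo iptables -A INPUT -i lo -j ACCEPT
$ sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
$ sudo iptables -A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
$ sudo iptables -P INPUT DROP

Set the DROP policy last, after the ACCEPT rules are in place, or you may lock yourself out of your own SSH session.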

Locking User Accounts After Unsuccessful Logins

For Linux security, the faillog command shows unsuccessful login attempts and can assign limits to how many times a user can get their login credentials wrong before the account is locked. faillog formats the contents of the failure log from the /var/log/faillog database/log file. To view unsuccessful login attempts, enter:

faillog

To open up an account locked in this way, run:

faillog -r -u userName
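
To set the limit itself, use the -m flag – the value of three attempts here is just an example:

faillog -m 3 -u userName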

With Linux security in mind, be aware that you can also use the passwd command to lock and unlock accounts:

To lock a Linux account:

passwd -l userName

To unlock a Linux account:

passwd -u userName

Try disk partitions for better Web host security

If you partition your disks, you’ll be separating OS files from user files, tmp files, and programs. Try disabling SUID/SGID execution (nosuid) and binary execution (noexec) on partitions that don’t need them, such as /tmp.
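
As an example, here is a minimal /etc/fstab sketch for a dedicated /tmp partition – the device name is an assumption, so substitute your own:

/dev/sda3    /tmp    ext4    defaults,nosuid,noexec,nodev    0 2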

Avoid Using Telnet, FTP, and Rlogin / Rsh Services

With the majority of network configurations, anyone on the same network with a packet sniffer can intercept FTP, telnet, or rsh commands, usernames, passwords, and transferred files. To avoid compromising Linux server security, try using OpenSSH, SFTP, or FTPS (FTP over SSL), which gives FTP the benefit of SSL or TLS encryption. To remove outdated services like NIS or rsh, enter this yum command:

# yum erase xinetd ypserv tftp-server telnet-server rsh-server

For Debian/Ubuntu Linux server security, give the apt-get command/apt command a try to get rid of non-secure services:

$ sudo apt-get --purge remove xinetd nis yp-tools tftpd atftpd tftpd-hpa telnetd rsh-server rsh-redone-server

Use an Intrusion Detection System

NIDS, or network intrusion detection systems, keep watch for malevolent activity against Linux server security such as DoS attacks, port scans, and intrusion attempts.

For greater Linux server security hardening it’s recommended that you use integrity checking software before you take a system into a production environment online. You should install AIDE software before connecting the system to a network if possible. AIDE is a host-based intrusion detection system (HIDS) which monitors and analyses a computing system’s internals. You would be wise to use rkhunter rootkit detection software as well.
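
A minimal AIDE sketch looks like this (database file names and paths vary slightly between distributions):

$ sudo aide --init
$ sudo mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
$ sudo aide --check

The first command builds the baseline database, the second activates it, and the third compares the current system state against that baseline.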

Logs and Audits

You can’t manage what you don’t measure, so if you want to stop hackers, your system needs to log every single time intruders try to find a way in. Syslog stores data in the /var/log/ directory by default, and it can also help you identify the surreptitious routes inside that misconfigured software can present.

Secure Apache/PHP/NGINX server

Edit the httpd.conf file and add:

ServerTokens Prod
ServerSignature Off
TraceEnable Off
Options -Indexes
Header always unset X-Powered-By

To restart the httpd/apache2 server on Linux, run:

$ sudo systemctl restart apache2.service

OR

$ sudo systemctl restart httpd.service
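
For NGINX, the closest equivalent – a minimal sketch – is to hide the version string by adding this to the http block of /etc/nginx/nginx.conf and restarting the service:

server_tokens off;

$ sudo systemctl restart nginx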

Activate CMS auto-updates

CMSs are quite complex, so hackers are always trying to exploit security loopholes in them. Joomla!, Drupal, and WordPress are all hugely popular platforms, so developers are constantly working on new security fixes. This means updates are important and should be applied straight away. The best way to ensure this happens is to activate auto-updates, so you won’t even have to think about it. Your host isn’t responsible for the content of your website, so it’s up to you to ensure you update it regularly. And it won’t hurt to back it up once in a while either.

Backup regularly

Regular and thorough backups are probably your most important security measure. Backups can help you recover from a security disaster. Typical UNIX backup programs are dump and restore, and we recommend them. For maximum Linux security, you need to back up to external storage with encryption, which means something like a NAS server or a cloud-based service.
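
As one hedged sketch of what that can look like – the source path, cipher choice, and destination host are all examples – you can stream a tar archive through GnuPG and ship it to external storage:

$ tar czf - /var/www | gpg --symmetric --cipher-algo AES256 -o backup-$(date +%F).tar.gz.gpg
$ scp backup-$(date +%F).tar.gz.gpg backup@nas.example.com:/backups/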

Protect Email Directories and Files

These Linux security tips wouldn’t be complete without telling you that Linux has some great ways to protect data against unauthorized access. File permissions and MAC (mandatory access control) are great at stopping intruders from getting at your data, but all the Linux permissions in the world don’t count for anything if they can be circumvented – for instance, by transplanting a hard drive to another machine. In such a case you need to protect Linux files and partitions with these tools:

  • For password-protected file encryption and decryption, use the gpg command.
  • Both Linux and UNIX can add password protection to files using openssl and other tools.
  • The majority of Linux distributions support full disk encryption. You should ensure that swap is encrypted too, and only allow bootloader editing via a password.
  • Make sure root mail is forwarded to an account that you check.

System Accounting with auditd

Auditd is used for system audits. Its job is to write audit records to the disk. This daemon reads its rules from /etc/audit.rules (or /etc/audit/audit.rules on newer distributions) at start-up. You have various options for amending the rules file, such as setting the location of the audit log file. Auditd will help you gain insight into these common events:

  • Occurrences at system startup and shutdown (reboot/halt).
  • The date and time an event happened.
  • The user who triggered the event (for example, someone attempting to access /path/to/topsecret.dat).
  • The type of event (edit, access, delete, write, update file, and commands).
  • Whether the event succeeded or failed.
  • Events that modify the time and date.
  • Who modified network settings.
  • Actions that change user or group information.
  • Who changed a file, and so on.
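
As a minimal sketch, here is a watch rule on /etc/passwd – the key name “passwd_changes” is just an example label – plus the query that pulls up matching events:

$ sudo auditctl -w /etc/passwd -p wa -k passwd_changes
$ sudo ausearch -k passwd_changes

The -p wa part watches for writes and attribute changes; to make such rules permanent, add them to the audit rules file mentioned above.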

Use Kerberos

Kerberos is a third-party authentication service that aids Linux security hardening. It is based on symmetric-key (shared secret) cryptography, so it needs a key distribution center, and it assumes that packets moving on a non-secure network are readable and writable. Kerberos lets you make remote login, remote copy, secure inter-system file copying, and other risky actions safer, and it also gives you more control over them. Kerberos authentication prevents unauthorized users from spying on network traffic and grabbing passwords.

Linux Server Security Summary

That’s a lot of tips, but you need to keep your Linux server security updated in a world of thieves and vandals. These despicable beings are hard at work all the time, always looking to exploit any chink in a website’s armor. If you give them the slimmest opportunity to disrupt your business, they will happily take advantage of it. Since there’s such a huge army of them, you need to make sure that your castle has extremely strong defenses.

Let us know how many of these tips you have implemented, or if you have any questions in the comments below.

6 Things to Keep in Mind When Choosing an Ideal Server for Big Data Requirements

Big data refers to massive data sets that cannot be processed by typical software or conventional computing techniques. Along with high volume, the term also indicates the diversity of tools, techniques, and frameworks that make it challenging to tackle and process the data. When stored and processed properly, this massive data can offer deep insights to businesses. There are a number of ways in which big data can help businesses grow at an accelerated rate.

How Can Businesses Benefit From Big Data?

Businesses can store and process high volumes of data from diverse internal and external sources – like company databases, social networks, and search engines – to generate excellent business ideas. Big data can also allow them to forecast events that have a direct impact on business operations and performance. On the marketing front, it can help you increase conversion rates by offering only relevant schemes, launches, and promo offers to customers based on their buying behavior. Progressive companies are using big data for new product development, understanding market conditions, and utilizing present and upcoming trends for direct business benefits.

The Role of the Server in Big Data

To enjoy the optimum business benefits of big data, it’s important to choose hardware that can proactively assist big data operations without significantly inflating costs or complications. There are challenges to address, like determining the processing requirements, storing high-volume data at superfast speed, and supporting massive simultaneous computations without compromising the output. An important part of this strategy is choosing the right type of server.

Standard servers generally lack the resource volume and technical configuration required for various big data operations. So you would need premium, purpose-built servers that are specially tailored to accommodate the massive data volume as well as support the computational, analytical, and processing tasks. However, the final decision should be based on your specific requirements, as no two customers are the same. You can find additional information on big data hosting in this previous article.

In this blog we are going to present some of the major factors to keep in mind while deciding on the ideal server for optimum big data benefits:

1. Choose Servers with High Capacity

The ideal properties of a big data server are massive storage, ultra-fast retrieval, and high-end analytical capability. So, you need servers that have the right configuration and capacity to meet all these requirements without any compromise.

  • Volume. As the name suggests, big data feeds on loads of data that can go up to petabytes. For the uninformed, a single petabyte is equal to 1,000,000 GB. So, make sure that your server can not only handle this massive amount of data but can also continue working consistently while handling it.
  • Real-Time Analysis. The USP of big data is organizing and structuring a huge volume of diverse, unstructured data and seamlessly combining it with the available structured data. So, you would need servers with very high processing capacity to handle this requirement efficiently and without fail.
  • Retrieval capabilities. Big data has big objectives too – for instance, real-time stock trading analysis, where even a fraction of a second matters a lot and can introduce multiple changes. For that, your server should fully support multiple users who are concurrently adding multiple inputs every second.

2. Sufficient Memory

RAM is one of the prime requirements for big data analytics tools and applications. Using RAM instead of storage significantly accelerates processing speed and helps you gain more output in relatively less time. That translates to better productivity and quicker time-to-market – two factors that offer you a competitive edge in the industry. Due to varying requirements in terms of volumes and operations, it is not possible to advise a typical RAM volume. However, to be on the safer side, it is good to go with at least 64 GB of RAM. Readers are advised to discuss their requirements with providers to learn the ideal memory configuration for their purpose.

3. Better RoI with NoSQL Databases, MPP and MapReduce

You also need to assist your clients in neatly segregating their analytical and operational requirements, which means wisely optimizing the server hardware for the purpose. It is best to go for NoSQL databases.

Unlike traditional databases, NoSQL databases are not limited to a single server but can be spread widely across multiple servers. This helps them deal with tremendous computations by multiplying their capabilities many times over and scaling up instantly to changing requirements in a fraction of a second.

NoSQL databases can be defined as a mechanism that doesn’t use the tabular methodology for saving data. Their non-relational data storage technology efficiently helps businesses overcome the limitations and complexity inherent in traditional relational databases. To end-users, this mechanism offers high-speed scaling at a relatively low cost.

To accelerate analytical big data capabilities, you can rely on MPP (massively parallel processing) databases and MapReduce. These can significantly outscale traditional single servers. You may also look for NoSQL systems with inbuilt MapReduce functionality, which allows them to scale out to the cloud or a cluster of servers.

4. Sufficient Network Capacity

You would need to send massive data volumes to the server, and a lack of network capacity can throttle your operations. Factor in fluctuations as well: you won’t be writing huge data volumes all the time, which means that buying a high-bandwidth plan isn’t necessarily a cost-efficient solution for you. So, opt for bespoke bandwidth solutions that allow you to select the ideal bandwidth to competently fulfill your data transfer requirements.

You can choose different bandwidth packages starting from 20 TB and going up to 1,000 TB per month. To make things easier, you may like to inform your provider about your expected data transfer requirements and ask them about the ideal bandwidth volume. Reputed providers can also offer unmetered bandwidth for more demanding enterprise clients. Depending on the volume and frequency of your data, a 1 Gbps port is the minimum you should require for your server.

5. Purpose-Specific Storage Capabilities

Along with storing permanent data, your server also needs to accommodate huge amounts of intermediate data produced during various analytical processes. So, you would need sufficient data storage. Instead of choosing storage based purely on capacity, think about its relevance for your purpose. Reputed vendors will always suggest you check your requirements before buying storage. For instance, investing huge amounts in expensive SSD storage doesn’t make sense if your data storage requirements are modest and traditional HDDs can serve your purpose at much lower prices.

6. High-End Processing Capacity

The analytics tools related to big data generally divide processing operations across different threads. These threads are distributed across the different cores of the machine and executed simultaneously. For a modest to average load you need 8-16 cores, but you may require more than that depending on the load. The rule of thumb is to prefer a higher number of cores over a smaller number of highly powerful cores if you are looking for more competent performance.

Should You Use Software for Server Optimization to Meet Big Data Requirements?

The big data ecosystem has very specific needs that standard data servers, with their limited capabilities in terms of multitasking, output, and analytical insights, can’t support. They also lack the ultra-speed needed for real-time analytical data processing. So, you would require bespoke enterprise servers that seamlessly adapt to your particular needs in terms of volume, velocity, and diverse logical operations. For massive big data operations, you may need white box servers.

While it’s technically possible to employ software to optimize the server environment, it may prove to be an expensive option in the long run by significantly reducing the RoI.

It also exposes your system to various security risks while at the same time increasing the management hassles like license acquisition/maintenance, etc. Moreover, you would have limited opportunities to fully utilize the available resources and infrastructure. 

On the other hand, using a purpose-specific server for the big data requirements offers multiple benefits like:

  • More operations per I/O that translate to better computational power 
  • Higher capabilities for parallel processing 
  • Improved virtualization power
  • Better scalability
  • Modular design benefits
  • Higher memory
  • Better utilization of the processor

Additionally, specially tailored servers can work smartly in collaboration to assure the best possible utilization, virtualization, and parallel processing. Due to their specific architecture, it’s easier to scale and manage them.

Conclusion

Big data can help your business grow at a very high rate. However, in order to get the best benefits out of your big data strategy, you need to build a purpose-specific ecosystem that also includes ideal hardware.

So, we mentioned some major factors to keep in mind while choosing the ideal server for your big data requirements. And now it’s time for you to let us know in the comments section below how you think you can benefit from them. We want to hear from you!

The Plesk WordPress Toolkit 4.9 Release – What’s New?

We’re happy to announce that the Plesk WordPress Toolkit 4.9.0 release is now available to the general public. As most of you probably know, this year we’ve been pretty busy working on WordPress Toolkit for cPanel. And even though 4.9 is not a huge update in terms of customer features, it certainly has some long-awaited surprises in store. So, let’s dive into the details to see what’s new.

Find out more about the Plesk WordPress Toolkit

Limit the Number of WordPress Installations in Service Plans

Hosters could always limit access to WordPress Toolkit or some of its functionality through Plesk Service Plans. However, it wasn’t possible to set a limit on how many WordPress sites any given user could manage via WordPress Toolkit. This made things unnecessarily harder for some people, because many Managed WordPress hosters do have these site limits as a part of their business. We’ve decided to address this glaring omission in WordPress Toolkit 4.9 and added this limit on the Resources tab of the Service Plan management screen:

Now, it’s possible to directly customize a particular subscription and change the limit. Service Plan add-ons also have this limit available. So, most kinds of possible upsell scenarios are covered.

The website limit affects the ability to install WordPress sites via WordPress Toolkit, add new sites using the Scan feature, and create clones of existing sites. Note that so-called “technical” installations – e.g. clones made by Smart Updates – don’t count towards the site limit, as they’re not visible to users in the interface.

By default, the limit is set to Unlimited, so nothing will change for users out of the box after the update to WordPress Toolkit 4.9. Some of you may ask what happens if the hoster defines a limit that’s lower than the number of sites the customer already has. In this case, the user won’t be able to add more sites, but existing sites won’t suddenly disappear from the interface.

However, if the user removes or detaches a site, it won’t be possible to add another site back if the limit is reached. In other words, you can reduce the number of sites as you see fit. But you can’t increase it beyond the limit set for your subscription:

Configure Default Database Table Name Prefix

WordPress Toolkit generates a random prefix for database table names every time someone installs a new WordPress. This is to alleviate the impact of automated bot attacks looking for vulnerable WordPress databases using the default table prefix. For some users – especially WordPress developers – this behavior is quite annoying. So we added the ability to configure a specific default prefix for database table names that’s applied whenever someone installs WordPress on a server:

Here comes the tricky part. Generating a random prefix for database table names is a security measure in WordPress Toolkit that’s applied automatically during the installation of WordPress. If you set the default prefix back to ‘wp_‘, WordPress Toolkit will respect your choice and will not change the prefix, but it will set the site security status to ‘Danger‘ to tell you that this isn’t secure. This shouldn’t be an insurmountable challenge, as any other predefined prefix (be it ‘wp‘ or ‘wp___‘, or whatever else that is not ‘wp_‘) won’t trigger the security warnings.

If users want to return to the old behavior with a randomized prefix, all they need to do is to leave this field empty. This small QoL (Quality of Life) improvement should provide a number of users with more control over their WordPress Toolkit experience.

Working on WordPress Toolkit for cPanel

We’ve been doing a lot of work on the WordPress Toolkit for cPanel front during the development of WordPress Toolkit 4.9. For instance, we’ve added the capability to update the product in cPanel. And we started to really dig into the security and performance aspects, addressing a lot of issues that both the WordPress Toolkit and cPanel teams found.

Features like Sets and Reseller support were also added in the scope of the current release. We’re actively working on licensing and test infrastructure at the moment. And while there’s still quite a lot of stuff left to do, we can already foresee a finish date. Our WordPress Toolkit for cPanel will be ready for a demo very soon. And we’re already seeing a lot of interest from various partners – woohoo!

Testing Amazon AWS Infrastructure and Other Stuff

There’s another hidden but very important activity that has been going on behind the scenes for quite some time: the initiative to move our regression testing to Amazon AWS infrastructure for extra speed, flexibility, and on-demand availability. This should allow us to test WordPress Toolkit on cPanel as often and as thoroughly as WordPress Toolkit on Plesk.

Using AWS for testing should also allow us to run a suite of tests per each developer commit in the future, bringing us closer to the goal of our “green master” initiative – or in other words, having a product that could be released in a production-ready state at any given time.

Speaking of improving the product, some of the security and performance improvements done in the scope of WordPress Toolkit for cPanel should also affect WordPress Toolkit for Plesk in a positive way. WordPress Toolkit 4.9 includes a number of important customer bugfixes as well.

Future Plans

Our next major release will be Plesk WordPress Toolkit 4.10, tentatively scheduled to launch by the end of summer 2020. This upcoming release coincides with the peak of the vacation season, so we won’t have the manpower to push any groundbreaking changes – they’re reserved for upcoming releases.

However, you can rest assured that WordPress Toolkit 4.10 will include some in-demand customer features, bug fixes, and other interesting stuff on top of changes required for cPanel support. We’re also planning to release a small WordPress Toolkit 4.9.1 update very soon with a couple of new CLI utilities as a part of the CLI completeness initiative. The future of the product looks very busy, so stay tuned for updates – and especially, stay healthy! 

…So that’s all for the Plesk WordPress Toolkit 4.9 release. Remember that our teams are always on the lookout for new features to implement and bugs to squash. And here’s where your feedback is essential. You can share your suggestions or ideas for new functionality through one of our channels – Uservoice, the Plesk Community Discussion Forum, and the Plesk Online Community.

Or while you’re here, you can also leave your feedback in the comments below – our teams have eyes everywhere! Once again, thank you for reading. And cheers from the whole WordPress Toolkit team!