Top 10 PHP CMS Platforms For Developers in 2021


If you didn’t already know, CMS is short for Content Management System. Many different types are available, and they all exist to make creating websites easier for people who never learned to program. Some of these systems are aimed at customers with at least some understanding of code, but the majority are pitched at website owners who just want to get their sites built fast and keep maintenance easy. Since there are so many choices, this article explores a few of the criteria to keep in mind when choosing between the different PHP CMS platforms that are available.

What’s a CMS?

A CMS is an application designed to make website building easy, so you can add different features and manage whatever content you want to populate your site with.

Webpages are usually put together by developers using various languages and technologies like PHP, ASP, HTML, JavaScript, and CSS. A CMS platform does use languages like these, but the website creator doesn’t necessarily need to see them or understand them, because there’s an interface that simplifies all of the stuff that goes on “under the hood”. You can still get your hands dirty with coding if you want to, but if you’re a beginner who just wants to build a blog or a shop for yourself, a CMS will let you do that just by dragging and dropping the various elements into place.

Choosing the Right CMS Platform for Your Website

Before you look for a new car it’s good to make a list of the features that are important to you, and shopping for a CMS is no different. 


Ease of Use

Your ideal CMS should be intuitive, with an interface that does not require you to have a degree in software engineering before you can pick it up. You should be able to understand it quickly so that in no time you’re able to add images, audio clips, and text (along with other things). The interface should allow you to make changes easily, and the tools should be self-explanatory.

Design Templates

One of the strengths of PHP CMS software is the availability of design templates. Some CMSs offer whole galleries of pre-existing designs, which means you don’t have to build them yourself. But it also helps if you can customize them without too much trouble (which in this case means not needing to know any code).

Data Portability

You might not stay with the same host forever, so your ideal platform should come with tools that let you manage your data and move it wherever you need to put it next with relative ease.

Optional Extras

Websites come in all shapes and sizes to suit all pockets and purposes. That’s why there isn’t a one-size-fits-all CMS platform that will suit every single website. One way around this is with extensions and add-ons. These are additional apps that extend the basic set of features that comes with the CMS software. If you think of the CMS as a Swiss army knife, then extensions and add-ons are like extra blades that you can add to make it do more.

User Support

While a good CMS platform will be straightforward to use and easy to pick up in the first place, you’re always going to have questions at some point. Some platforms have very large and loyal fan bases, and you might find that you can pick up all the help you need by consulting with existing users on forums. These good people will usually be only too happy to share their knowledge and experience to benefit others in the community. 

Of course, a really good CMS provider will also offer around-the-clock official support too.  

Pricing considerations

Some CMS platforms are totally free, while others charge you by the month. But even with the free ones, you’re probably going to have to part with some cash for the add-ons and templates. If you don’t want to leave your website at the mercy of the provider, then your web hosting services will also cost something too. That shouldn’t worry you though, because thanks to CMS platforms it’s never been cheaper and simpler for non-experts to get their website off the page and onto the web.

So, keep all of these points in mind and you should be able to track down the perfect PHP-based content management system for your needs. Here is our top 10:

These CMS platforms make traditional development work a lot less of a chore for the developer. Dynamic websites can swell to thousands of pages, and when they do, it’s much easier to manage the process with the best PHP CMS platform, as it can streamline development work in clever ways.


WordPress

WordPress has risen to become one of the best known and most widely used open-source PHP CMSs. It can accommodate lots of apps and is flexible enough to handle a wide range of different user scenarios. It’s as good at providing the foundation for a basic blog as it is a large e-commerce store, and you only have to look to the 75 million currently active websites that rely on it for confirmation of how universally popular it is.

Since WordPress is an open-source platform, it’s benefited from the ongoing attention of thousands of developers. This is one of the biggest reasons for its rapid evolution and why it’s turned into the preferred choice of many web app developers. It offers the widest selection of additional widgets, themes, and plug-ins, and it can be readily tailored and turned to almost any end.

It also ships with a suite of integrated SEO tools to optimize search engine visibility, and that’s one of the reasons why developers rate it so very highly.


Key facts:

  • WordPress accounts for 76.4% of the CMS market
  • It supports over 68 languages
  • Plug-ins have been downloaded 1.48 billion times
  • WordPress powers many government websites around the world



Cons:

  • Themes and plugins can require annoyingly frequent updates
  • Open source can mean ‘more open to hackers’
  • Customization requires a deep level of understanding


Joomla

Joomla is another of the best PHP CMS platforms, and it’s garnered a reputation for being good for portfolio and blogging websites. It may sit somewhat in the shadow of WordPress, but it still comes with enough high-quality features to create effective blogs and dynamic websites. It works well with several versions of SQL, which means database integration should not be a problem.

This PHP CMS can integrate the site with its hosting provider in just one click and makes the creation of responsive websites a breeze. Its multitude of available designs and extensions make it easy to add extra features to any web apps that you may be designing. As one of the best PHP CMS platforms, Joomla has proved to be popular among big names that include eBay, Barnes & Noble, IKEA, and many others.


Key facts:

  • 6% of all websites rely on Joomla
  • 2 million sites and counting
  • One of the top three CMSs which offer free plug-ins and themes
  • Supports over 64 languages



Cons:

  • Not as SEO enabled as some PHP CMSs
  • Difficult for non-developers to add custom designs
  • Not many modules for sale
  • Some plug-ins not completely compatible without modification


Drupal

Drupal is one of the best PHP CMS platforms on the market. It’s open-source and well-suited to eCommerce stores, having begun life as a message board before evolving into one of the most popular PHP-based content management systems. Drupal makes it easy for developers to build enhanced online stores thanks to its rich feature set. It’s ideal for developing modern apps, which is one of the reasons why many developers are drawn to it.

While WordPress functionality can be extended further with plugins, Drupal refers to its add-ons as modules, although it already comes with many features and options. Top companies like NBC, Harvard University, Tesla, Princess Cruises, and MTV UK rely on Drupal for their web operations. It also benefits from active community support.


Key facts:

  • Drupal has around a million users
  • It’s available in over 90 languages
  • Many American government websites are Drupal-powered
  • Acquia spent half a million dollars to accelerate the migration of Drupal 7 modules to Drupal 8
  • Drupal powers around 1 million websites


Pros:

  • The platform can be greatly expanded upon
  • Frequent patches and updates enhance platform security
  • Drupal is well-suited to eCommerce
  • Best PHP CMS for websites with lots of traffic


Cons:

  • Hard to understand for non-developers
  • Not well suited to blogs or other publications
  • Installing custom modules is not easy


OctoberCMS

OctoberCMS is a free, open-source PHP CMS that a great many company websites have been built on. The CMS is flexible, simple, and ready to deliver retina-ready websites and apps.

OctoberCMS is a self-hosted open-source PHP CMS, and you can install it on your own hosting service if you want to. It integrates well with third-party apps and offers more than 700 plugins and themes. It has a large and supportive community.


Key facts:

  • Own community
  • Ecosystem of plugins & themes
  • Based on Laravel framework


Pros:

  • Open source and free
  • Versatile and extendable
  • Many and varied themes and plugins


Cons:

  • Requires developer input to customize
  • Fewer users than WordPress


Opencart

Opencart is another of the PHP-based content management systems that are ideally suited to the creation of eCommerce websites. It’s open-source, so PHP developers can easily add their own updates, and for users it’s not hard to get to grips with thanks to its intuitive UI. The platform caters to a great many languages and offers unlimited product categories for the biggest inventories out there. Opencart is a well-featured PHP CMS that gives plenty of scope to developers keen to create comprehensively featured online stores.


Key facts:

  • Opencart allows more than 20 ways to pay
  • 12k+ extensions on offer
  • Powers 790k+ websites
  • 95k+ forum members


Pros:

  • Easy to set up and get started
  • Free themes in abundance
  • Thousands of available modules and extensions
  • Makes it easy to set up sites in different languages


Cons:

  • Some technical knowledge needed for customization
  • Not very SEO-friendly
  • Bogs down when web traffic spikes
  • No event system so users can’t set up tasks from within modules


ExpressionEngine

ExpressionEngine is one of the best PHP based content management systems for sites that need to handle large amounts of content. It is an excellent PHP based CMS with an architecture that can be modified with custom scripts to introduce additional functions.

Any newly added content becomes visible to the customer straight away. ExpressionEngine is versatile enough that when it creates pages, it does so by pulling content from the database and then formatting it so that every user gets the best available view for their device. This dynamic approach to content generation makes it very flexible.


Pros:

  • Custom edit forms are available. You can navigate and fill them out easily
  • HTML agnostic template system
  • Preview window to check work before saving changes
  • Integrated SEO for content
  • Excellent security


Cons:

  • Some content boxes in certain templates don’t expand, making navigation and editing difficult
  • Poor developer network support
  • Fewer 3rd party add-ons and plugins


PyroCMS

PyroCMS is one of the best PHP CMSs, and it’s powered by the Laravel framework. Its popularity has been growing thanks to its intuitive backend design and lightweight modular architecture. It was designed to be simple, flexible, easy to learn, and easy to understand. PyroCMS’s modular design gives developers plenty of scope to bring together the right components to suit any given project.


Pros:

  • Versatile PHP CMS can be adapted to any project
  • Readily accommodates third-party APIs and apps
  • Easy to install and learn


Magento

Magento was designed with eCommerce applications in mind, and it’s now the preferred platform for building innovative online stores. Brands such as Ford, Nike, Foxconnect, and many others rely on Magento’s extremely capable eCommerce features to power their sites. The major advantage of using Magento is that it’s tailor-made for designing rich and varied online shopping experiences for customers.

Another part of Magento’s appeal is its great emphasis on security. It uses hashing algorithms for secure password management and has additional defenses to protect apps from attackers. Magento also benefits from an active developer community which frequently contributes updates and patches. With Magento 2, the platform has gained a variety of enhancements that further strengthen its position as one of the best PHP-based content management systems for online retail.


Pros:

  • The platform is feature-rich enough to power modern eCommerce stores
  • Magento is very accessible
  • The community regularly develops plug-ins and extensions
  • The platform is very scalable and can accommodate big apps


Cons:

  • The premium and enterprise versions are pricey
  • Slightly slower to load than other platforms
  • Only works with dedicated hosting
  • Product support is quite pricey

Craft CMS

Craft is one of the more recent PHP-based content management systems, but its low user count shouldn’t put you off, because it’s tailored towards pleasing developers. If you’re a non-technical user that may be a point against it, but from a developer’s point of view it’s easy to work with.

Craft gives users the scope to create their own front ends, or at least it does in principle, because doing so requires a knowledge of HTML and CSS. Despite that, it offers a clean backend, so it’s relatively easy for content editors to find their desired features and publish content frequently.


Pros:

  • Lightweight
  • Commercial features
  • Developer-centric
  • Highly functional
  • Performs well
  • Effective security


Cons:

  • Pricey
  • More for advanced users
  • Not so many plugins
  • Not open source


TYPO3

TYPO3 is one of the best PHP CMS platforms available. It runs on various operating systems including Windows, Linux, macOS, FreeBSD, and OS/2. It’s best suited to powering the portals and eCommerce platforms of large companies, and it’s supported by a sizeable community for ongoing support and discussion.

Content and code are handled separately which makes TYPO3 a very flexible proposition for users. With support for over 50 languages and integrated localization built-in, it will fit in with users no matter where they may be in the world. Installation can be completed in just a few steps.


Pros:

  • Sizeable community
  • Flexible with lots of functions
  • Enterprise-level


Cons:

  • Hard to configure
  • Entry-level training is hard to find

Inspiring Speeches from Plesk at CloudFest ‘21

Another year of CloudFest has come and gone, leaving excitement for the future and inspiration for the hosting and developer community. Although for the first time CloudFest was held completely online, the challenge created a unique and fulfilling experience for all attendees.

On day one, the talks explored the theme, ‘The Intelligent Cloud’, looking at the possibility of AI in the cloudscape. Then, the topic ‘Web Pros in the Cloud’ was at the forefront for day two, an area of interest that directly impacts the Plesk community. So what better day to present advice and news from our expert team!

So, as part of day two, there were multiple opportunities to hear directly from Plesk insiders, giving insights into the landscape for web professionals in 2021. Among other news, during the conference we had the pleasure of announcing the acquisition of our newest family member, NIXStats, and we were inspired by Jens Meggers’ call to action that now is the best time to build.

Here’s a rundown of the speeches by key members of the Plesk team:


March 24th 2021

Speaker: Jens Meggers – CEO of Plesk & cPanel

Speaking as CEO of the server Saas group, WebPros, Jens Meggers delivered a motivational speech, encouraging the attending community of web professionals to keep building. In this climate of rapid online growth, he argues, the time to build is now. Listen in to hear his insights into the world of hosting and server management in an era of speedy innovation and change.

Speaker: Jan Loeffler - CTO of Plesk

In this speech - delivered by the tech genius who keeps Plesk products top-of-the-line - we learned about the importance of choosing the best, reliable platform right from the start that builds an environment suitable for developers. Jan’s engaging report took us through the essential checklist for attracting the best DevOps talent, taking the CloudFest audience through success stories from the last 12 months.


Panelist: Lucas Radke - Product Manager at Plesk

Taking part in an enlightening workshop hosted by WordPress experts and our very own Lucas Radke, this educational event at CloudFest drew a considerable crowd, eager to learn from the best. The panel discussion was aimed at championing success on the web, just like the rest of our Plesk products.


Alongside the talks, the virtual booths for this year’s CloudFest got the Plesk team of account managers, marketers, product managers and support engineers networking! With live chatting, meeting rooms and networking opportunities, the whole team was thrilled to meet with partners another year.

Did you attend CloudFest this year? What did you learn? Would you like another chance to network with our dedicated team? Leave us a comment below, or get in touch with your account manager! See you at the next conference for web professionals!

How To Find a File In Linux From the Command Line


Need to know how to find a file in Linux? Well, surprise, surprise, you’re going to need the find command in Linux to scour your directory or file system. The Linux find command can filter objects recursively using a simple conditional mechanism, and if you use the -exec flag, you’ll also be able to find a file in Linux straightaway and process it without needing to use another command.

Locate Linux Files by Their Name or Extension

Type find into the command line to track down a particular file by its name or extension. If you want to look for *.err files in the /home/username/ directory and all sub-directories, try this:

find /home/username/ -name "*.err"

Typical Linux Find Commands and Syntax

find command expressions look like this:

find command options starting/path expression

The options attribute controls the behavior and optimization method of the find process. The starting/path attribute defines the top-level directory where the find command in Linux begins the filtering process. The expression attribute controls the assessments that scour the directory tree to create output.

Let’s break down a Linux find command where we don’t just want Linux find file by name:

find -O3 -L /var/www/ -name "*.html"

It enables the top-level optimization (-O3) and permits find to follow symbolic links (-L). The find command in Linux searches through the whole directory hierarchy under /var/www/ for files that have .html on the end.

Basic Examples

1. find . -name thisfile.txt

If you need to know how to find a file in Linux called thisfile.txt, it will look for it in current and sub-directories.

  2. find /home -name "*.jpg"

Look for all .jpg files in the /home and directories below it.

3. find . -type f -empty

Look for an empty file inside the current directory.

  4. find /home -user randomperson -mtime -6 -iname "*.db"

Look for all .db files (ignoring text case) that have been changed in the preceding 6 days by a user called randomperson.

Options and Optimization for Find Command for Linux

find is configured to ignore symbolic links (shortcut files) by default. If you’d like the find command to follow and show symbolic links, just add the -L option to the command, as we did in this example.

find can help Linux find files by name. The Linux find command also optimises its approach to filtering so that performance is improved. The user can find a file in Linux by selecting one of three stages of optimisation: -O1, -O2, and -O3. -O1 is the default setting, and it causes find to filter according to filename before it runs any other tests.

-O2 filters by name and type of file before carrying on with more demanding filters to find a file in Linux. Level -O3 reorders all tests according to their relative expense and how likely they are to succeed.

  • -O1 – (Default) filter based on file name first
  • -O2 – File name first, then file-type
  • -O3 – Allow find to automatically re-order the search based on efficient use of resources and likelihood of success
  • -maxdepth X – Search this directory along with all sub-directories to a level of X
  • -iname – Search while ignoring text case.
  • -not – Only produce results that don’t match the test case
  • -type f – Look for files
  • -type d – Look for directories
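As a quick illustration of how these flags combine, here’s a minimal sketch using a hypothetical /tmp/find_demo directory (all names below are made up for the demo):

```shell
# Create a small hypothetical directory tree to search.
mkdir -p /tmp/find_demo/sub/deeper
touch /tmp/find_demo/Notes.TXT /tmp/find_demo/sub/readme.txt /tmp/find_demo/sub/deeper/log.txt

# Regular files ending in .txt (any case), at most two levels deep,
# excluding anything called readme.txt.
find /tmp/find_demo -maxdepth 2 -type f -iname "*.txt" -not -iname "readme.txt"
```

Only /tmp/find_demo/Notes.TXT is printed: readme.txt is rejected by -not, and log.txt sits three levels down, beyond -maxdepth 2.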

Find Files by When They Were Modified

The Linux find command contains the ability to filter a directory hierarchy based on when the file was last modified:

find / -name "*jpg" -mtime -5

find /home/randomuser/ -name "*conf" -mtime -4

The initial Linux find command pulls up a list of files in the whole system that end with the characters jpg and have been modified in the preceding 5 days. The next one filters randomuser’s home directory for files with names that end with the characters “conf” and have been modified in the preceding 4 days.
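Note that a bare -mtime 5 matches files modified exactly five 24-hour periods ago; prefixing the number with - or + means "less than" or "more than" that many days. A small sketch using GNU touch and a hypothetical /tmp/mtime_demo directory:

```shell
# Hypothetical demo directory with one fresh and one old file.
mkdir -p /tmp/mtime_demo
touch /tmp/mtime_demo/today.jpg                 # modified just now
touch -d "10 days ago" /tmp/mtime_demo/old.jpg  # GNU touch can back-date a file

find /tmp/mtime_demo -name "*.jpg" -mtime -5    # modified within the last 5 days
find /tmp/mtime_demo -name "*.jpg" -mtime +5    # modified more than 5 days ago
```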

Use Grep to Find Files Based on Content

The find command in Linux is great but it can only filter the directory tree according to filename and meta data. To search files based on what they contain you’ll need a tool like grep. Take a look:

find . -type f -exec grep "forinstance" '{}' \; -print

This goes through every object in the current directory tree (.) that’s a file (-type f) and then runs grep "forinstance" for every file that matches, then prints them on the screen (-print). The curly braces ({}) are a placeholder for the results matched by the Linux find command. The {} go inside single quotes (‘) so that grep isn’t given a misshapen file name. The -exec command is ended with a semicolon (;), which also needs an escape (\;) so that it doesn’t end up being interpreted by the shell.

Before -exec was implemented, xargs would have been used to create the same kind of output:

find . -type f -print | xargs grep "forinstance"
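One caveat with the plain pipe: file names containing spaces or newlines get split apart by xargs. The null-separated variant avoids that; here’s a sketch with a hypothetical /tmp/grep_demo directory:

```shell
# A file whose name contains a space would break a plain "find | xargs" pipe.
mkdir -p /tmp/grep_demo
printf 'forinstance\n' > "/tmp/grep_demo/my notes.txt"

# -print0 separates results with NUL bytes and xargs -0 reads them back safely;
# grep -l prints only the names of matching files.
find /tmp/grep_demo -type f -print0 | xargs -0 grep -l "forinstance"
```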

How to Locate and Process Files Using the Find Command in Linux

The -exec option runs commands against every object that matches the find expression. Let’s see how that looks:

find . -name "rc.conf" -exec chmod o+r '{}' \;

This filters all objects in the current directory tree (.) for files named rc.conf and runs the chmod o+r command to alter file permissions of the results that find returns.

The commands that -exec runs are executed from find’s starting directory. Use -execdir to execute the command from the directory containing the match instead, because this can be more secure and improve performance under certain circumstances.
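A minimal sketch of the two options side by side, using a hypothetical /tmp/exec_demo directory:

```shell
# Hypothetical script that starts out non-executable.
mkdir -p /tmp/exec_demo/scripts
printf '#!/bin/sh\n' > /tmp/exec_demo/scripts/run.sh

# -exec substitutes the full path that find prints into the command:
find /tmp/exec_demo -name "run.sh" -exec chmod +x '{}' \;

# -execdir changes into the matched file's own directory first,
# so the command sees just ./run.sh:
find /tmp/exec_demo -name "run.sh" -execdir chmod +x '{}' \;
```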

The -exec and -execdir options run without further prompting, but if you’d like to confirm before they act on each match, swap -exec for -ok, or -execdir for -okdir.

How To Manage Files Using Plesk?

Let’s say you have a website that’s all ready to go on your laptop/desktop and you’d like to use File Manager to upload it to the Plesk on Linux server:

  1. On your machine, you’ll need to take the folder with all of your website’s files on it and add it to a compressed archive in one of the usual formats (ZIP, RAR, TAR, TGZ, or TAR.GZ).
  2. In Plesk, go to Files, click the httpdocs folder to open it, click Upload, choose the archive file, and then click Open.
  3. As soon as you’ve uploaded it, click in the checkbox you see alongside and then on Extract Files.
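If you prefer the terminal for step 1, the compressed archive can also be produced with tar; a minimal sketch assuming a hypothetical /tmp/mysite folder holding the site’s files:

```shell
# Hypothetical local site folder with a single page in it.
mkdir -p /tmp/mysite
printf '<h1>Hello</h1>\n' > /tmp/mysite/index.html

# Step 1: pack the folder into a TAR.GZ archive ready for uploading.
tar -czf /tmp/mysite.tar.gz -C /tmp mysite

# Sanity check: list the archive contents before uploading.
tar -tzf /tmp/mysite.tar.gz
```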

How to Edit Files in File Manager

File Manager lets you edit your website pages by default. To do this you can use:

  • An HTML editor or a “what-you-see-is-what-you-get” style of editor, which is a nice option because it adds the HTML tags for you. If you aren’t all that confident with HTML then this can be a helpful option.
  • Code editor. When you open HTML files with this one you’ll be presented with text where the HTML syntax is highlighted. If you’re comfortable with adding HTML tags yourself then code editor is for you.
  • Text editor. HTML files are opened as ordinary text with this one.

Your Plesk administrator may have already set up the Rich Editor extension, in which case you can use it for HTML file editing. Rich Editor works in a what-you-see-is-what-you-get fashion, just like the HTML editor, although it’s better specced, with features like a spellchecker for instance.

Here’s how to use File Manager to edit a file:

  1. Put the cursor over the file and the line that corresponds with it will show a highlight.
  2. Open the context menu for the file by clicking on it.
  3. Click Edit in … Editor (this will vary depending on your chosen editor).

How to Change Permissions with File Manager

There are some web pages and files that you don’t necessarily want to share with the world, and that’s where altering their permissions settings can come in handy.

To achieve this, find the item you want to restrict Internet access for like this:

  1. Place your cursor over it and wait for the highlight to appear as in the previous example.
  2. Click on the file to open its context menu and do the same again on Change Permissions.
  3. Make your change and then hit OK. If you’d like to find out more about how to view and alter permissions, see Setting File and Directory Access Permissions.

File Manager’s default approach is to change permissions in a non-recursive manner, so sub-files and directories aren’t affected by changes to the permissions of the higher-level directories they belong to. With Plesk for Linux, you can make File Manager modify permissions in a recursive manner, provided that your Plesk administrator has set up the Permissions Recursive extension and that you understand the octal notation of file permissions.
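For reference, octal notation encodes each of the owner, group, and other permission sets as one digit (4 = read, 2 = write, 1 = execute, summed). A quick command-line sketch, assuming a hypothetical /tmp/perm_demo directory and GNU coreutils:

```shell
# Hypothetical directory tree to apply permissions to.
mkdir -p /tmp/perm_demo/sub
touch /tmp/perm_demo/sub/page.html

# 755 = rwxr-xr-x: the owner gets 4+2+1, group and others get 4+1.
# -R applies the mode recursively, much as the Permissions Recursive extension does.
chmod -R 755 /tmp/perm_demo

# Show the resulting mode in octal (GNU stat).
stat -c '%a' /tmp/perm_demo/sub/page.html
```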

To enable recursive editing of access permissions:

  1. Place the cursor over the directory and wait for the highlight.
  2. Click to open its context menu and then again on Set Permissions Recursive.
  3. Now you can edit them. “Folder Permissions” is talking about the higher-level directory and any of its associated sub-directories. “File Permissions” applies to sub-files in this instance.
  4. When you’ve completed your permission amendments, click OK.

File Search in File Manager

You’ve got a little bit of latitude with file searches. You can have File Manager hunt for a specific bit of text either in the file name, in the content, or in both. You can choose how you want it to search for files by clicking on the icon that appears adjacent to your chosen search field, and then clicking on whichever type you prefer.

What is Website Downtime and Why Should You Take it Seriously?

Downtime solutions Plesk blog

If you run a business or personal website, downtime can be a major issue. Especially if you’re unaware of why it happened, or what you can do to fix it. But don’t worry — we’ll explore everything you need to know about website downtime in this simple guide.

Website downtime defined

A website is described as being “down” when it is:

  • totally inaccessible
  • unable to perform its core functions (e.g. playing videos)

But why do you need to take site downtime so seriously?

Because it’s a serious threat to any business’s online success today. Site outage, even if only brief, can:

  • leave customers frustrated and dissatisfied with your brand
  • damage your company’s reputation (particularly if site outage is a frequent occurrence)
  • contribute to a drop in your search engine ranking
  • cause you to lose clients and miss out on valuable revenue

Your site is the face of your company today. And in so many cases, it’s the most important touch point in your customers’ journey: if they can’t access your website to learn more about your brand or purchase products/services, they’ll have little choice but to look elsewhere.

So, it’s essential that your website is available 24/7 — no matter how complicated that might seem.

What are the biggest website downtime causes?

Various issues can cause a site outage. Here are the most common:

Hardware problems

For more than half of downtime cases affecting SMBs, hardware is the culprit.

You might think you have all your bases covered with network controllers, several power supplies, and levels of redundancy, but nobody can predict when a major power outage will strike or when cables will become damaged.

Even when you take several steps to protect your hardware, there’s still a risk that it will fail at one time or another — taking your site down with it.

Inferior site hosting

So, your hosting provider offered an uptime guarantee of 99%?

Sadly, that’s not worth the paper it’s printed on.

Even if the provider gave you an uptime promise of just 75%, they wouldn’t provide you with compensation for the money you lose during downtime. They’ll just compensate you for the price you paid for their service while your site was out of action. And that probably won’t be much.

Poor website hosting is one of the biggest reasons for site outage, and you won’t know the exact amount of time that your site is down because your provider is unlikely to share your monthly downtime stats with you.

But website monitoring can help. You’ll know exactly how long your site is down for, and whether it’s time to start approaching new providers to organize a better deal.

DNS flaws

DNS issues are another of the most common website downtime causes. In some cases, this is down to waiting for DNS changes to propagate, and in others, the DNS has simply been configured wrong. Something as simple as misspelling a nameserver may be responsible.

If your website isn’t loading because of a DNS issue, you have to identify the exact cause and fix it immediately.

DDoS cybersecurity attack

A DDoS (distributed denial of service) attack occurs when one or more culprits flood a server with requests. Their aim is to overwhelm the server, crash it, and cause disruptive server downtime.

Your site might not even be the intended target, but it could go down if a site that you share a server with is attacked. That’s something you need to think about when browsing for a reliable shared hosting plan.

Even a personal blog or any other non-commercial site, no matter how innocuous, could fall prey to a DDoS attack if it shares a server with the target domain.

Consider website and server downtime a real danger if you opt for a shared server, rather than a dedicated one.

Malicious hacks

Hackers can find any security weaknesses or other penetrable breaches, exploit them, and bring your website down with ruthless efficiency. Even if they have no real motivation: they might do it just because they can.

Malicious hackers may target your site specifically, unlike DDoS attacks that inflict downtime on multiple websites sharing a server. Hackers use bots to find sites with vulnerabilities and take them down, which makes their “work” easier than ever.

Your website could be vulnerable to attack from hackers unless you implement the most advanced cybersecurity solutions on the market. And that means your site could enter an enforced period of downtime from even the tiniest vulnerability.

CMS flaws

A CMS (Content Management System) can create issues that lead to downtime, no matter what system your website is built on. Installing an incompatible plugin on a WordPress site, for example, could cause a site outage.

Database errors and other internal problems can also leave your site loading partial or blank pages — or not at all.

Maintenance oversights

Maintaining your website regularly is critical to reduce the risk of downtime. Leaving it unchecked for long periods could cause you to be unaware of key issues, and lead to unexpected site outage in the future.

You need to be vigilant in checking that your site is functioning properly. Otherwise, it could experience a massive failure and an extended period of downtime.

How to prevent site downtime

Let’s explore how you can avoid downtime bringing your business to a halt.

Take advantage of a CDN

A CDN (Content Delivery Network) is a layer between a website’s server and its users. This improves the site’s speed and makes it easier to access.

CDNs function via a network of caching servers, based in various locations across the globe, that store a cached version of your website’s content. That content can then be delivered quickly to nearby users.

Essentially, a CDN serves as a buffer capable of providing your users with content even if your site goes down. It can also prevent malicious bots penetrating your website, filter traffic via analysis of IP addresses, and act as a terrific safety net in case of a brief outage.

Utilize a monitoring service for your website

A website monitoring service will monitor your website continuously, and alert you if it goes down.

Using a website monitoring service won’t prevent website downtime, but it will ensure you’re the first to know about it.

Choose monitoring software that checks your website at the shortest possible intervals and sends alerts through multiple channels. Also, create a simple status page that informs users of the problem when they land on it.

You should be the one to announce that your website is down before users start asking or complaining across social media. Take a proactive approach: keep your audience informed via status pages and social media until the problem is resolved.

Choose your host carefully

One of the most important steps in preventing site downtime is choosing the right hosting service provider. They should be equipped to handle the traffic volumes you expect at present and in the future, regardless of how high they might become.

You could endanger your service, and your entire business, if you opt for a poor-quality provider just because they offer the cheapest prices. Especially when you run an online-only business and generate huge traffic.

Only trust your website with a host that guarantees high uptime in the service level agreement (most will promise more than 99.9% uptime). Choose the provider that aligns with your budget, needs, and expectations the most. Take as much time as you need, and see what a prospective provider’s other clients have to say before you make your decision.

Back your data up

Your business should back up data regularly, as your website is prone to downtime despite the measures you’ve already taken. Store your data locally and in the cloud for total peace of mind.

A lot of hosting providers offer backup tools. Backup hosting services add an extra layer of protection if your site becomes unavailable.

It’s best to set up an additional hosting account with a different provider, so your data is stored on a separate server. Small issues, such as domain expiry, may also cause a site outage. You can avoid this by configuring your domain to auto-renew or purchasing it for longer periods at a time.

Say Hello to NIXStats: Welcoming the Newest Family Member

Plesk is proud to be a member of a growing group of innovators that are all working towards supporting web professionals.

As the WebPros group expands to welcome new and exciting technologies and software, we proudly welcome the newest member of the family: NIXStats, the OS-agnostic open-source server monitoring agent, is now a part of the WebPros group.

What is NIXStats?

NIXStats is, in short, a monitoring tool. It measures the performance of your websites and servers, and alerts you to issues or updates via almost any channel you prefer: a ping in Slack, a Discord notification, or even RocketChat for your DevOps team.

On a user-friendly dashboard, you can access network usage data, CPU activity charts, and loading times, along with your usual alerts.

It also integrates with NGINX, Apache, MongoDB, Docker, and other key development tools, letting you access their essential metrics as well.

NIXStats interface Plesk

What’s New for NIXStats and Plesk?

In March 2021, the WebPros group announced the acquisition of NIXStats and its forthcoming integration into the software of WebPros’ existing digital ecosystem.

So at Plesk, we’re the lucky ones, being an active member of the WebPros group. We love welcoming new team members into our family and serving our customers with ever more solutions, for whatever web management scenarios they might encounter. Our hosting and web management platform works daily to integrate new innovations that resolve customer inconveniences. NIXStats’ web monitoring with extension-supported metrics is sure to fit the bill.

Watch this space for updates on this customizable software in Plesk.

Get to Know NIXStats

You can learn more about the features of NIXStats here.

CentOS Project Announces Early End-of-Life Date for CentOS 8

CentOS 8 Announces Early End-of-Life Date - Plesk

We recently found out that the CentOS Project accelerated the End-of-Life date for CentOS 8, meaning that no further operating system updates will be available after December 31, 2021. In the meantime, though, Plesk will continue supporting both CentOS 7 and 8 and CloudLinux 7 and 8 until their planned end of life dates.

CentOS also announced other critical changes to its roadmap that have an impact on the Plesk products and our users and partners:

  • CentOS 8 will be transformed into an upstream (development) branch of Red Hat Enterprise Linux called CentOS Stream, whereas previous CentOS versions were part of the stable branch.
  • The CentOS 7 life cycle remains unchanged: updates and security patches will be available until June 30, 2024. The life cycle timing is subject to change.

For additional information on the CentOS Project changes, you can also read their detailed blog post or refer to the CentOS FAQ page.

Plesk Support for CentOS 8


If you’re wondering how the CentOS 8 End-of-Life policy could affect your Plesk, here are some workarounds you may want to know about. The good news is that Plesk has been investing in product support for Ubuntu for years, and will continue to support CentOS 8.

Plesk Obsidian supports Ubuntu 20.04 LTS starting from version 18.0.29, and Plesk Onyx 17.8 supports Ubuntu 18.04 LTS. Nonetheless, if you’re a Plesk Onyx user, note that from April 22, 2021, Onyx will no longer be available for new purchases and will stop receiving further development and technical support. Please read this article to learn how to upgrade to the latest Plesk Obsidian and how to automate renewals to keep your Plesk updated at all times.

When to Transition and Other Alternatives

CentOS 7 is the most popular choice among Plesk users. It will be officially supported by RHEL until June 30, 2024, and Plesk will support it until that date. CentOS 7 remains a good choice for a new server.

We will consider supporting CentOS Stream as an alternative to CentOS 8 based on how industry adoption develops. People who decide to follow the official RHEL distribution will then have CentOS Stream as an option. RHEL states that switching from CentOS 8 to CentOS Stream will be an in-place, smooth process.

Additionally, we plan to deliver AlmaLinux OS support for Plesk in summer 2021. AlmaLinux OS is a free new RHEL fork from the CloudLinux team, developed in close cooperation with the community.

Another good thing is that Plesk will also keep supporting CloudLinux OS 8. This additional supported operating system provides an upgrade path for customers with CloudLinux 6 or 7 deployments. CloudLinux is another commercially supported operating system that many of our partners benefit from. CloudLinux includes many advanced features such as improved user resource limitations, increased user visibility, and advanced customer isolation.

If you need additional information about this topic, please reach out to our support team. They will be happy to support you. And if you want to share your thoughts with us, drop us a line in the comment section below. 

Podcast | The Importance of Digital Presence with Jens Meggers

Plesk podcast digitalization

Hello Pleskians! This week we’re back and we’re kicking off Season 2 of the Official Plesk Podcast: Next Level Ops. In the episode, we’ve got Jens Meggers setting the stage for the entire season, as we talk about the importance of having a digital presence, and the growth of eCommerce in 2020.


In This Episode: You Need a Digital Presence, the Future of eCommerce, and How WebPros Can Help

COVID-19 and the global pandemic changed how many people do business. Anyone who relied on foot traffic and in-person shopping needed to make a serious pivot to survive. As a result, eCommerce grew considerably. But as Jens points out, it’s not just the website that matters:

“It’s all about getting the results fastest with the least amount of effort, because they also don’t necessarily want to hire a whole bunch of people…And it’s also not only just creating, it’s entire workflow. We already know that [a bagel shop] has bagels. But you want to pick your favorite bagel Monday morning 8am, without touching anyone. So the workflow and the automation around that is the most important thing.”

This led us into the future of eCommerce. Jens believes there’s no going back to the way things were. We’ve shown customers what they can have, and we can’t take it away now. The convenience of ordering online and picking something up is too great. Similarly, lots of people are learning for the first time that they can make good money online. That’s the highest priority now, so we will see more people creating digital presences and better experiences. “As many experiences as possible will move into the virtual world,” says Jens.

With that in mind, WebPros is equipped with all of the tools necessary to help businesses create better digital experiences. cPanel and Plesk, their flagship products, are designed to help you create and manage your website, giving you a huge suite of tools to do so.

WHMCS, the fastest growing product, allows you to automate your hosting business. XOVI helps you see how traffic is being driven to your website. Jens says, “We see this as a responsibility…to help you after [your site launches].” SolusIO and SolusVM are virtual machines done right! They simplify virtual infrastructure management, an important task in an increasingly remote-work centric world.

Key Takeaways

  • We saw industries digitize over the last 10 years: video meetings, retail stores, digital price tags, and much more. This was accelerated by the pandemic.
  • Customers thought, “Get me to the results faster.” It’s not just the website, it’s the workflow. And during the pandemic it needed to be good, fast, and cheap.
  • We have set the standard on digitization and business digitization, and it’s here to stay.
  • A lot of people are moving to transact and make money online. It is the highest priority for a lot of people.
  • WebPros supports business owners by supplying the technology that creates digital experiences (not just a shopping cart or web page). Their vision is to create a digital presence for anyone.
  • Soon, anything we can do in the physical world, we’ll be able to do in the virtual world.
  • “It’s always a good time to start a business if you have a good idea.”

The Official Plesk Podcast: Next Level Ops Featuring

Joe Casabona

Joe is a college-accredited course developer and podcast consultant. You can find him at

Jens Meggers

Jens is the CEO of WebPros.

Did you know we’re also on Spotify and Apple Podcasts? In fact, you can find us pretty much anywhere you get your daily dose of podcasts. As always, remember to update your daily podcast playlist with Next Level Ops. And stay on the lookout for our next episode!

Using Fail2ban to Secure Your Server

Fail2Ban guide Plesk blog

Meet Fail2ban. This log-parsing application is designed to monitor system logs and recognize signs that indicate automated attacks on your VPS instance.

By the time you reach the last line of this tutorial, you’ll have a better understanding of how to use Fail2ban to keep your server secure.

When Fail2ban identifies and locates an attempted compromise using your chosen parameters, it will add a new rule to iptables to block the IP address from which the attack originates. This restriction will stay in effect for a specific length of time or on a long-term basis. You can also set your Fail2ban configuration to ensure you’re notified of attacks via email as they occur.

While Fail2ban is mainly designed to focus on SSH attacks, you can also experiment with Fail2ban configuration to suit any service that utilizes log files and is at potential risk of being compromised.

Fail2ban Installation – A Step-By-Step Walkthrough

Setup on CentOS 7

  1. Make sure that your system has been updated as required, then install the EPEL repository:

  yum update && yum install epel-release

  2. Proceed with the Fail2ban installation:

  yum install fail2ban

  3. (Optional step) If you want email alerts, install Sendmail. Be aware that Sendmail is not mandatory for using Fail2ban:

  yum install sendmail

  4. Start and enable Fail2ban (as well as Sendmail, if you want to use that too):

  systemctl start fail2ban
  systemctl enable fail2ban
  systemctl start sendmail
  systemctl enable sendmail

Please be aware:

In case you’re confronted by this error: no directory /var/run/fail2ban to contain the socket file /var/run/fail2ban/fail2ban.sock, you’ll need to create the directory manually:

mkdir /var/run/fail2ban

Setup on Debian

  1. Confirm that your system is updated and ready:

  apt-get update && apt-get upgrade -y

  2. Proceed with the Fail2ban installation:

  apt-get install fail2ban

Now, the service will start automatically.

  1. (Optional step) For email support, install Sendmail:

  apt-get install sendmail-bin sendmail

Please be aware:

In its present iteration, Sendmail in Debian Jessie includes an upstream bug known to trigger a number of errors (see below) as a result of installing sendmail-bin. The installation will pause for a brief period before it reaches completion. Errors:

Creating /etc/mail/

ERROR: FEATURE() should be before MAILER() MAILER('local') must appear after FEATURE('always_add_domain')

ERROR: FEATURE() should be before MAILER() MAILER('local') must appear after FEATURE('allmasquerade')

Setup on Fedora

  1. Ensure that your system has been updated before you proceed:

  dnf update

  2. Start the Fail2ban installation:

  dnf install fail2ban

  3. (Optional step) Install Sendmail if you would prefer email support:

  dnf install sendmail

  4. Start and enable Fail2ban (along with Sendmail, as you see fit):

  systemctl start fail2ban
  systemctl enable fail2ban
  systemctl start sendmail
  systemctl enable sendmail

Setup on Ubuntu

  1. Check that your system has been updated:

  apt-get update && apt-get upgrade -y

  2. Continue with the Fail2ban installation:

  apt-get install fail2ban

You’ll see that the service will start automatically.

  1. (Optional step) Install Sendmail if you want email support:

  apt-get install sendmail

  2. Grant SSH access via UFW, then enable the firewall:

  ufw allow ssh
  ufw enable


The Fail2ban Configuration Process

In this next part of the tutorial, you’ll find a number of examples exploring popular Fail2ban configurations that use the fail2ban.local and jail.local files. Fail2ban reads .conf configuration files first; .local files then override any of their settings.

As a result, any configuration adjustments tend to be performed in .local files while the .conf files remain unaffected.

How to Configure fail2ban.local

  1. fail2ban.conf carries the default configuration profile, and these standard settings offer a decent working setup. If you prefer to make edits, though, do so in a separate file, fail2ban.local, which overrides fail2ban.conf. Copy fail2ban.conf to fail2ban.local:

  cp /etc/fail2ban/fail2ban.conf /etc/fail2ban/fail2ban.local

  2. From this point, you may adjust the definitions in fail2ban.local to align with the configuration you want to set up. You can change the following values:

    • loglevel: You can set the detail level provided by the Fail2ban logs to: 1 (error), 2 (warn), 3 (info), or 4 (debug).

    • logtarget: This will log actions in a defined file (the default value of /var/log/fail2ban.log adds all logging into it). On the other hand, you could edit the value to:

      • STDOUT: output any data

      • STDERR: output any errors

      • SYSLOG: message-based logging

      • FILE: output to a file

    • socket: The socket file’s location.

    • pidfile: The PID file’s location.
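Putting those options together, a minimal fail2ban.local might contain only the values you override; everything else is inherited from fail2ban.conf. The values below are illustrative, not recommendations:

```ini
[Definition]
# 3 = info; see the loglevel values listed above
loglevel = 3
logtarget = /var/log/fail2ban.log
```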

How to Configure the Fail2ban Backend

  1. By default, the jail.conf file enables Fail2ban for SSH on Debian and Ubuntu, though not on CentOS. Alternative protocols and configurations (such as FTP, HTTP, and so on) are commented out; you can adjust this if you wish. You’ll need to make a jail.local for editing:

  cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

  2. Do you use Fedora or CentOS? You’ll have to switch the backend option in jail.local from auto to systemd. Be aware, though, that this isn’t needed on Debian 8 or Ubuntu 16.04, despite both being capable of using systemd too.

File: /etc/fail2ban/jail.local

# "backend" specifies the backend used to get files modification.

# Available options are "pyinotify", "gamin", "polling", "systemd" and "auto".

# This option can be overridden in each jail as well.

. . .

backend = systemd

Please be aware:

When the backend configuration is set to auto, Fail2ban monitors log files using pyinotify first, then attempts gamin. If neither is available, it falls back to a polling algorithm.

By default, there are no jails enabled in CentOS 7. For instance, if you wish to proceed with enabling the SSH daemon jail, you should uncomment these lines in jail.local:

File: /etc/fail2ban/jail.local


[sshd]

enabled = true

How to Configure Fail2ban jail.local

Want to familiarize yourself with the settings available in Fail2ban? Start by opening your jail.local file and locate the configurations available:

File: /etc/fail2ban/jail.local


[DEFAULT]

ignoreip =

bantime = 600

findtime = 600

maxretry = 3

backend = auto

usedns = warn

destemail = [email protected]

sendername = Fail2Ban

banaction = iptables-multiport

mta = sendmail

protocol = tcp

chain = INPUT

action_ = %(banaction)...

action_mw = %(banaction)...


action_mwl = %(banaction)s...

Let’s consider an example. If you switch the usedns setting to no, Fail2ban will not use reverse DNS to implement its bans; it will ban the IP address itself. When it’s set to warn, Fail2ban will attempt a reverse lookup to find the hostname and use that to initiate a ban.

What does the chain setting relate to? It’s the iptables chain into which the jump rules for ban actions are added, set to the INPUT chain by default. If you want to learn more about iptables chains, feel free to check out our comprehensive What is iptables resource.

How to Configure Fail2ban Chain Traffic Drop

If you want to look at your Fail2ban rules, use iptables’ --line-numbers option.

iptables -L f2b-sshd -v -n --line-numbers

You should see output similar to the following (depending on your Fail2ban version, the chain may be named f2b-sshd or fail2ban-SSH):

Chain fail2ban-SSH (1 references)

num pkts bytes target prot opt in out source destination

1 19 2332 DROP all -- * *

2 16 1704 DROP all -- * *

3 15 980 DROP all -- * *

4 6 360 DROP all -- * *

5 8504 581K RETURN all -- * *

If you would like to, you may use the iptables -D chain rulenum command to remove a rule that has been applied to a specific IP address. Swap rulenum for the rule number shown in the num column for that IP address. For instance, you can remove the second rule by issuing this command:

iptables -D fail2ban-SSH 2

How to Configure Fail2ban Ban Time and Retry Limits

Set bantime, findtime, and maxretry to configure a ban’s circumstances and the amount of time it lasts:

File: /etc/fail2ban/jail.local

# “bantime” is the number of seconds that a host is banned.

bantime = 600

# A host is banned if it has generated "maxretry" during the last "findtime"

# seconds.

findtime = 600

maxretry = 3

  • findtime: The window of time, in seconds, within which repeated failed login attempts count toward a ban. As an example, let’s say Fail2ban is set to ban an IP following four (4) failed login attempts. Those four attempts must take place within the findtime limit, which defaults to 600 seconds (10 minutes).

  • maxretry: To determine whether a ban is justified, Fail2ban uses findtime and maxretry together. Should the number of attempts reach the limit set by maxretry within the findtime window, Fail2ban will set a ban. The default is 3.

  • bantime: The duration of time (in seconds) an IP will be banned for; a negative number makes the ban permanent. The default value is 600, which bans an IP for a period of 10 minutes.
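The way these three settings interact can be sketched in a few lines of Python. This is illustrative logic only, not Fail2ban’s actual implementation:

```python
def should_ban(failure_times, now, findtime=600, maxretry=3):
    """Ban when at least `maxretry` failures occurred in the last `findtime` seconds."""
    recent = [t for t in failure_times if now - t <= findtime]
    return len(recent) >= maxretry

# Once a ban is applied, it lasts `bantime` seconds (600 by default).
```

Failures older than the findtime window are ignored, so a slow trickle of mistakes from a legitimate user never crosses the threshold.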

How to Configure ignoreip for Fail2ban

You can specify IPs you want Fail2ban to ignore by adding them to the ignoreip line. By default, localhost is not banned. If you frequently work from a single IP address, it may be to your benefit to add it to the ignore list:

File: /etc/fail2ban/jail.local


# "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not

# ban a host which matches an address in this list. Several addresses can be

# defined using space separator.

ignoreip =

ignoreip: With this setting, you define which IP addresses are excluded from Fail2ban rules. Add any IPs you want to ignore to the ignoreip configuration (as per the example). This doesn’t ban localhost by default. If you regularly work from a single IP address, you may want to add it to the ignore list.

Want to whitelist IPs only for specific jails? Use the fail2ban-client command. Just switch JAIL with your jail’s name, and <IP> with the IP you intend to whitelist.

fail2ban-client set JAIL addignoreip <IP>

How to Set up Fail2ban Email Alerts

You may want to get email alerts whenever something triggers Fail2ban. You can do this by changing the email settings:

  • destemail: The address at which you want to get your emails.

  • sendername: The name attributed to the email.

  • sender: The address which Fail2ban sends emails from.

Please be aware:

Run the command sendmail -t [email protected], switching [email protected] with your email address, if you’re not sure what to put under sender. Check your email (including spam folders if required) and note the sender address. You can use that address for the configuration above.

You’re also required to edit the action setting, which defines the actions undertaken when the ban threshold is met. The default, %(action_)s, will only ban the user. %(action_mw)s will ban and send an email including a WhoIs report. With %(action_mwl)s, a ban is implemented and an email with the WhoIs report and any relevant lines from the log file is sent. You can also adjust this on a jail-specific basis.
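Assembled in jail.local, the email-related settings from this section might look like this (the destination address is a placeholder):

```ini
[DEFAULT]
destemail = admin@example.com
sendername = Fail2Ban
mta = sendmail
# ban, then email a WhoIs report plus the relevant log lines
action = %(action_mwl)s
```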

How to Configure Fail2ban banaction and ports

Beyond the basic settings addressed above, jail.local also holds jail configurations for multiple common services (such as SSH, FTP, and HTTP). Only SSH is enabled by default, and its action is to ban the problematic host/IP address by modifying the iptables firewall rules.

Expect the standard jail configuration to look like this:

File: /etc/fail2ban/jail.local

# Default banning action (e.g. iptables, iptables-new,

# iptables-multiport, shorewall, etc) It is used to define

# action_* variables. Can be overridden globally or per

# section within jail.local file

banaction = iptables-multiport

banaction_allports = iptables-allports


[sshd]

enabled = true

port = ssh

filter = sshd

logpath = /var/log/auth.log

maxretry = 6

  • banaction: This defines the action that should be taken if the threshold is met. When you configure the firewall to use firewalld, set the value to firewallcmd-ipset. If you configure the firewall to use UFW, then the value should be set to ufw.

  • banaction_allports: This will block a remote IP in each port. If you configure the firewall to use firewalld, the value should be set to firewallcmd-ipset.

  • enabled: Determine if the filter should be activated or not.

  • port: This is the port that Fail2ban should reference for the service. If you use the default port, you can put the service name here. If you use a non-standard port, this must be the port number instead. E.g. if you changed your SSH port to 3775, you would replace ssh with that number.

  • filter: This is the name of the file found in /etc/fail2ban/filter.d containing the failregex information used for parsing log files correctly. You don’t need to include the .conf suffix.

  • logpath: Provides the service’s logs location.

  • maxretry: This overrides the global maxretry for the service you define. You may also add findtime and bantime.

  • action: You may add this as an extra setting when the default action is inappropriate for the jail. You can find other actions in the action.d folder.

Please be aware:

You may choose to configure jails as individual .conf files within the jail.d directory. The format stays the same.

Securing Servers with Fail2ban Filters

Now, we’ll explore your system’s Fail2ban filters defined within their respective configuration files.

You will see your system’s filters in the /etc/fail2ban/jail.conf file or the /etc/fail2ban/jail.d/defaults-*.conf file, depending on your version of Fail2ban.

Look up your /etc/fail2ban/jail.conf file and check out the ssh/sshd filter:

File: /etc/fail2ban/jail.conf


[sshd]

enabled = true

port = ssh

filter = sshd

logpath = /var/log/auth.log

maxretry = 5

If you use a version of Fail2ban newer than 0.8, examine both your defaults-*.conf and jail.conf files. In that case, your jail.conf file will look like this:

File: /etc/fail2ban/jail.conf


[sshd]

port = ssh

logpath = %(sshd_log)s

Next, if your system uses Fail2ban 0.8 or beyond, it will have a defaults-*.conf including these filters:

File: /etc/fail2ban/jail.d/defaults-*.conf


[sshd]

enabled = true

maxretry = 3

If you want to test current filters, run the example command, replacing logfile, failregex, and ignoreregex with your preferred values.

fail2ban-regex logfile failregex ignoreregex

Using the sshd filter from the start of this section, the command looks like this:

fail2ban-regex /var/log/auth.log /etc/fail2ban/filter.d/sshd.conf

Your Fail2ban filters will need to work with:

  1. Different logs types created by varied software

  2. Varied configurations and a number of operating systems

Alongside the above, your filters should be log-format agnostic too. They should also be protected against DDoS attacks, and must be compatible with other versions of the software to be released in the future.

How to Customize ignoreregex Configurations

Before you can make adjustments to the failregex configuration, customization of ignoreregex is required. Fail2ban needs to understand what server activity is regarded as normal, and what isn’t.

For instance, you may want to exclude normal activity from cron or MySQL on your server by setting up ignoreregex to filter out the log lines created by those programs:

File: /etc/fail2ban/filter.d/sshd.conf

ignoreregex = : pam_unix\((cron|sshd):session\): session (open|clos)ed for user (daemon|munin|mysql|root)( by \(uid=0\))?$
              : Successful su for (mysql) by root$
              New session \d+ of user (mysql)\.$
              Removed session \d+\.$

Now that you’ve filtered out each program’s normal log lines, you’re free to tweak failregexes to block whatever you like.
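Since Fail2ban uses Python regular expressions, you can check a pattern like the first ignoreregex line above against a sample log entry with the re module. The log line below is invented for illustration:

```python
import re

# First pattern from the ignoreregex example above
pattern = (r": pam_unix\((cron|sshd):session\): session (open|clos)ed "
           r"for user (daemon|munin|mysql|root)( by \(uid=0\))?$")

line = "Oct  1 12:00:00 server CRON[1234]: pam_unix(cron:session): session opened for user root by (uid=0)"

# A match means Fail2ban would treat this activity as normal and ignore it.
assert re.search(pattern, line) is not None
```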

How to Customize Failregexs

Fail2ban includes numerous filters, but you might prefer to customize them further or make your own based on your personal needs. Fail2ban uses regular expressions (regex) to parse log files, searching for password failures and attempted break-ins, and it relies on Python’s regex extensions.

What’s the most effective way to learn how a failregex functions? Write one yourself. While we don’t recommend letting Fail2ban monitor your WordPress access.log on heavily trafficked websites because of CPU concerns, it does provide an easily understood log file that you can use to learn about failregex creation.

Writing a Fail2ban Regex

  1. Go to your website’s access.log (usually found under /var/www/) and locate a failed login attempt. It will look like this:

File: /var/www/

 - - [01/Oct/2015:12:46:34 -0400] "POST /wp-login.php HTTP/1.1" 200 1906 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:40.0) Gecko/20100101 Firefox/40.0"

You just need to match up to the 200:

 - - [01/Oct/2015:12:46:34 -0400] "POST /wp-login.php HTTP/1.1" 200

  1. The IP address that the unsuccessful attempt came from will always be defined as <HOST>. The few characters after it never change, so you can enter them as literals:

  <HOST> - - \[

The \ before the [ indicates that the square bracket should be read literally.

  1. You can use regex expressions to write the subsequent section (the date on which the login attempt occurred) as grouped expressions. As per this example, the first portion (here, 01) may be written as (\d{2}). The parentheses form the expression group, \d searches for numerical digits, and {2} means the expression looks for exactly two digits in a row (the day of the month, e.g. 01, 24, 25).

By this point, you will have:

<HOST> - - \[(\d{2})

The following forward slash is matched with a literal forward slash. This is followed by \w{3}, which looks for a run of three alphanumeric characters in any case, such as the month abbreviation Oct. The next forward slash is also literal:

<HOST> - - \[(\d{2})/\w{3}/

The year section will be written in a similar way to the day but you don’t require a capture group, and for four characters in a row along with a literal colon:

<HOST> - - \[(\d{2})/\w{3}/\d{4}:

  4. The next sequence is the time: three two-digit numbers separated by literal colons. Because the day of the month was defined as a two-digit number in the first capture group, it can be backreferenced with \1:

<HOST> - - \[(\d{2})/\w{3}/\d{4}:\1:\1:\1

Be aware, though, that a backreference matches the exact text the group captured, not the pattern itself: \1 here only matches a time whose hour, minute, and second digits are all identical to the day of the month (for example, 01:01:01 on the 1st).

If you prefer not to use backreferences, you can write each two-digit run explicitly:

<HOST> - - \[\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2}
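Because Fail2ban uses Python regular expressions, you can check the difference between the two forms with a quick Python snippet (the timestamp is taken from the sample log entry):

```python
import re

# The two candidate time patterns discussed above
backref = r'(\d{2})/\w{3}/\d{4}:\1:\1:\1'
explicit = r'\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2}'

ts = '01/Oct/2015:12:46:34'           # timestamp from the sample log entry
print(bool(re.search(backref, ts)))   # False: a backreference matches the
                                      # captured text itself, and 12, 46, 34
                                      # are not "01"
print(bool(re.search(explicit, ts)))  # True: any two digits are accepted
```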

  5. Write the -0400 segment in a similar way to the year, with an extra literal hyphen: -\d{4}. Then close the square bracket (escaping it with a backslash first) and finish with the literal string from the log entry:

<HOST> - - \[(\d{2})/\w{3}/\d{4}:\1:\1:\1 -\d{4}\] "POST /wp-login.php HTTP/1.1" 200

Or, without backreferences:

<HOST> - - \[\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2} -\d{4}\] "POST /wp-login.php HTTP/1.1" 200
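To sanity-check the finished pattern, you can run it against the sample log entry in plain Python. Fail2ban expands its host tag into an IP-matching group at runtime; the named group below plays that role here (the log line and IP are illustrative):

```python
import re

# Illustrative access.log line (the IP is from the TEST-NET documentation range)
line = ('203.0.113.25 - - [01/Oct/2015:12:46:34 -0400] '
        '"POST /wp-login.php HTTP/1.1" 200 1906 "" "Mozilla/5.0"')

# The failregex built above; in plain Python, the client-IP portion is
# represented by a named capturing group
pattern = (r'(?P<host>\S+) - - \[\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2} '
           r'-\d{4}\] "POST /wp-login\.php HTTP/1\.1" 200')

match = re.search(pattern, line)
print(match.group('host'))  # → 203.0.113.25
```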

How to Apply the Failregex

Now that the failregex has been set up, it should be added to a filter:

  1. Go to Fail2ban’s filter.d directory:

  2. cd /etc/fail2ban/filter.d

  3. Make a file named wordpress.conf and add your failregex:

File: /etc/fail2ban/filter.d/wordpress.conf

# Fail2Ban filter for WordPress

[Definition]

failregex = <HOST> - - \[\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2} -\d{4}\] "POST /wp-login.php HTTP/1.1" 200

ignoreregex =

Save and quit.

  4. Add a WordPress section to jail.local:

File: /etc/fail2ban/jail.local

[wordpress]
enabled = true

filter = wordpress

logpath = /var/www/html/andromeda/logs/access.log

port = 80,443

This uses the default ban and email actions; you can define additional actions by adding an action = line.

Save and exit. Restart Fail2ban.

How to Use the Fail2ban Client

Fail2ban provides a command-line client, fail2ban-client, which you can use to operate Fail2ban from the command line:

fail2ban-client COMMAND

  • start: For starting the Fail2ban server and jails.

  • reload: To reload the Fail2ban configuration files.

  • reload JAIL: To reload just one jail; replace JAIL with the name of the Fail2ban jail.

  • stop: To terminate the server.

  • status: For displaying the server status and enabled jails.

  • status JAIL: For displaying the jail status, including IPs that are banned currently.

If you want to check that Fail2ban is running and that the sshd jail is enabled, for example, you would run:

fail2ban-client status

The output would be:

Status
|- Number of jail:	1
`- Jail list:	sshd

You can find more on fail2ban-client commands in the Fail2ban wiki.

Understanding Lockout Recovery

Imagine that you lock yourself out of your VPS instance because of Fail2ban. Don’t worry: you’ll still be able to get back in via console access.

From here, you can check your firewall rules to confirm that it was Fail2ban that blocked your IP, rather than something else. Do this by entering:

iptables -n -L

Search for your IP address in the source column of any Fail2ban chains (which are prefixed with f2b- or fail2ban-) to verify whether the Fail2ban service blocked you:

Chain f2b-sshd (1 references)

target     prot opt source         destination

REJECT     all  --  203.0.113.9    0.0.0.0/0    reject-with icmp-port-unreachable
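If you’d rather script this check than eyeball the output, a small Python sketch can scan the iptables listing for your address (the chain listing and IP below are illustrative):

```python
# Illustrative `iptables -n -L` output containing a Fail2ban chain
sample = """Chain f2b-sshd (1 references)
target     prot opt source         destination
REJECT     all  --  203.0.113.9    0.0.0.0/0    reject-with icmp-port-unreachable
"""

def is_banned(iptables_output, my_ip):
    """Return True if my_ip appears inside a Fail2ban chain."""
    in_f2b_chain = False
    for line in iptables_output.splitlines():
        if line.startswith('Chain'):
            # Fail2ban chains are prefixed with f2b- or fail2ban-
            in_f2b_chain = line.split()[1].startswith(('f2b-', 'fail2ban-'))
        elif in_f2b_chain and my_ip in line.split():
            return True
    return False

print(is_banned(sample, '203.0.113.9'))  # → True
```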

If you want to remove your IP address from a jail, enter the following command, replacing IP and jailname with the IP address and the name of the jail you want to unban from:

fail2ban-client set jailname unbanip IP

Please be aware:

If you’re unable to recall the name of your jail, you can list all jails with the following:

fail2ban-client status

If you decide that you want to stop using the Fail2ban service at any point, enter:

fail2ban-client stop

However, on CentOS 7 and Fedora, two additional commands are required to fully stop and disable it:

systemctl stop fail2ban

systemctl disable fail2ban

How Plesk and Fail2ban Work Together

In this section, we’ll look at how Plesk and Fail2ban work together.

Fail2Ban is enabled by default in Plesk Obsidian: every jail available will be turned on and Fail2Ban’s default settings will be utilized.

You can safeguard your server from brute force attacks through IP address banning (Fail2Ban). Fail2Ban uses regular expressions to monitor log files, spotting patterns that correspond to authentication failures, exploit attempts, and other suspicious entries.

Log entries of these types are counted, and when their number reaches a predefined value, Fail2Ban issues a notification email or bans the offending IP for a set period. The IP address is automatically unbanned when the ban period ends.

Fail2Ban logic is built around a number of jails. A jail is a set of rules covering a specific scenario: its settings define what happens when an attack is detected according to a preset filter (a set of one or more regular expressions for log monitoring).

You can adjust Fail2Ban settings like so:

  1. Navigate to Tools & Settings > IP Address Banning (Fail2Ban) (under “Security”).

  2. Make your way to the “Settings” tab, where you can tweak:

    • IP address ban period – the time interval that an IP address is banned for (in seconds). The IP address is automatically unbanned once this period has ended.

    • Time interval for detection of subsequent attacks – the time interval during which the system will count the amount of failed sign-in attempts and additional undesirable behaviors from an IP address (in seconds).

    • Number of failures before the IP address is banned – the amount of unsuccessful login attempts connected to the IP address.

  3. Click OK.
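The counting behavior these three settings control can be sketched in a few lines of Python. This is an illustration of the sliding-window logic, not Fail2Ban’s actual code:

```python
from collections import deque

# Illustrative values for the settings described above
findtime = 600   # detection window for subsequent attacks, in seconds
maxretry = 5     # failures allowed before the IP address is banned

failures = {}    # ip -> timestamps of recent failures

def register_failure(ip, now):
    """Record a failed attempt; return True when the IP should be banned."""
    window = failures.setdefault(ip, deque())
    window.append(now)
    # Drop failures that fall outside the detection window
    while window and now - window[0] > findtime:
        window.popleft()
    return len(window) >= maxretry

banned = False
for t in range(5):                     # five quick failures in a row
    banned = register_failure('203.0.113.7', t)
print(banned)  # → True on the fifth failure within the window
```

Once an IP is banned this way, Fail2Ban keeps the ban in place until the configured ban period elapses.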

You’ll see these limitations and peculiarities in Fail2Ban in Plesk:

  • Fail2Ban defends against attacks with IPv4 and IPv6 addresses.

  • Fail2Ban depends entirely on IP addresses (it performs no hostname lookups) unless it is reconfigured.

  • Fail2Ban is unable to safeguard against distributed brute force attacks, as it recognizes intruders through their IP address.

  • The VPS iptables records limit (numiptent) can affect Fail2Ban’s operation if your Plesk is installed on a VPS. Fail2Ban will cease operating as it should once this limit is exceeded, and you’ll find a line like this in the Fail2ban log: fail2ban.actions.action: ERROR iptables -I fail2ban-plesk-proftpd 1 -s -j REJECT --reject-with icmp-port-unreachable returned 100. In this situation, you should get in touch with your VPS hosting provider for a resolution.

If you don’t want to block an IP address:

  1. Navigate to Tools & Settings > IP Address Banning (Fail2Ban) > Trusted IP Addresses > Add Trusted IP.

  2. Enter an IP address, an IP range, or a DNS host name in the IP address field, then click OK.

You can view (and download) Fail2Ban log files by going to Tools & Settings > IP Address Banning (Fail2Ban) > the Logs tab.

You’re free to view the banned IP addresses, unban them, or add them to your trusted address list in Tools & Settings > IP Address Banning (Fail2Ban) > the Banned IP Addresses tab.

You can view the list of IP addresses you never want banned, and add or remove addresses from it, in Tools & Settings > IP Address Banning (Fail2Ban) > the Trusted IP Addresses tab.

Thank you for reading this comprehensive Fail2ban configuration tutorial. Now, you should have all the insights you need to take advantage of Fail2ban to fully secure your server.

How to Install and Configure CSF

CSF installation guide Plesk blog

ConfigServer Security & Firewall (CSF) is a firewall application suite for Linux servers that includes login/intrusion detection for applications such as SSH, POP3, IMAP, and SMTP.

CSF will recognize when a user is signing into the server through SSH and send you an alert if they attempt to utilize the “su” command to attain higher privileges on the server.

Another key function of CSF is checking for failed login authentications on mail servers (IMAP, Exim, uw-imap, Dovecot, Kerio), FTP servers (Pure-FTPd, ProFTPD, vsftpd), OpenSSH servers, and Plesk and cPanel servers, making it a replacement for software such as Fail2ban.

CSF is a solid security solution for server hosting, and it can be integrated easily into Plesk and WHM/cPanel’s user interface.

Steps to follow:

Step One – Install CSF Dependencies

As CSF is based on Perl, you’ll need to install Perl on your server to begin. You should also have wget for downloading the CSF installer, and vim (or an editor of your choosing) to edit the CSF configuration file.

When ready, you should install the packages using the following command:

yum install wget vim perl-libwww-perl.noarch perl-Time-HiRes

Step Two – CSF Installation

Navigate to the “/usr/src/” directory and download CSF using wget:

cd /usr/src/
wget https://download.configserver.com/csf.tgz

Extract the tar.gz file, enter the CSF directory, and run the installer:

tar -xzf csf.tgz
cd csf
sh install.sh

If this has gone smoothly, you’ll see a message stating that the CSF installation has been completed. Next, check that CSF actually works on this server. How? Make your way to the “/usr/local/csf/bin/” directory and run the csftest.pl script:

cd /usr/local/csf/bin/
perl csftest.pl

You’ll know that CSF is operating on your server with no issues if you see the following response:

RESULT: csf should function on this server

Step Three – Configuration of CSF

There’s one thing you should know before you dive into configuring CSF: CentOS 7’s default firewall application (“firewalld”) must be stopped and removed from startup.

To stop it:

systemctl stop firewalld

To disable and remove firewalld from the startup:

systemctl disable firewalld

Next, head to the CSF Configuration directory “/etc/csf/” and change the file “csf.conf” using the vim editor:

cd /etc/csf/
vim csf.conf

To apply the CSF firewall configuration, change “TESTING” on line 11 to “0”:

TESTING = "0"


By default, CSF allows incoming and outgoing traffic for the standard SSH port 22. If you use an alternative SSH port, add your port of choice to the “TCP_IN” configuration on line 139.

Next, start CSF and LFD with the following command:

systemctl start csf
systemctl start lfd

Set up the csf and lfd services to start when booting:

systemctl enable csf
systemctl enable lfd

Now you can list CSF’s default rules with the command:

csf -l

Step Four – Basic CSF Commands

1. Starting the CSF firewall (enabling firewall rules):

csf -s

2. Flushing/stopping firewall rules.

csf -f

3. Reloading firewall rules.

csf -r

4. To allow an IP and add it to csf.allow (203.0.113.25 here is an example address):

csf -a 203.0.113.25

Here are the results:

Adding to csf.allow and iptables ACCEPT...
ACCEPT all opt -- in !lo out * ->
ACCEPT all opt -- in * out !lo ->

5. To remove and delete an IP from csf.allow:

csf -ar 203.0.113.25

Here are the results:

Removing rule...
ACCEPT all opt -- in !lo out * ->
ACCEPT all opt -- in * out !lo ->

6. To deny an IP and add it to csf.deny:

csf -d 203.0.113.25

Here are the results:

Adding to csf.deny and iptables DROP...
DROP all opt -- in !lo out * ->
LOGDROPOUT all opt -- in * out !lo ->

7. To remove and delete an IP from csf.deny:

csf -dr 203.0.113.25

Here are the results:
Removing rule...
DROP all opt -- in !lo out * ->
LOGDROPOUT all opt -- in * out !lo ->

8. To remove and unblock every entry in csf.deny:

csf -df

Here are the results:
DROP all opt -- in !lo out * ->
LOGDROPOUT all opt -- in * out !lo ->
DROP all opt -- in !lo out * ->
LOGDROPOUT all opt -- in * out !lo ->
csf: all entries removed from csf.deny

9. To search for a pattern match in iptables (such as a CIDR, IP, or port number):

csf -g 203.0.113.25

Step Five – Advanced Configuration

Want to configure CSF as and when you need to? Check out these tweaks.

Go back to the csf configuration directory and change the csf.conf configuration file like so:

cd /etc/csf/
vim csf.conf

1. Preventing LFD from blocking IP addresses in your csf.allow files:

By default, LFD can still block IPs that are listed in csf.allow. If you want to make sure that a certain IP in csf.allow is never blocked by LFD, go to line 272 and set “IGNORE_ALLOW” to “1”.

This can be helpful when you use a static IP at work or home and would like to make sure that the internet server or firewall never blocks it.


2. Enable incoming and outgoing ICMP

Head to line 152 for incoming ping/ICMP:

ICMP_IN = "1"

And for outgoing ping/ICMP, go to line 159:

ICMP_OUT = "1"

3. Blocking specific countries

CSF gives you the option to deny or allow access by country, using ISO country codes.

How? Go to line 836 and add the codes of the countries you want to allow or deny:
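For example, the entries might look like this (the country codes are illustrative; CC_DENY blocks the listed countries, while CC_ALLOW restricts access to only the listed ones, so use it with care):

```
CC_DENY = "CN,RU"
CC_ALLOW = ""
```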


4. Emailing the su and SSH login log

Another trick you can try is setting an address that LFD can use to send alert emails about “SSH login” events and occasions when users run the “su” command.

To do this, find line 1069 and set the values to “1”:

LF_SSH_EMAIL_ALERT = "1"
LF_SU_EMAIL_ALERT = "1"

Then enter the email address you would like these alerts sent to on line 588:

LF_ALERT_TO = "you@example.com"

Looking for extra changes you can make? Take a look at the options in the “/etc/csf/csf.conf” configuration file.


CSF is a valuable application-based firewall for iptables-based Linux servers, offering a wide range of features. It is supported by Plesk, cPanel/WHM, DirectAdmin, and Webmin.

Fortunately, CSF is simple to install and configure, and easy to use on the server, so it can make security management much more efficient for sysadmins.

The Beginner’s Guide to LiteSpeed Cache for WordPress

Litespeed for WordPress Plesk

Congratulations! You’ve installed the LiteSpeed Cache for WordPress plugin, activated it, and are ready to take the next step.

But what does that mean?

For a lot of us, the sight of all those settings tabs is enticing enough to make you want to dive straight in. But others become overwhelmed, almost frozen, by the sheer number of options available.

Sound familiar? Don’t worry—you’ve come to the right place.

In this post, we’ll look at setting up LSCache in a quick, simple way. We’ll explore the major details you need to know to take full advantage of the LiteSpeed WordPress cache plugin.


What do I do now that I’ve installed LSCache for WordPress and activated it?

LSCache for WordPress basically serves two roles: it’s a full-page cache for a website’s dynamically-generated pages and a site-optimization plugin.

Many users who install LSCache focus on utilizing its caching functions and consider everything else to be the cherry on the cake.

The crucial thing to remember is that you can enable the caching functions and ignore everything else. You have that freedom, which is one of the most appealing aspects of LiteSpeed Cache for WordPress.

When you activate it, you’ll see that everything is disabled. You can turn caching on by going to LiteSpeed Cache > Cache > Cache and switching Enable Cache to ON.

Now, you could leave your LSCache configuration there if you wanted to. You could forget about experimenting with additional settings and this WordPress cache plugin would likely cache your website brilliantly. We selected the default settings to work with most sites straight away.

As we move on, we’ll consider the Cache section’s first four tabs and their functions. They’re the cache’s most basic settings.

Using LSCache for WordPress as a Beginner

Cache Tab

On the Cache tab, the first option enables or disables the caching functionality. The rest of the settings let you define which content types are cached; everything is enabled by default. Unsure what these settings actually do? You may be best off keeping them at their defaults for the time being.


TTL Tab

TTL (Time To Live) is the length of time, in seconds, that a page can stay in the cache before it’s regarded as stale. When a page’s TTL is reached, it’s cleared out of the cache. We selected default TTLs that should suit the majority of websites, but feel free to adjust them as you see fit.
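The idea behind TTL can be sketched in a few lines of Python (a toy model, not LSCache’s implementation; the one-week TTL is just an example value):

```python
# Toy model of TTL-based page caching (not LSCache's implementation)
class PageCache:
    def __init__(self, ttl):
        self.ttl = ttl          # lifetime of a cached page, in seconds
        self.store = {}         # url -> (rendered_html, time_stored)

    def put(self, url, html, now):
        self.store[url] = (html, now)

    def get(self, url, now):
        entry = self.store.get(url)
        if entry and now - entry[1] < self.ttl:
            return entry[0]     # still fresh: serve from cache
        return None             # missing or stale: must be re-rendered

cache = PageCache(ttl=604800)                  # example: one week
cache.put('/blog/', '<p>Hello</p>', now=0)
print(cache.get('/blog/', now=3600))           # within TTL → cached HTML
print(cache.get('/blog/', now=700000))         # past TTL → None (stale)
```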

Purge Tab

In certain scenarios, pages should be cleared from the cache ahead of their natural date of expiry. In this section, you can set the rules for this behavior. Default selections should be suitable for most sites, though you can tweak them if that works best for you.

A Brief Example

Let’s say you create a fresh post. You can give it the tag “cakes” and publish it in your “cooking” category. When you do this, a number of pages will change: the homepage, the cooking category archive page, the cooking tag archive page, the author archive page, and possibly some others.

Each of the pages affected will have to be cleared to avoid stale content being served. These settings make it easier to change the rules to suit your site’s requirements.
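The mechanics can be sketched with tag-based purging, which is roughly how cache plugins track which pages a change affects (a simplified illustration, not LSCache’s code):

```python
# Each cached page is stored with tags describing what it depends on;
# publishing a post purges every page sharing an affected tag.
cache = {
    '/':                  {'home'},
    '/category/cooking/': {'category:cooking'},
    '/tag/cakes/':        {'tag:cakes'},
    '/about/':            {'static'},
}

def purge_by_tags(cache, affected):
    """Drop every cached page whose tags intersect the affected set."""
    return {url: tags for url, tags in cache.items() if not (tags & affected)}

# A new post tagged "cakes" in category "cooking" affects these cache tags:
cache = purge_by_tags(cache, {'home', 'category:cooking', 'tag:cakes'})
print(sorted(cache))  # → ['/about/']
```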

Excludes Tab

You might find you don’t want to cache certain pages. The Excludes Tab options enable you to define which parts of your site should be excluded from caching. It’s unlikely that you’ll need to adjust these settings for most sites, and they’re available so that you can make custom exceptions to caching rules as required.


The Remaining Four or Five Cache Tabs

You will have either four or five remaining Cache tabs (depending on whether you enabled WooCommerce). They cover caching types that are more advanced. Let’s take a closer look as we continue your LSCache configuration guide.


ESI Tab

ESI (Edge Side Includes) is a method that allows you to “punch holes” in public content and fill them with content that is uncached or private. It’s helpful for a number of things, including personalized greetings and shopping cart widgets. It’s disabled by default.
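Conceptually, the cached public page contains placeholders that are filled with freshly rendered private fragments at serve time. A toy Python illustration (the tag syntax mimics ESI includes; this is not LiteSpeed’s implementation):

```python
import re

# A cached public page with two ESI-style "holes"
cached_page = ('Welcome, <esi:include src="greeting"/>! '
               'Cart: <esi:include src="cart"/>')

def render_esi(page, fragments):
    # Replace each placeholder with privately rendered, uncached content
    return re.sub(r'<esi:include src="(\w+)"/>',
                  lambda m: fragments[m.group(1)](), page)

out = render_esi(cached_page, {
    'greeting': lambda: 'Alice',    # per-user greeting, rendered per request
    'cart': lambda: '3 items',      # live shopping cart contents
})
print(out)  # → Welcome, Alice! Cart: 3 items
```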


Object Tab

The Object tab’s settings let you control an external object cache (such as LSMCD, Redis, or Memcached) that has been enabled and configured by the server admin.


Browser Tab

Browser cache is a client-level cache for static files. When it’s switched on, static files (e.g. images) are stored locally on a user’s device the first time they’re requested; afterwards, the content is retrieved from that local storage until the browser cache expires. This tab’s settings control the browser cache.


Advanced Tab

This tab’s name makes it pretty obvious that only users with a little more experience should check it out. You’re unlikely to need it, though you might if you have a conflict with a different cache plugin.


WooCommerce Tab

LiteSpeed Cache can be used with WooCommerce. When you enable WooCommerce, this tab appears, giving you the flexibility to configure settings for caching shop content.


Additional LSCache Plugin Sections

We still have a number of other LSCache plugin sections to explore:


Dashboard

In the LiteSpeed Cache Dashboard, you can view the status of your LiteSpeed Cache and services at a glance, including Low-Quality Image Placeholders, Image Optimization, Cache Crawler, and Critical CSS Generation. You also have options to assess your page load times and page speed score, both of which are vital to user experience.


General

In this section, the settings control your service usage, let you upgrade the plugin automatically, and determine which messages are presented on your dashboard.


CDN

In this section, you can configure a Content Delivery Network for use with WordPress. Don’t worry if you don’t use a CDN: by default, CDN support is disabled.

Image Optimization

With LiteSpeed Cache for WordPress, you can optimize images to make them smaller and less time-consuming to transmit. You can do this via a service, and can control it in this section.

Page Optimization

You can take a number of non-cache measures to speed up your WordPress site, many of which are supported in this tab. For instance, CSS and JavaScript minification and combination, as well as HTTP/2 push, asynchronous and deferred load, etc.

Don’t know what any of this means? That’s fine. By default, they’re disabled anyway, so there’s no need to worry about them.
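To give a sense of what one of these measures does, here is a crude sketch of CSS minification (LSCache’s optimizer is far more capable; this just shows why minified payloads are smaller):

```python
import re

# A crude illustration of CSS minification: strip comments, collapse
# whitespace, and tighten punctuation to shrink the payload.
def minify_css(css):
    css = re.sub(r'/\*.*?\*/', '', css, flags=re.S)   # strip comments
    css = re.sub(r'\s+', ' ', css)                    # collapse whitespace
    css = re.sub(r'\s*([{}:;,])\s*', r'\1', css)      # tighten punctuation
    return css.strip()

src = """
/* header styles */
h1 {
    color: red;
    margin: 0 auto;
}
"""
print(minify_css(src))  # → h1{color:red;margin:0 auto;}
```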


Database

In this section, you can optimize your WordPress database, which is useful for speeding up your site. The DB Optimizer makes executing a number of these cleanup tasks easier.


Crawler

The crawler is disabled by default; when active, it crawls your site and refreshes pages that have expired from the cache. Be aware that crawling can be a resource-intensive process, so not every hosting provider permits it. If yours does, though, it’s an effective way to keep your cache fresh.


Toolbox

The Toolbox section has what you need if you’re looking to export your site settings, purge the cache manually, or debug issues. The Environment Report is likely to be the most helpful tool here.

So, that’s the end of our LSCache configuration guide for newcomers! You should have the details you need to get set up quickly, efficiently, and confidently.

LiteSpeed Cache for WordPress and Plesk

To use the full power of LiteSpeed Cache for WordPress, you need to pair it with the LiteSpeed web server. The Plesk hosting control panel lets you install, configure, and manage LiteSpeed web server easily. To get a better idea of the LiteSpeed-on-Plesk installation process, please read this LiteSpeed installation and configuration guide.