Next Level Ops Podcast: Working with Self-hosting Email with Christian Mollekopf

Hello Pleskians! This week we’re back with the ninth episode of the Official Plesk Podcast: Next Level Ops. Only one more to go and we’re already at the close of Season 1! In this installment, Superhost Joe and Christian Mollekopf from Apheleia IT talk about working with self-hosting email.

In This Episode: Choosing An Email Hosting Provider, Reputation Management and Taking Back Control

What should you consider when choosing an email hosting provider? What are some of the options users have when searching for good email providers, especially if you also want to look at enterprise options? Is it good enough to opt for what your web host offers or to use a service like GSuite? What are some of the things you should think about when going the self-hosting route? In this episode, Joe and Christian discuss how to address options and issues surrounding email hosting. 

“I think usually it [email] is something that you are going to use for quite a long time. It’s like a very central part of your infrastructure typically. So, I think it’s definitely worth considering a couple of options,” says Christian. When choosing the right hosting provider, it’s worth considering the features you require: whether it’s simply email or also calendars and tasks, whether you need shared folders and calendars, and which type of client you want. Another factor to consider is vendor lock-in: if you later want to transfer to another hosting provider, how easy will it be to migrate your data to another system?

If vendor lock-in is a concern for you, the question arises whether you can self-host your email. What happens when you do that? Some common issues to watch out for: making sure that other servers can distinguish between genuine email coming from your server and spam coming from other servers pretending to be yours, ensuring that your server doesn’t send spam, and managing the reputation of your domain. To read some best practices for self-hosting email, go here.

Key Takeaways

  • What should someone consider when choosing an email hosting provider? Your email is probably going to be a central part of your infrastructure and you’ll use it for a long time, so start out by keeping this in mind. The second thing is to consider the features you need, such as a calendar, for example. Do consider your email’s interoperability and vendor lock-in. You should be able to migrate away if you want to.
  • What are the benefits of self-hosting over using a service like Gmail? One word: control. If you self-host, you maintain control over your solution and your email.
  • As a hosting provider, what are some of the pitfalls of hosting email? The biggest pitfall is reputation management. Other services that receive email have to fight a lot of spam, so they track the reputation of sending domains and IP addresses.
  • What features in Plesk help with email hosting? SPF, DKIM, and DMARC are built in, there are UIs for important measures like rate and message size limits, and the Plesk Email Security extension adds anti-spam. Find out more about the features here.
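To make the last takeaway concrete: SPF, DKIM, and DMARC all boil down to DNS TXT records that receiving servers check. The sketch below uses the reserved example.com domain, a made-up DKIM selector (default), and a truncated key, so treat it as an illustration rather than records to copy; Plesk generates the real values for you:

```dns
; SPF: only hosts listed in example.com's MX records may send mail for the domain
example.com.                    IN TXT "v=spf1 mx -all"

; DKIM: the public key is published under <selector>._domainkey (key truncated here)
default._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."

; DMARC: ask receivers to quarantine mail that fails SPF/DKIM alignment
_dmarc.example.com.             IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```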

…Alright Pleskians, it’s time to hit the play button if you want to hear the rest. If you’re interested in hearing more from Next Level Ops, check out the rest of our podcasts. We’ll be back soon with our last installment.

The Official Plesk Podcast: Next Level Ops Featuring

Joe Casabona

Joe is a college-accredited course developer. He is the founder of Creator Courses.

Christian Mollekopf

Christian is a Senior Software Engineer at Apheleia IT.

Did you know we’re also on Spotify and Apple Podcasts? In fact, you can find us pretty much anywhere you get your daily dose of podcasts. As always, remember to update your daily podcast playlist with Next Level Ops.  And stay on the lookout for our next episode!

Top 10 PHP CMS Platforms For Developers in 2020

Choosing the best PHP CMS for the job could be the most important stage of web development. Here are some of the most popular platforms on the market at the moment. What you require from your apps will largely determine which PHP CMS you settle on, because some are more suited to creating a simple dynamic website while others are inherently better at being the building blocks for an eCommerce store that’s packed with functionality for thousands of items.

Not every developer wants to write HTML and CSS to create web pages anymore. More and more would prefer to have much of the legwork done for them, because time is money! If you’re one of those who want to get things done faster, then you need to choose the right tool for the job, and luckily there are many different PHP-based content management systems available to choose from. Here are the top 10:

These CMS platforms make traditional development work a lot less of a chore for the developer. Dynamic websites can swell to include thousands of pages, and when they do it’s much easier to manage the process with the best PHP CMS platform, as it can streamline development work in clever ways.


WordPress

WordPress has risen to become one of the best known and most widely used open-source PHP CMSs. It can accommodate lots of apps and is flexible enough to handle a wide range of different user scenarios. It’s as good at providing the foundation for a basic blog as it is a large e-commerce store, and you only have to look to the 75 million currently active websites that rely on it for confirmation of how universally popular it is.

Since WordPress is an open-source platform, it’s benefited from the ongoing attention of thousands of developers. This is one of the biggest reasons for its rapid evolution and why it’s turned into the preferred choice of many web app developers. It offers the widest selection of additional widgets, themes, and plug-ins, and it can be readily tailored and turned to almost any end.

It also ships with a suite of integrated SEO tools to optimize search engine visibility, and that’s one of the reasons why developers rate it so very highly.


Key stats:

  • WordPress accounts for 76.4% of the CMS market
  • It supports over 68 languages
  • Plug-ins have been downloaded 1.48 billion times
  • WordPress powers many government websites around the world



Cons:

  • Themes and plugins can require annoyingly frequent updates
  • Open source can mean ‘more open to hackers’
  • Customization requires a deep level of understanding


Joomla

Joomla is another one of the best PHP CMS platforms and it’s garnered a reputation for being good for portfolio and blogging websites. It may sit somewhat in the shadow of WordPress, but it still comes with enough high-quality features to create effective blogs and dynamic websites. It meshes well with a few versions of SQL, which means database integration should not be a problem.

This PHP CMS can integrate the site with its hosting provider in just one click and makes the creation of responsive websites a breeze. Its multitude of available designs and extensions make it easy to add extra features to any web apps that you may be designing. As one of the best PHP CMS platforms, Joomla has proved to be popular among big names that include eBay, Barnes & Noble, IKEA, and many others.


Key stats:

  • 6% of all websites rely on Joomla
  • 2 million sites and counting
  • One of the top three CMSs which offer free plug-ins and themes
  • Supports over 64 languages



Cons:

  • Not as SEO-friendly as some PHP CMSs
  • Difficult for non-developers to add custom designs
  • Not many modules for sale
  • Some plug-ins not completely compatible without modification


Drupal

Drupal is one of the best PHP CMS platforms on the market. It’s open-source and well-suited to eCommerce stores, beginning its life as a message board before evolving into one of the most popular PHP-based content management systems. Drupal makes it easy for developers to build enhanced online stores thanks to its rich feature set. It’s ideal for developing modern apps, which is one of the reasons why many developers are drawn to it.

While WordPress functionality can be extended further with plugins, Drupal refers to its add-ons as modules, although it already comes with many features and options. Top companies like NBC, Harvard University, Tesla, Princess Cruises, and MTV UK rely on Drupal for their web operations. It also benefits from active community support.


Key stats:

  • Drupal has around a million users
  • It’s available in over 90 languages
  • Many American government websites are Drupal-powered
  • Acquia spent half a million dollars to accelerate the migration of Drupal 7 modules to Drupal 8
  • Drupal powers around 1 million websites


Pros:

  • The platform can be greatly expanded upon
  • Frequent patches and updates enhance platform security
  • Drupal is well-suited to eCommerce
  • Best PHP CMS for websites with lots of traffic


Cons:

  • Hard to understand for non-developers
  • Not well suited to blogs or other publications
  • Installing custom modules is not easy


OctoberCMS

OctoberCMS is a free, open-source PHP CMS that a great many company websites have been built on. The CMS is flexible, simple, and ready to deliver retina-ready websites and apps.

OctoberCMS is a self-hosted open-source PHP CMS, and you can install it on your own hosting service if you want to. It integrates well with third-party apps and features more than 700 plugins and themes. It has a large and supportive community.


Key features:

  • Own community
  • Ecosystem of plugins & themes
  • Based on Laravel framework


Pros:

  • Open source and free
  • Versatile and extendable
  • Many and varied themes and plugins


Cons:

  • Requires developer input to customize
  • Fewer users than WordPress


Opencart

Opencart is another of the PHP-based content management systems that are ideally suited to the creation of eCommerce websites. It’s open-source, so PHP developers can easily add their own updates, and for users it’s not hard to get to grips with thanks to its intuitive UI. The platform caters to a great many languages and offers unlimited product categories for the biggest inventories out there. Opencart is a well-featured PHP CMS that gives plenty of scope to developers keen to create comprehensively featured online stores.


Key stats:

  • Opencart allows more than 20 ways to pay
  • 12k+ extensions on offer
  • Powers 790k+ websites
  • 95k+ forum members


Pros:

  • Easy to set up and get started
  • Free themes in abundance
  • Thousands of available modules and extensions
  • Makes it easy to set up sites in different managers


Cons:

  • Some technical knowledge needed for customization
  • Not very SEO-friendly
  • Bogs down when web traffic spikes
  • No event system so users can’t set up tasks from within modules


ExpressionEngine

ExpressionEngine is one of the best PHP-based content management systems for sites that need to handle large amounts of content. It is an excellent PHP-based CMS with an architecture that can be modified with custom scripts to introduce additional functions.

Any newly added content becomes visible to the customer straight away. ExpressionEngine is versatile enough that when it creates pages, it does so by pulling content from the database and then formatting it so that every user gets the best available view for their device. This dynamic approach to content generation makes it very flexible.


Pros:

  • Custom edit forms are available. You can navigate and fill them out easily
  • HTML-agnostic template system
  • Preview window to check work before saving changes
  • Integrated SEO for content
  • Excellent security


Cons:

  • Some content boxes in certain templates don’t expand, making navigation and editing difficult
  • Poor developer network support
  • Fewer 3rd party add-ons and plugins


PyroCMS

PyroCMS is one of the best PHP CMSs, and it’s powered by the Laravel framework. Its popularity has been growing thanks to its intuitive backend design and lightweight modular architecture. It was designed to be simple, flexible, easy to learn, and easy to understand. PyroCMS’s modular design gives developers plenty of scope to bring together the right components to suit any given project.


Pros:

  • Versatile PHP CMS can be adapted to any project
  • Readily accommodates third-party APIs and apps
  • Easy to install and learn


Magento

Magento was designed with eCommerce applications in mind, and it’s now the preferred platform for building innovative online stores. Brands such as Ford, Nike, Foxconnect, and many others rely on Magento’s extremely capable eCommerce features to power their sites. The major advantage of using Magento is that it’s tailor-made for designing rich and varied online shopping experiences for customers.

Another part of Magento’s appeal is its great emphasis on security. It uses hashing algorithms for maximum-security password management and has additional defenses to protect apps from attackers. Also, Magento benefits from an active developer community which frequently contributes numerous updates and patches. With Magento 2, the platform has benefited from a variety of enhancements to further strengthen its position as one of the best PHP-based content management systems for online retail.


Pros:

  • The platform is feature-rich enough to power modern eCommerce stores
  • Magento is very accessible
  • The community regularly develops plug-ins and extensions
  • The platform is very scalable and can accommodate big apps


Cons:

  • The premium and enterprise versions are pricey
  • Slightly slower to load than other platforms
  • Only works with dedicated hosting
  • Product support is quite pricey

Craft CMS

Craft is one of the more recent PHP-based content management systems, but its low user count shouldn’t put you off, because it’s tailored towards pleasing developers. If you’re a user, that may be a point against it, but from a developer’s point of view it’s easy to work with.

Craft gives users the scope to create their own front ends, or at least it does in principle, because doing so requires a knowledge of HTML and CSS. Despite that, it offers a clean backend, so it’s relatively easy for content editors to find their desired features and publish content frequently.


Pros:

  • Lightweight
  • Commercial features
  • Developer-centric
  • Highly functional
  • Performs well
  • Effective security


Cons:

  • Pricey
  • More for advanced users
  • Not so many plugins
  • Not open source


TYPO3

TYPO3 is one of the best PHP CMS platforms available. It works on various operating systems including Windows, Linux, macOS, FreeBSD, and OS/2. It’s best suited to powering the portals and eCommerce platforms of large companies and it’s backed by a sizeable community for ongoing support and discussion.

Content and code are handled separately which makes TYPO3 a very flexible proposition for users. With support for over 50 languages and integrated localization built-in, it will fit in with users no matter where they may be in the world. Installation can be completed in just a few steps.


Pros:

  • Sizeable community
  • Flexible with lots of functions
  • Enterprise-level


Cons:

  • Hard to configure
  • Entry-level training is hard to find

How to Market in Times of COVID-19?

The coronavirus situation has been a difficult experience for businesses of all sizes. However, the most vulnerable are small and medium-sized businesses across different industries, particularly those that rely on face-to-face interaction with customers in this challenging time of social distancing and self-isolation.

While many cities are still in lockdown and people keep avoiding public places, most of the nonessential businesses have been forced to close down. This new reality brings significant changes in the overall shopping behavior of people all around the world. Customers start changing what they buy, where, when, and how.

Fortunately, many business owners and managers have recognized the new shopping behavior of their customers, the need to adapt and stay flexible to meet changing necessities, and the importance of an online presence for most shops. Those businesses will need management software, cyber-security measures, as well as digital marketing tools for advertising.

Marketing Tips to Help Your Business

It doesn’t matter if you’re new to marketing or have already developed a marketing strategy for your business. Marketing during hard times requires sensitivity. Here are some general tips on how to market during COVID-19. Let’s go through them:

1. Stay Relevant to Your Consumer’s Needs

One of the best ways to stay relevant in these uncertain times is to tell your customers how your company is dealing with the pandemic. Tell relevant, authentic stories to show your customers that you’re there during these difficult times, and stay hopeful for the future.

2. Be Supportive of Your Customers

Try and show empathy and support in all your communications with your customers. Tell your customers what you’re doing to help the community and try to make their life easier. This will show your customers that they’re in good hands and that they made the right decision in choosing your company.

3. Focus on Your Digital Channels

This is a good time to review and implement new digital strategies for your online business. Here are some key areas to focus on:

  • Reanalyze your overall marketing plan focusing on your story, site, and data.
  • Identify user questions related to your business and address them on your blog.
  • Create your SEO strategy and start optimizing your website. Extensions like SEO Toolkit can help you maximize your search engine traffic.
  • Do some keyword research.
  • Update your Google My Business listings.

4. Be Present on Social Media

Publish content that’s relevant to the reality of your audience, and tell them how your brand can solve their problems. Focus on strengthening your relationship with your customers by answering their questions and sharing their stories. It’s also important to encourage your customers to be kind and help each other by setting an example yourself. You can take a look at other resources from our blog for tips on how to use social media effectively.

5. Incorporate Email Marketing into Your Overall Marketing Strategy

It’s no secret that email marketing is one of the cheapest and most profitable marketing activities you can apply in your marketing strategy. Now is a good time to keep your customers updated about your business, products, and novelties. So, don’t forget to send good vibes and remind people of the good in the world.

6. Offer Discounts and Promotions

Offering discounts in days of crisis can help you not only attract new customers, but also gain loyalty among existing customers, drive traffic to your site, and increase your sales. You can run charitable sales promotions, which can boost conversion as customers may purchase items for themselves while doing good for others.

A Final Word

We know that we’re all somehow being affected by these unprecedented times. And therefore at Plesk, we want to help bring businesses like yours online with our special offers. Take a look at our current plans and find the option that best fits your business. 

Is your business suffering the effects of the pandemic? What changes have you implemented to keep your business afloat? Let us know in the comments below. We’re here to help.

Plesk System Maintenance: How The Command Line Helps Administrators

In this article we provide an overview of how to manage Plesk through the command line and execute scripts or binaries on certain Plesk events. In addition, you will learn how to adjust Plesk settings to fit a new network environment or server configuration, and how to restart Plesk to apply new settings.

Managing Plesk Objects Through the Command Line

The Plesk Command Line Interface (CLI) is designed for integrating Plesk with third-party applications. Plesk administrators can also use it to create, manage, and delete customer and domain accounts and other Plesk objects from the command line. CLI utilities require administrative permissions on the Plesk server to run.

The utilities reside in the following directories:

  • On RPM-based systems: /usr/local/psa/bin
  • On DEB-based systems: /opt/psa/bin

Upon successful execution, utilities return code 0. If an error occurs, utilities return code 1 and display the error details.
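That convention is easy to build on in scripts. The sketch below wraps any CLI utility call and surfaces its exit code; the commented `customer --info` invocation is hypothetical, so check each utility's `--help` output for the flags your Plesk version actually supports:

```shell
#!/bin/sh
# Wrap a Plesk CLI utility call and report its exit status.
# Utilities return 0 on success and 1 (plus error details) on failure.
run_util() {
    "$@"
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "utility failed with exit code $status" >&2
    fi
    return "$status"
}

# Hypothetical usage on an RPM-based system:
# run_util /usr/local/psa/bin/customer --info jdoe
```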

Executing Custom Scripts on Plesk Events

Plesk provides a mechanism that allows administrators to track specific Plesk events and make Plesk execute custom scripts when these events occur. The events include operations that Plesk users perform on accounts, subscriptions, websites, service plans, and various Plesk settings. For example, you can save each added IP address to a log file or perform other routine operations.
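For the IP-address example above, a handler could be as small as the sketch below. Plesk passes event parameters to handler scripts through environment variables; the variable name NEW_IP_ADDRESS and the log path are assumptions to check against the event reference for your Plesk version:

```shell
#!/bin/sh
# Hypothetical event handler: append each IP address added in Plesk to a log file.
# NEW_IP_ADDRESS is assumed to be the parameter Plesk sets for this event.
log_new_ip() {
    log_file="$1"
    printf '%s IP added: %s\n' "$(date '+%F %T')" "${NEW_IP_ADDRESS:-unknown}" >> "$log_file"
}

# Only log when Plesk actually passed the parameter.
if [ -n "${NEW_IP_ADDRESS:-}" ]; then
    log_new_ip "/var/log/plesk-ip-additions.log"
fi
```

Such a script would typically be attached to the matching event in the Plesk Event Manager and must be executable by the system user that runs event handlers.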

Changing IP Addresses in Plesk

During the lifetime of a Plesk server, you may need to change the IP addresses employed by Plesk. Two typical cases when IP addresses may need to be changed are the following:

  • Reorganization of the server IP pool. For example, substitution of one IP address with another.
  • Relocation of Plesk to another server. Changing all addresses used by Plesk (including the one on which Plesk resides) to those on the new server.

Every time the change happens, you should reconfigure all related system services. To help you do this promptly, we offer the reconfigurator command line utility located in the following directory:

  • on RPM-based systems: /usr/local/psa/bin.
  • on DEB-based systems: /opt/psa/bin.

The reconfigurator replaces IP addresses and modifies the Plesk and services configuration to make the system work properly after the replacement. To do this, the utility requires a mapping file that includes instructions on what changes to make. Each line of the file should describe a single change. For example, the following line instructs Plesk to change the IP address to

eth0: -> eth0:

The utility also helps you create the mapping file. If you call the utility with a new file name as an option, it will create the file and write all available IP addresses to it. The IP addresses in the file are mapped to themselves. If you want to perform a change, modify the change instruction for a certain IP address.

When editing the mapping file, consider the following:

  • A replacement IP address must not exist in the Plesk IP pool before the change; however, it may be in the server IP pool. To make sure the IP is not in the Plesk IP pool, go to Server Administration Panel > Tools & Settings > IP Addresses and remove the IP if necessary.
  • If a replacement IP address does not exist in the server IP pool, the utility adds it to both Plesk and server IP pools.
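For illustration only, a finished mapping file might look like the fragment below, which uses documentation-range addresses (192.0.2.0/24): the first line changes an address, the second leaves one mapped to itself. Keep whatever exact syntax (interface prefix, netmask) the utility wrote into the file it generated for you:

```text
eth0: -> eth0:
eth0: -> eth0:
```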

To change IP addresses used by Plesk:

  1. Generate a mapping file with the current Plesk IP addresses by running the command:

./reconfigurator <ip_map_file_name>

  2. Edit the file as described above and save it.
  3. Reconfigure Plesk and its services by running the same command one more time:

./reconfigurator <ip_map_file_name>

Changing Paths to Services

Plesk uses various external components, for example, Apache web server, mail service, antivirus, and so on. When interacting with these components, Plesk gets the information on their locations from the configuration file /etc/psa/psa.conf.

The Plesk configuration file provides an easy way of reconfiguring Plesk if a service is installed into another directory or migrated from the current partition to another one. Note that you can only modify paths present in this file; other paths are hard-coded in Plesk components.

Each line of psa.conf has the following format:

<variable_name> <value>

A sample part of the psa.conf file is displayed below. To change a path to a service, utility, or package, specify the new path as a value of a corresponding variable.

# Plesk tree

PRODUCT_ROOT_D /usr/local/psa 

# Directory of SysV-like Plesk initscripts

PRODUCT_RC_D /etc/init.d

# Directory for config files

PRODUCT_ETC_D /usr/local/psa/etc

# Directory for service utilities

PLESK_LIBEXEC_DIR /usr/lib/plesk-9.0 

# Virtual hosts directory

HTTPD_VHOSTS_D /var/www/vhosts 

# Apache configuration files directory

HTTPD_CONF_D /etc/httpd/conf 

# Apache include files directory

HTTPD_INCLUDE_D /etc/httpd/conf.d 

# Apache binary

HTTPD_BIN /usr/sbin/httpd

#Apache log files directory

HTTPD_LOG_D /var/log/httpd 

#apache startup script


# Qmail directory

QMAIL_ROOT_D /var/qmail

Note: Be very careful when changing the contents of psa.conf. Mistakes in paths specified in this file may lead to Plesk malfunctioning.
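Because every line of psa.conf follows the simple `<variable_name> <value>` format shown above, administration scripts can look paths up instead of hard-coding them. A minimal sketch (the PSA_CONF override exists only so the function can be exercised against a test file; on a real server it defaults to /etc/psa/psa.conf):

```shell
#!/bin/sh
# Look up a path variable from psa.conf by name.
get_psa_var() {
    conf="${PSA_CONF:-/etc/psa/psa.conf}"
    # Print the second field of the first line whose first field matches;
    # comment lines start with '#' and therefore never match a variable name.
    awk -v name="$1" '$1 == name { print $2; exit }' "$conf"
}

# Example: get_psa_var HTTPD_VHOSTS_D prints /var/www/vhosts on a default install.
```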

Restarting Plesk

If you experience problems with Plesk, for example, malfunctioning of a service, you can try to resolve them by restarting Plesk or the administrative web server sw-cp-server. Also, a restart is necessary to apply configuration changes that cannot be made while Plesk is running.

To restart Plesk, run the following command:

/etc/init.d/psa restart

To restart sw-cp-server, run the following command:

/etc/init.d/sw-cp-server restart

Managing Services from the Command Line and Viewing Service Logs

Here we explain how to stop, start, and restart services managed by Panel, and access their logs and configuration files.

Plesk web interface

To start the service through the command line:

/etc/init.d/psa start

To stop the service through the command line:

/etc/init.d/psa stop

To restart the service through the command line:

/etc/init.d/psa restart

Plesk log files are located in the following directories:

  • Error Log: /var/log/sw-cp-server/error_log
  • Access log: /var/log/plesk/httpsd_access_log

Panel configuration files are the following:

  • php: $PRODUCT_ROOT_D/admin/conf/php.ini
  • www: /etc/sw-cp-server/applications.d/plesk.conf

Presence Builder

Log files are located in:

  • Error log: /var/log/httpd/sitebuilder_error.log
  • Logs: /usr/local/sitebuilder/tmp/

Configuration files are accessible at:

  • /usr/local/sitebuilder/config
  • /usr/local/sitebuilder/etc/php.ini


phpMyAdmin

The error log is located in: /var/log/sw-cp-server/error_log

The configuration file is accessible at: /usr/local/psa/admin/htdocs/domains/databases/phpMyAdmin/libraries/config.default.php


phpPgAdmin

The error log is located in: /var/log/sw-cp-server/error_log

The configuration file is accessible at: /usr/local/psa/admin/htdocs/domains/databases/phpPgAdmin/conf/


Courier-IMAP

To start the service through the command line:

/etc/init.d/courier-imap start

To stop the service through the command line:

/etc/init.d/courier-imap stop

To restart the service through the command line:

/etc/init.d/courier-imap restart

Log files are located in: /var/log/plesk/maillog

Configuration files are accessible at:

  • /etc/courier-imap/imapd
  • /etc/courier-imap/imapd-ssl
  • /etc/courier-imap/pop3d
  • /etc/courier-imap/pop3d-ssl

DNS / Named / BIND

To start the service through the command line:

/etc/init.d/named start

To stop the service through the command line:

/etc/init.d/named stop

To restart the service through the command line:

/etc/init.d/named restart

Log files are located in: /var/log/messages

The configuration file is accessible at: /etc/named.conf


FTP (ProFTPd)

Log files are located in: /var/log/plesk/xferlog

Configuration files are accessible at:

  • /etc/xinetd.d/ftp_psa
  • /etc/proftpd.conf
  • /etc/proftpd.include


Postfix

To start the service through the command line:

/etc/init.d/postfix start

To stop the service through the command line:

/etc/init.d/postfix stop

To restart the service through the command line:

/etc/init.d/postfix restart

Log files are located in: /var/log/plesk/maillog

Configuration files are accessible at: /etc/postfix/


Qmail

To start the service through the command line:

/etc/init.d/qmail start

To stop the service through the command line:

/etc/init.d/qmail stop

To restart the service through the command line:

/etc/init.d/qmail restart

Log files are located in: /var/log/plesk/maillog

Configuration files are accessible at:

  • /etc/xinetd.d/smtp_psa
  • /etc/xinetd.d/smtps_psa
  • /etc/xinetd.d/submission_psa
  • /etc/inetd.conf (Debian)
  • /var/qmail/control/


SpamAssassin

To start the service through the command line:

/etc/init.d/psa-spamassassin start

To stop the service through the command line:

/etc/init.d/psa-spamassassin stop

To restart the service through the command line:

/etc/init.d/psa-spamassassin restart

Log files are located in: /var/log/plesk/maillog

Configuration files are accessible at:

  • /etc/mail/spamassassin/
  • /var/qmail/mailnames/%d/%l/.spamassassin

Kaspersky antivirus

To start the service through the command line:

service kavehost start

To stop the service through the command line:

service kavehost stop

To restart the service through the command line:

service kavehost restart

Log files are located in:

  • /var/log/maillog
  • /var/log/mail.log

Configuration files are accessible at:


Odin Premium Antivirus

To start the service through the command line:

/etc/init.d/drwebd start

To stop the service through the command line:

/etc/init.d/drwebd stop

To restart the service through the command line:

/etc/init.d/drwebd restart

Log files are located in: /var/log/plesk/maillog

Configuration files are accessible at: /etc/drweb/


Tomcat

To start the service through the command line:

/etc/init.d/tomcat5 start

To stop the service through the command line:

/etc/init.d/tomcat5 stop

To restart the service through the command line:

/etc/init.d/tomcat5 restart

Log files are located in: /var/log/tomcat5/

Configuration files are accessible at: /usr/share/tomcat5/conf/


MySQL

To start the service through the command line:

/etc/init.d/mysqld start

To stop the service through the command line:

/etc/init.d/mysqld stop

To restart the service through the command line:

/etc/init.d/mysqld restart

Log file is located in: /var/log/mysqld.log

The configuration file is accessible at: /etc/my.cnf


PostgreSQL

To start the service through the command line:

/etc/init.d/postgresql start

To stop the service through the command line:

/etc/init.d/postgresql stop

To restart the service through the command line:

/etc/init.d/postgresql restart

Startup log is located in: /var/lib/pgsql/pgstartup.log

The configuration file is accessible at: /var/lib/pgsql/data/postgresql.conf


xinetd

To start the service through the command line:

/etc/init.d/xinetd start

To stop the service through the command line:

/etc/init.d/xinetd stop

To restart the service through the command line:

/etc/init.d/xinetd restart

Log files are located in: /var/log/messages

The configuration file is accessible at: /etc/xinetd.conf

Watchdog (monit)

To start the service through the command line:

/usr/local/psa/admin/bin/modules/watchdog/wd --start

To stop the service through the command line:

/usr/local/psa/admin/bin/modules/watchdog/wd --stop

To restart the service through the command line:

/usr/local/psa/admin/bin/modules/watchdog/wd --restart

Log files are located in:

  • /var/log/plesk/modules/watchdog/log/wdcollect.log
  • /var/log/plesk/modules/watchdog/log/monit.log

Configuration files are accessible at:

  • /usr/local/psa/etc/modules/watchdog/monitrc
  • /usr/local/psa/etc/modules/watchdog/

Watchdog (rkhunter)

Log is located in: /var/log/rkhunter.log

The configuration file is accessible at: /usr/local/psa/etc/modules/watchdog/rkhunter.conf


Apache

To start the service through the command line:

/etc/init.d/httpd start

To stop the service through the command line:

/etc/init.d/httpd stop

To restart the service through the command line:

/etc/init.d/httpd restart

Log files are located in:

  • /var/log/httpd/
  • /var/www/vhosts/<domain_name>/statistics/logs/

Configuration files are accessible at:

  • /etc/httpd/conf/httpd.conf
  • /etc/httpd/conf.d/
  • /var/www/vhosts/<domain_name>/conf/httpd.include


Mailman

To start the service through the command line:

/etc/init.d/mailman start

To stop the service through the command line:

/etc/init.d/mailman stop

To restart the service through the command line:

/etc/init.d/mailman restart

Log files are located in: /var/log/mailman/

Configuration files are accessible at:

  • /etc/httpd/conf.d/mailman.conf
  • /usr/lib/mailman/Mailman/
  • /etc/mailman/sitelist.cfg


Plesk daily maintenance script

To start the service through the command line:

/usr/local/psa/bin/sw-engine-pleskrun /usr/local/psa/admin/plib/DailyMaintainance/script.php

Configuration files are accessible at:




Backup Manager

Backup logs are located in:

  • /var/log/plesk/PMM/sessions/<session>/psadump.log
  • /var/log/plesk/PMM/sessions/<session>/migration.log
  • /var/log/plesk/PMM/logs/migration.log
  • /var/log/plesk/PMM/logs/pmmcli.log

Restore logs are located in:

  • /var/log/plesk/PMM/rsessions/<session>/conflicts.log
  • /var/log/plesk/PMM/rsessions/<session>/migration.log
  • /var/log/plesk/PMM/logs/migration.log
  • /var/log/plesk/PMM/logs/pmmcli.log

The configuration file is accessible at:


Plesk Migration Manager

Migration logs are located in:

  • /var/log/plesk/PMM/msessions/<session>/migration.log
  • /var/log/plesk/PMM/rsessions/<session>/migration.log
  • /var/log/plesk/PMM/rsessions/<session>/conflicts.log
  • /var/log/plesk/PMM/logs/migration.log
  • /var/log/plesk/PMM/logs/pmmcli.log
  • /var/log/plesk/PMM/logs/migration_handler.log


Webmail (Horde)

Log is located in:


Configuration files are accessible at:

  • Apache configuration
    • /etc/httpd/conf.d/zzz_horde_vhost.conf
    • /etc/psa-webmail/horde/conf.d/
  • Horde configuration:
    • /etc/psa-webmail/horde/


Webmail (Atmail)

Log files are located in:


Configuration files are accessible at:

  • Apache configuration
    • /etc/httpd/conf.d/zzz_atmail_vhost.conf
    • /etc/psa-webmail/atmail/conf.d/
  • Atmail configuration:
    • /etc/psa-webmail/atmail/atmail.conf
    • /var/www/atmail/libs/Atmail/Config.php


psa-firewall

To start the service through the command line:

/etc/init.d/psa-firewall start

To stop the service through the command line:

/etc/init.d/psa-firewall stop

To restart the service through the command line:

/etc/init.d/psa-firewall restart

Configuration files are accessible at:

  • /usr/local/psa/var/modules/firewall/

psa-firewall (IP forwarding)

To start the service through the command line:

/etc/init.d/psa-firewall-forward start

To stop the service through the command line:

/etc/init.d/psa-firewall-forward stop

To restart the service through the command line:

/etc/init.d/psa-firewall-forward restart

Configuration files are accessible at:

  • /usr/local/psa/var/modules/firewall/
  • /usr/local/psa/var/modules/firewall/ip_forward.saved

Moving the Plesk GUI to a Separate IP Address

By default, the Plesk GUI can work on all IP addresses available on the Plesk server (from the server’s IP pool). You may want to allow access to the Plesk GUI only from the local network. For that, you should move the GUI to an internal IP address.

To move Plesk GUI to a separate IP address, in the configuration file /etc/sw-cp-server/conf.d/plesk.conf, replace the lines

listen 8443 ssl;
listen 8880;

with the lines

listen SPECIFIC_SERVER_IP:8443 ssl;
listen SPECIFIC_SERVER_IP:8880;

where SPECIFIC_SERVER_IP is the new IP address that you want to use for the Plesk GUI.

Do not change the ports.

Switching Off Automatic Integration of WordPress Installations

If you are using the WordPress Toolkit extension, it detects new installations performed through the Application Catalog (or Application Vault) and integrates them with WordPress Toolkit. For this reason, installation of WordPress on a site takes up to 20 seconds. If you want to avoid this, you can switch off automatic detection of new installations by the WordPress Toolkit.

To do this, add the following lines to the panel.ini file:

[ext-wp-toolkit]
autoAttachApsInstances = off

Turning Off WordPress Toolkit

If you are using the WordPress Toolkit extension, you can completely switch it off on your server.

To switch off WordPress Toolkit, add the following lines to the panel.ini file:

[ext-wp-toolkit]
enabled = false

MySQL Performance Tuning

Many databases, and particularly relational databases, rely on Structured Query Language (SQL) for data storage, manipulation, and retrieval. Developers who want to create, update, or delete data can do so easily with SQL statements. That said, the sheer amount of data being moved around keeps growing at an overwhelming rate, and workloads are constantly changing too. So while SQL statements are useful, there is an ongoing, pressing need for MySQL performance tuning. Moving and processing data swiftly and efficiently is crucial if we hope to deliver an excellent end-user experience while keeping costs as low as possible.

So, developers who want to seek out and eliminate hold-ups and inefficient operations must turn to MySQL performance tuning tools, which can help them with execution plans and remove the guesswork. MySQL performance tuning may be important, but it isn't necessarily easy. In fact, a few aspects of the process make it a difficult undertaking for developers. MySQL optimization requires sufficient technical prowess, i.e. enough knowledge and skill to comprehend and create a variety of execution plans, and that can be quite off-putting.

As if being tricky weren't enough, MySQL optimization also takes time and energy. When you're faced with a whole array of SQL statements to wade through, you face a problem with a built-in degree of uncertainty. Each statement needs to be carefully considered during MySQL performance tuning. First, you need to decide which ones to amend and which to leave alone; then you need to work out what approach to take with each one you select, because they'll all need different treatment depending on their function. That's why we are going to discuss several tips and techniques that will help you approach MySQL performance tuning without getting snowed under by the sheer weight of statements.

The Benefits of MySQL Optimization

MySQL performance tuning is essential for keeping costs down. If you can use the right-sized server for the job, then you won’t be paying for more than you need, and if you can understand whether moving data storage or adding extra server capacity will lead to MySQL performance improvements then that helps efficiency too. MySQL tuning can be challenging, but it’s worth the time that it takes because an optimized database has greater responsiveness, delivers better performance, and offers better functionality.

MySQL Query Optimization Guidelines

Here are some useful tips for MySQL tuning. They are a great addition to your collection of best practices.

Make sure that the predicates in WHERE, JOIN, ORDER BY and GROUP BY clauses are all indexed

WebSphere Commerce points out that SQL performance can be improved significantly by predicate indexing, because unindexed predicates can result in table scans that culminate in locking and other difficulties. That's why we highly recommend indexing all predicate columns for better MySQL optimization.
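
As a sketch, assuming a hypothetical orders table queried by customer and sorted by creation date, indexing the predicate columns might look like this:

```sql
-- Hypothetical table: queries filter on customer_id and sort on created_at
SELECT order_id, total
FROM orders
WHERE customer_id = 42
ORDER BY created_at;

-- Index the predicate and sorting columns so MySQL can avoid a full table scan
CREATE INDEX idx_orders_customer_created ON orders (customer_id, created_at);
```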

Keep functions out of predicates

The database won’t use an index if there’s a predefined function.

For instance:
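
The original example appears to be missing here; a query of the kind being described (the table name TABLE1 is assumed for illustration) would look something like:

```sql
SELECT *
FROM TABLE1
WHERE UPPER(COLONE) = 'ABC';  -- the UPPER() call on COLONE prevents index use
```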


The UPPER() function means that the database won't use the index on COLONE. If that function can't be avoided in the SQL, you'll need to create a function-based index or add custom columns to the database in order to see improved MySQL performance.

Remove non-essential columns from the SELECT clause

Rather than use ‘SELECT *’, always specify columns for the SELECT clause, because unneeded columns add extra load on the database, hindering its performance and causing knock-on effects to the whole system.
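
For example (table and column names are hypothetical):

```sql
-- Avoid: fetches every column, including ones the application never reads
SELECT * FROM customers;

-- Better: name only the columns you actually need
SELECT customer_id, email FROM customers;
```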

Try not to use a wildcard (%) at the start of a predicate

The predicate LIKE ‘%abc’ will cause a full table scan, i.e.:
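
The original example appears to be missing here; reconstructed with an assumed table name, it would look something like:

```sql
SELECT *
FROM TABLE1
WHERE COLONE LIKE '%abc';  -- the leading % forces a full table scan
```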


This kind of wildcard use can slow down performance significantly.

Use INNER JOIN instead of OUTER JOIN where you can

Only use outer join where you absolutely need to. If you use it when you don’t need to then you’ll be putting the brakes on database performance due to slower execution of SQL statements and negative effects on MySQL optimization.
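
As an illustration (table and column names are hypothetical):

```sql
-- Only rows with a matching customer are needed, so an inner join suffices
SELECT o.order_id, c.email
FROM orders o
INNER JOIN customers c ON c.customer_id = o.customer_id;

-- A LEFT (outer) join here would also return orders with no matching
-- customer, doing extra work for rows this query doesn't need
```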

Use UNION and DISTINCT only when needed

If you use the UNION and DISTINCT operators when other options are available you’ll be needlessly adding excessive sorting which slows down SQL performance. Try using UNION ALL instead for better MySQL performance.
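
A sketch of the difference (table names are hypothetical):

```sql
-- UNION deduplicates, which forces an extra sort/distinct step
SELECT city FROM customers
UNION
SELECT city FROM suppliers;

-- If duplicates are acceptable (or impossible), UNION ALL skips that work
SELECT city FROM customers
UNION ALL
SELECT city FROM suppliers;
```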

Use ORDER BY only when you need sorted results

ORDER BY sorts the result set by the specified columns. Although this is advantageous for database admins who want data sorted, it comes at a cost to MySQL performance: to produce the final result set, the query has to sort the data first, which is quite a convoluted and resource-intensive SQL operation.

Don’t Use MySQL as a Queue

Queues can sneak up on you and slow down your database performance. For example, any time you set a status on a specific item so that a 'relevant process' can gain access to it, you create a queue without knowing it. This just adds pointless extra load for accessing the resource.

Queues are a problem for two reasons: they cause your workload to be processed in an inefficient serial fashion rather than a more efficient parallel one, and they frequently lead to a table that mixes work in progress with data from jobs that have already been completed. This slows down the app and also hinders MySQL performance tuning.

The Four Fundamental Resources

A properly functioning database requires four fundamental resources: a CPU, hard drive, memory, and network. Problems with any one of them will negatively affect the database, so it's important to choose the right hardware and make sure it's all functioning properly. In practical terms, this means that if you're going to invest in a powerful CPU, don't try to cut corners with less memory or slower storage. A setup is only as good as its slowest component, and if the components aren't balanced, the result will be MySQL performance bottlenecks. Investing in more memory is probably the most cost-effective way of improving performance, as memory is inherently faster than disk-based storage. If all operations can be held in memory without resorting to disk, processes will speed up considerably.

Pagination Queries

Applications that paginate tend to slow the server. By giving you a results page with a link to the next one, these apps usually approach grouping and sorting in ways that don't use indexes, applying LIMIT with an offset, which places an extra burden on the server and then discards rows.

Adjusting the user interface itself will assist with optimization.  Instead of listing all pages in the results and linking to each page, you can just include a link to the next page. This also stops users wasting time on incorrect pages.

In terms of queries, rather than using LIMIT with offset, you can select one more row than you require, and when someone clicks the ‘next page’ link, you can set that last row as the start of the next set of results. For example, if the user looked at a page with rows 201 to 220, select row 221 as well; for the next page to be rendered, you would query the server for rows greater than or equal to 221, limit 21.
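
The approach described above might look like this in SQL (table and column names are hypothetical):

```sql
-- Offset pagination: the server still reads and discards the first 200 rows
SELECT * FROM posts ORDER BY post_id LIMIT 20 OFFSET 200;

-- Keyset ("seek") pagination: fetch one extra row and remember where you stopped
SELECT * FROM posts
WHERE post_id >= 221      -- the extra row fetched with the previous page
ORDER BY post_id
LIMIT 21;                 -- 20 rows to show plus 1 to seed the next page
```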

MySQL Optimization—Subqueries

In terms of subqueries, it’s better to use a join where you can, at least in current versions of MySQL.

The optimizer team is doing a lot of work on subqueries, so subsequent versions of MySQL may ship with additional subquery optimizations. It's worth keeping an eye on which optimizations end up in each version and what their effects are. In other words, my advice to lean towards a join may not hold forever. Servers are getting more and more intelligent, and the cases where you need to tell them how to do something, rather than what results to return, are shrinking.
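
As an illustration (table and column names are hypothetical), a subquery of this shape can often be rewritten as a join:

```sql
-- Subquery form
SELECT *
FROM orders
WHERE customer_id IN (SELECT customer_id FROM customers WHERE country = 'DE');

-- Equivalent join, which older MySQL versions typically execute more efficiently
SELECT o.*
FROM orders o
INNER JOIN customers c ON c.customer_id = o.customer_id
WHERE c.country = 'DE';
```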

Use Memcached for MySQL Caching

Memcached is a distributed memory caching system that improves the speed of websites relying on big dynamic databases. It does this by keeping database objects in memory, cutting server load whenever an outside data source asks for a read. A Memcached layer reduces the number of requests the database has to handle.

Memcached stores each value (v) with a key (k), and then retrieves them without needing to parse database queries, making the whole process much more streamlined.


MySQL tuning (as well as MariaDB tuning) may be time-consuming and demanding, but it's one of the hurdles you need to take in your stride if you want your users to receive the best possible experience. Poor database performance can be helped by investing in the best hardware and keeping it balanced, but even with the best CPUs and the fastest memory and SSDs on the market, there is still additional performance to be gained from taking the time to implement proper MySQL optimization. It can be a laborious task for developers, but the performance enhancements and efficiency savings are well worth it. Keep these tips close to hand and refer to them often. They're not all-encompassing, but they are a handy starting point for your journey into MySQL tuning.

5 Essential Practices to Unlock Your Staging Environment’s Full Potential

From developers writing the code to end-users getting the product, a software development lifecycle consists of many environments. In this post, we’re going to discuss the staging environment. We’re also going to talk about the importance, best practices, limitations, and alternatives to this environment.

Staging is the replica of the production environment: we run the code on a server with the same architecture as production rather than on a local machine. Because the environment mirrors live conditions, we can look out for any bugs and issues. Adjustments and polishing are made in this phase before the product goes to production. This environment is also useful for showing the client a live demo.

Why Use a Staging Environment?

Skipping staging is easy; we have seen many startups and big companies do it. But are we really ready to face the losses that come from skipping this step? Some argue that a functional testing framework can help remove bugs and issues. But can 2-3 people manually scripting these tests account for every possibility and iteration?

End-users have almost zero patience with poorly performing apps, so we need to provide them with the best possible product. Staging is essential for having confidence in the code we write, the product we ship, and the infrastructure we provide. With a staging environment, we already have real interactions, making it easier to test the countless iterations and possibilities.

A staging environment is essential to create sophisticated and valued software and give clients value for both their time and money.

Tests Performed On a Staging Environment

Staging consists of two main tests performed to eliminate bugs and issues to the maximum extent:

  • Smoke Test: New builds added to the existing software undergo smoke testing. The main aim is to check whether the major functionalities work correctly. After smoke testing, we decide whether to send the software for further testing or return it to the developers for more debugging.
  • User Acceptance Testing (UAT): Developers code the software based on their understanding of the requirements, but does the software have everything the client wants? User Acceptance Testing in a staging environment helps us answer that question. The end-user or the client performs this test to see whether their requirements are met without compromises or drawbacks.

Staging Environment: Best Practices to Follow

Let's be honest: a staging environment setup costs more. And without an excellent setup, things get out of hand pretty quickly. We must bring the staging environment as close as possible to the production environment to avoid chaos.

The following are some of the fundamental practices that unlock the staging environment’s power to its full potential.

1. Staging Should Be an Exact Replica of Production:

The value of a staging environment depends on how well it matches the production environment. We must make sure that every build or release goes through it. Mismatch in the configurations of staging and production will always lead to catastrophic results.

For example, consider that a newly developed build goes into the staging environment. We get clearance from staging, and we deploy the code to production without a second thought. Suddenly there is a complete outage in the product, and you don't know the reason for it. The answer is a mismatch between the configurations of the two environments.

Can our staging environment hold up under the real-time traffic that our production handles? Does our staging environment have the same set of systems and services as production? These are the most critical questions we must ask ourselves to gauge the value of a staging environment. If the answer is yes, then we are good to go.

2. Use Data to Test Iterations and Possibilities:

How many times have we seen empty tables in the staging area? Countless. Empty tables give us no information about the user experience. We can take the help of dummy data, with which we can test some, but not all, iterations and possibilities. The quantity and quality of the data available for testing in a staging environment are crucial.

When the testing teams work with dummy data, their capacity is limited because they only have access to dummy test accounts. However, we can add a whole new dimension by bringing in real users and getting them to execute tasks on the product directly. This addition adds a lot of clarity to the process.

We can also release code into staging daily or weekly, which makes this a lot easier. We can tackle data quality by making the staging product primary and channeling real-life tasks through it rather than the production version. Since we update builds and releases on a daily or weekly basis, users can try out new features, enhancements, and bug fixes.

This step may not generate the load that production usually gets. Still, we benefit from the various use cases constantly triggered in staging, which mirrors production. We can easily trace high-impact issues and bugs in the software.

3. Constant Monitoring and Updating:

To be the closest replica of production, staging needs to be monitored and updated at all times with extreme attentiveness. Every new build, release, and update goes through staging before entering production, making it very important to monitor and keep up to date. Even the tiniest changes should go through the staging area, no matter how much else we have to do.

As we have seen in the last step, it is apparent that we may have to push the code into staging regularly. Monitoring helps us in observing the patterns and errors that are present in the product. We can get a clear idea of what has to be improved and what has to be maintained.

When we provide users with regular updates, we get a huge number of use cases triggered spontaneously and simultaneously. We must identify the significant issues popping up and eliminate them before the product goes into production and causes an outage.

However, we must be careful not to jeopardize critical user data in a staging environment. Real users' emails and personal details, for example, should not be mixed up with staging data.

If you’re looking for more resources on monitoring, updates, and performance, check out Part 1 and Part 2 of our DevOps Cycle series.

4. Don’t Hurry Through the Staging Area:

In some companies, a project developed for over six months will get rushed in a matter of days in the staging area. This will lead to insufficient testing, which will lower the value of the staging environment. The testing department should be provided with enough time to deploy products with fewer issues.

Problems like data corruption and data leakage often take time to show up. So, rushing the staging environment will lead to corruption and leakage in the production environment. These problems will lead to a complete outage of the application. We can avoid these catastrophic problems if we give a staging environment enough time.

5. Use Performance Metrics:

We must check our environments against as many performance parameters as possible, including the chaotic ones. When we deploy the product, we may encounter surprises and chaotic situations like crashing servers, DoS attacks, network outages, etc. We can benefit a lot from including these elements of surprise in our testing framework.

Even though all the parameters can’t be replicated practically, we must ensure that we have tested the application for maximum possible scenarios.

Don’t Forget the Limitations

Like everything else, a staging environment has some limitations and drawbacks. These occur mainly because the environment is limited and can mismatch production. If the staging configuration does not match production, there is a chance of pushing buggy code into production, leading to many problems. Double-checking the settings before deployment helps us overcome this limitation.

Even if we replicate production exactly, it is impossible to load-test staging with production traffic. This difference may produce slight turbulence when we release the product. And due to the limited scope of the staging environment, data corruption and data leaks may surface late, which is bad if we have already deployed the code thinking there are no issues.

To sum it up, by pushing code directly from development to deployment, we create uncertainty. This uncertainty puts a company's reputation and value at risk. A staging environment safely eliminates this risk; its importance easily outweighs its limitations. We must ensure the best user experience by bringing the staging environment as close as possible to production.

So, how closely does your staging environment match your production environment? Do you have any suggestions we missed for making the staging environment more efficient? Let us know in the comments below!

Next Level Ops Podcast: Solving the Most Common WordPress Problems with Lucas Radke

Hello Pleskians! This week we’re back with the eighth episode of the Official Plesk Podcast: Next Level Ops. In this installment, Superhost Joe and Product Wizard Lucas Radke talk about common WordPress problems and what hosting providers and users can do about them.

In This Episode: Noisy Neighbors, Fixing WordPress Problems, and What Hosting Providers Can Do

What are the most common WordPress problems for hosting providers? In what domains do common WordPress problems fall for most users? How much does WordPress itself mitigate these problems and what can hosting providers and users do? In this episode, Joe and Lucas discuss the three main areas under which WordPress problems usually fall — performance, updates, and security. You can have noisy neighbors when an environment is shared by too many users, impacting your website’s performance. 

Frequent updates are also often a pain point, as outdated plugins and themes can lead to security issues. Hosting providers should ideally provide solutions for this, otherwise it can lead to backdoors that compromise websites. For instance, Smart Updates for Plesk WordPress Toolkit analyzes WordPress updates, identifying and performing changes without breaking the production site. It also notifies users of any potentially critical updates.

It’s essential for users to be proactive about potential issues from their side, especially non-tech-savvy users. What can users do to ensure they are taking the right precautions? The first thing is to make sure they use a trusted web host who provides a secure hosting environment. Recently, WordPress has also placed increasing emphasis on security and recommends some basic protections: limiting access, keeping backups, updating regularly, and installing plugins and themes from trusted sources. For WordPress, security is about risk reduction.

“The great and terrible thing about WordPress is the amount of freedom you have. The freedom to set up whatever website you want considerably cheaply. But also the freedom to cause problems for either yourself, your client or your hosting provider,” says Joe, “Because if you’re on a shared host and your website is compromised, then it’s possible that other websites are compromised as well.”

Key Takeaways

  • What are some of the actions hosting providers can take to fix common WordPress issues? Hosting providers are responsible for how well the site performs. Users may expect high performance without paying the price for it. Many users install plugins to help with the performance or security of their website. The hosting provider has to make sure that plugins are updated and that there are no open doors for hackers. It’s also essential that hosting providers have a properly trained support team, specialized in solving WordPress issues.
  • What can users do to minimize some frequent WordPress problems? Being proactive is very important for users, along with staying informed about what’s happening in the community from a security perspective. Which plugins are having potential issues? What are some of the security issues coming up in the WordPress community? Seeking out the information that helps reduce security risks should be a priority, especially for non-tech-savvy users.
  • To what extent does WordPress mitigate these problems? WordPress has recently increased its security focus. It’s forcing stronger passwords; it’s verifying email addresses; it has a Site Health checker and troubleshooter performing checks on users’ WordPress installations; and it sets other criteria for running WordPress sites securely.
  • Which plugins can mitigate some of the issues? iThemes Security is a useful plugin. Smart Updates for Plesk’s WordPress Toolkit has some cool features. WordPress Toolkit checks for updates for plugins, themes, and core. It can automatically perform updates if you choose to do so. Smart Updates makes sure that the proper changes are identified and implemented without breaking the live site.

…Alright Pleskians, it’s time to hit the play button if you want to hear the rest. If you’re interested in hearing more about WordPress hosting, check out this Next Level Ops episode. We’ll be back soon with the next installment.

The Official Plesk Podcast: Next Level Ops Featuring

Joe Casabona

Joe is a college-accredited course developer. He is the founder of Creator Courses.

Lucas Radke

Lucas is a Product Manager at Plesk.

Did you know we’re also on Spotify and Apple Podcasts? In fact, you can find us pretty much anywhere you get your daily dose of podcasts. As always, remember to update your daily podcast playlist with Next Level Ops.  And stay on the lookout for our next episode!

Discovering the Plesk WordPress Toolkit: Behind the Scenes

It goes without saying that WordPress is the most popular CMS in the world today. In fact, 37.8% of all websites use WordPress as a CMS. And considering that in 2020 there are over 1.7 billion active websites globally, almost 40% is quite an impressive figure (right?). That said, it’s no wonder that WordPress also dominates application installations in Plesk, which is where our beloved WordPress Toolkit comes in.

Additionally, this month we’re celebrating that Plesk WordPress Toolkit has reached more than 1,600,000 WordPress websites across all Plesk versions and platforms. And we’re proud to say that for us, this milestone is huge. But, of course, it doesn’t end here. We’re looking forward to increasing this number and continuing development by addressing our users’ needs. So, if these numbers have impressed you, read the rest of the article for more interesting facts.

Biggest WordPress Toolkit Feature Releases in 2020

Plesk WordPress Toolkit is one of our most treasured products. It might be because its all-in-one solution handles all WordPress installations from one single dashboard. And because it simplifies your daily workload and makes your life as a WordPress user much easier. While making sure your site is updated and protected against cyber threats. We understand – we love it too!

While 2020 has been an unusual year in many other ways, our super team behind the WordPress Toolkit strives to deliver an enhanced product with every release. Let’s recap the major updates since the beginning of this year:

Developing WordPress Toolkit for cPanel

Whilst the 2019 releases were mainly focused on radical improvements to our premium Smart Updates, 2020 has been the year of developing WordPress Toolkit for cPanel. In fact, we had a very good start with this ambitious project, and by the 4.8 release we had made WordPress Toolkit on cPanel almost feature-complete. Nonetheless, we still need to be patient before WordPress Toolkit for cPanel is available to the public. But we can assure you that the finish line is closer every day.

CLI for Smart Updates

After adding CLI support for existing features such as cloning and data copy earlier this year (find out more here), it was Smart Updates’ turn. In WordPress Toolkit 4.8 we added the first part of the Smart Updates CLI, allowing hosters to enable and disable Smart Updates on a site.

Website URL Update

One of the frequent cases our partners encounter is the migration of websites to their servers by customers. WordPress stores the website URL in its database – and sometimes, in the configuration file. Therefore, such migrations require some manual tinkering to make the website work as usual. To help users, we added the ability to perform this action with a couple of clicks straight from the WordPress Toolkit user interface. This feature is called “Update Site URL.”

Disable wp-cron.php Execution

To make it easier to disable wp-cron.php, we added a one-click switch on each website’s card. Turning the switch on automatically creates a scheduled task that runs wp-cron.php every 30 minutes, and disables the default wp-cron execution by adding a specific line to the wp-config.php file. Pretty useful indeed.

Default WordPress Installation Language

Finally, in 2020 we also delivered this quite handy functionality. Now, server administrators can open global WordPress Toolkit settings and choose a language that should be selected for all WordPress installations on the server by default. Users installing WordPress can choose a different language if they want, obviously.

Did You Know? – The Team Behind It All

All these great achievements wouldn’t have been possible without our technical team. And to recognize their hard work and commitment throughout these years, we want to dedicate some time to them. So, let us introduce you to Andrey Kugaevsky, Product Manager at Plesk, aka the WordPress Paladin. We’re sure you’ve probably heard Andrey before on one of our official Next Level Ops Podcast episodes or read one of his articles on our blog.

Andrey and his team work hard to make WordPress Toolkit the star of the show. With that in mind, we’re inviting you to meet the team behind our beloved product. Let’s hit the play button:

Your Feedback is Also Essential

And of course, our technical team wouldn’t be able to achieve so much if it weren’t for our users’ contributions. There are different ways you can use your voice and help Andrey and his team make the WordPress Toolkit even better. Our Program Managers are in constant contact with support teams, gathering information before choosing a new product feature for implementation. And for some top features, they test hypotheses on-site or create surveys and send them to customers for review.

If you have feedback on WordPress Toolkit or ideas on how to improve it, making it more useful to you and your clients, you can check out this article to find out more about how to contribute.

Get Started with Our Current Offers

Now that you know a little bit more about what’s going on behind closed doors, you may want to give Plesk WordPress Toolkit a try. Currently, we’re offering 6 months free for WordPress Toolkit on a yearly subscription, including remote management for agencies. Additional details about these offers can be found here.

Or if you’re already familiar with our product and your curiosity got you this far, why don’t you tell us about your experience with Plesk? You can let us know in the comments below. We’re all ears!

Next Level Ops Podcast: Tips for Scaling Your Hosting with Jan Loeffler

Hello Pleskians! This week we’re back with the seventh episode of the Official Plesk Podcast: Next Level Ops. In this installment, Superhost Joe welcomes back Jan Loeffler, Plesk’s CTO and Tech Mage, to talk about optimizing and scaling your hosting.

In This Episode: the TikTok Effect, Jan’s Downtime Checklist and When to Scale

What do we mean by scaling and why should you be thinking about it? What do you do if you suddenly become popular on TikTok and visitors are streaming to your website? Before you scale online, what is the first thing you should be doing? Jan and Joe answer these questions and more in the latest Next Level Ops episode.

Avoiding downtime is the first thing you should be considering, according to Jan. “Downtime is the worst problem for your business. Because that means that customers are not able to visit your site anymore,” says Jan. “Most of the downtime is not happening due to the hosting stack or the hosting infrastructure. Usually, downtime happens more often from the user.” 

Before you consider scaling and performance tuning, make sure that you have a process in place for:

  • Disaster recovery and creating regular backups.
  • Not making changes on a live site and using tools that provide you with test environments.
  • Making sure that your website is fast because businesses lose revenue when sites take more than 3 seconds to load.
  • Not skimping on hardware and always making sure that you have enough server capacity left.
  • Profiling your server and site activity by using performance monitoring tools to find out where your bottlenecks are.
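For the last two points, a quick spot check with curl (the URL below is a placeholder) already shows where the time goes on a single request, before you reach for a full monitoring tool:

```shell
# Break one request down into DNS, TLS, time-to-first-byte and total time;
# a slow first byte usually points at the application, a slow total at payload size
curl -s -o /dev/null \
     -w 'DNS: %{time_namelookup}s  TLS: %{time_appconnect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' \
     https://example.com/
```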

To get the best out of scaling your hosting, make sure you follow Jan’s Downtime Checklist above. And remember, “It’s also like running a marathon. You shouldn’t always run at the limit because afterwards you’ll get a cold.” says Jan. 

Wise words. 

To check out Jan’s previous feature, go here to learn all about optimizing your website (and get bonus training tips for your next big marathon).

Key Takeaways

  • What’s the Downtime Checklist? Before scaling and tuning websites, make sure that the user is not contributing to downtime. Have access to regular backups, test environments, good hardware and monitoring tools.
  • Speeding up your website and caching. Everything that helps you reduce database calls is your first priority. The second priority is to reduce PHP processing. It’s even faster when you don’t need to call your web server at all, which is possible with a Content Delivery Network (CDN). You can also use Speed Kit to speed up your website.
  • Scaling your website. A website should usually be able to handle 200 requests per second. If you’re scaling your business or brand, determine whether you need a static or a dynamic website. If you run an ecommerce website, then you need horizontal scaling.
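One simple way to see whether a CDN (or the browser) is even allowed to cache your pages is to inspect the response headers; the URL below is a placeholder:

```shell
# Show the caching-related headers a page sends; without a Cache-Control or
# Expires header that permits caching, a CDN must forward every request upstream
curl -sI https://example.com/ | grep -iE '^(cache-control|expires|age|x-cache):'
```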

…Alright Pleskians, it’s time to hit the play button if you want to hear the rest. If you’re interested in hearing more about site optimization, cloud services and WordPress hosting, check out the rest of our Next Level Ops episodes. We’ll be back soon with the next installment.

The Official Plesk Podcast: Next Level Ops Featuring

Joe Casabona

Joe is a college-accredited course developer. He is the founder of Creator Courses.

Jan Loeffler

Jan is the Chief Technical Officer at Plesk.

As always, remember to update your daily podcast playlist with Next Level Ops. And stay on the lookout for our next episode!