New Relic – Application Performance Monitoring with Plesk

New Relic Plesk extension

We have great news! From today onwards, you can get detailed performance data of your Plesk web infrastructure and applications with the New Relic extension on Plesk. Read on to find out how this newly-updated, handy extension can benefit you, your infrastructure and applications.

Post updated on April 26, 2018

What’s New Relic?

New Relic is an analytics software company that specializes in monitoring and analyzing web servers and web applications. With New Relic, you gain access to plenty of analytics tools to measure and monitor performance bottlenecks, throughput, network graphs, server health and much more – all in near real-time.

The New Relic Plesk extension seamlessly integrates with your Plesk server, and supports the two most important tools by New Relic:

  • APM for Application Performance Monitoring and Management
  • INFRASTRUCTURE for Server Monitoring

Without a doubt, New Relic lands near the top of our must-have list for Web Professionals.


Feature #1 – INFRASTRUCTURE

Monitors health and tracks capacity, memory and CPU consumption. The INFRASTRUCTURE component allows you to view and analyze critical system properties of your web server. This is a must-have product for monitoring important metrics such as CPU usage, physical memory, and running processes.

Once installed on the server, all the important data appears in your New Relic dashboard within minutes. You can create alerts to be notified when a metric crosses a critical threshold.

Feature #2 – APM

Code level visibility for all your web applications. The APM (application performance monitoring) component delivers real-time, trending data and charts about your web application’s performance down to the deepest code levels. 

New Relic Plesk extension for application performance monitoring

If you have a performance problem with your application, then you don’t have to guess where the performance blockers are. APM supports 7 programming languages to analyze your web application source code, including PHP out-of-the-box. With the help of APM, you can determine whether the bottleneck is within your application, the web server or your database, helping you make quick decisions to enhance your user experience.

Activate Infrastructure and APM with one click

This extension integrates New Relic INFRASTRUCTURE and APM seamlessly into Plesk Onyx and Plesk 12.5. If you’re a developer, DevOps engineer or software company, use this Plesk extension to understand how your applications are performing in development and production environments.

Meanwhile, you can use stress-testing services like Blitz, Stormforger or Loadstorm to simulate many website visitors, all while using New Relic to identify weak spots in your code.

How do I use the New Relic extension?


  1. Enter your New Relic license key
  2. Enter your preferred server name
  3. Click “Install”
  4. Done!



License Key

First, you need an account at New Relic to get your license key. But don’t worry, you can quickly sign up for a free account with basic functionality.

The INFRASTRUCTURE component is available for free, but the APM component requires a paid subscription. However, you can test APM for a limited time span with your free account.

You’ll find your New Relic license key in the “Account settings” in your New Relic profile. If you’re convinced of the power of APM, just upgrade to an ESSENTIALS or PRO subscription. Your license key will stay the same and you can use the full power of the Plesk extension.

Server Name

You can specify a unique name that will be used as an ID within your New Relic profile. For example, you could use your domain name or hostname – whatever you like.

Install INFRASTRUCTURE: This option installs the INFRASTRUCTURE agent on your server.

Install APM: This option installs the APM agent for PHP on your server. To install the agent, you need to select the PHP versions your applications are using. This configures the APM agent properly in order to analyze requests handled by your applications.
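Under the hood, the New Relic PHP agent is configured through an ini file per PHP version. The sketch below shows what such a file typically contains – the file path and app name are illustrative assumptions, while newrelic.license and newrelic.appname are standard agent directives:

```ini
; Hypothetical example: /opt/plesk/php/7.1/etc/php.d/newrelic.ini
extension = "newrelic.so"
newrelic.license = "YOUR_NEW_RELIC_LICENSE_KEY"
newrelic.appname = "My Plesk Server"
newrelic.enabled = true
```

The extension writes the equivalent settings for each PHP version you select, so you normally never edit this file by hand.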

This extension supports the following operating systems:

  • Ubuntu
  • Debian
  • CentOS
  • Red Hat

Free extension with sources available on GitHub

We love open source! So we also released the New Relic extension as a public repository in our official Plesk GitHub account.

We’d be happy to see your valuable input and contribution. Feel free to create pull requests or add GitHub issues to contribute to this project. If you like it, give it a star 🙂 Now, have fun and get rid of those annoying bottlenecks in your code!

How to reduce server load and improve WordPress speed with Memcached

In my last article about Varnish in a Docker container, I’ve explained how to easily activate server-side caching and what advantages you can get with this mechanism. Today, I will show you how you can reduce server load and drastically improve your WordPress website speed with Memcached.

Memcached – a distributed memory caching system

Memcached caches data and objects directly into the memory (RAM) and reduces the amount of times an external source has to be read (e.g. the database or API-calls). This especially helps dynamic systems like WordPress or Joomla! by noticeably improving the processing time!
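The mechanism behind this speed-up is the classic cache-aside pattern: check RAM first, and only fall back to the slow source on a miss. Here is a minimal sketch in Python, using a plain dict in place of a real Memcached client and a made-up query function for illustration:

```python
import time

cache = {}  # stands in for a Memcached instance in this sketch

def expensive_database_query(post_id):
    # Placeholder for the slow read (database, API call, ...).
    return f"post-{post_id}"

def get_post(post_id, ttl=60):
    """Cache-aside: serve from memory if fresh, otherwise hit the slow source."""
    entry = cache.get(post_id)
    if entry is not None and entry[1] > time.time():
        return entry[0]                       # cache hit: no database round trip
    data = expensive_database_query(post_id)  # cache miss: read the slow source
    cache[post_id] = (data, time.time() + ttl)
    return data
```

WordPress object caching works the same way conceptually, just with the shared Memcached daemon instead of a per-process dict.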

Before we start, keep in mind that Memcached does not have built-in security measures for shared hosting environments! This tutorial should only be used on a dedicated server.

Installing Memcached

On my server, I run Plesk Onyx with CentOS 7.x. This tutorial also applies to other systems, just remember to use the system-specific commands (e.g. apt-get instead of yum). To install Memcached, first access your server via SSH and use the command line:

yum install memcached

After the installation process, we start it with:

service memcached start

Next we have to install PECL Memcached for the corresponding PHP version. WordPress is fully compatible with PHP 7, so let’s activate Memcached for the latest PHP 7.1 version. Start by installing all the necessary packages to add our custom PHP module in Plesk.

yum install make plesk-php71-devel gcc glibc-devel libmemcached-devel zlib-devel

Build the module with these instructions. You don’t have to specify the libmemcached directory manually, simply hit Enter if prompted.

/opt/plesk/php/7.1/bin/pecl install memcached

In the next step, we have to add a line to the corresponding configuration file to register the module in PHP. You can use the command line without having to open the ini file with an editor.

echo "" > /opt/plesk/php/7.1/etc/php.d/memcached.ini

And finally, re-read the PHP handlers so that you see the module in the PHP overview in the Plesk GUI.

plesk bin php_handler --reread

You can check the phpinfo()-page now to find out if the memcached module was loaded properly.

Memcached php configuration
Memcached – phpinfo() output

Or directly via the command line:

/opt/plesk/php/7.1/bin/php -i | grep "memcached support"
Memcached php command line check
Memcached – PHP command line check

Secure and monitor your Memcached integration

Memcached uses port 11211 by default. For security reasons, we bind port 11211 to localhost only.

Add the following line to the end of the file /etc/sysconfig/memcached and restart the Memcached service:

OPTIONS="-l 127.0.0.1"

To monitor and get some stats from Memcached, you can use the following commands:

echo "stats settings" | nc localhost 11211
/usr/bin/memcached-tool localhost:11211
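The stats command replies with plain-text "STAT <name> <value>" lines, so it is easy to post-process. A small Python sketch (the sample reply and its numbers are made up for illustration):

```python
def parse_memcached_stats(raw: str) -> dict:
    """Turn the plain-text 'STAT <name> <value>' reply into a dict."""
    stats = {}
    for line in raw.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            stats[parts[1]] = parts[2]
        elif line.strip() == "END":
            break
    return stats

# Hypothetical reply as `echo "stats" | nc localhost 11211` would return it:
sample = "STAT get_hits 8421\nSTAT get_misses 112\nEND"
stats = parse_memcached_stats(sample)
hit_rate = int(stats["get_hits"]) / (int(stats["get_hits"]) + int(stats["get_misses"]))
```

A hit rate close to 1.0 means most reads are answered from RAM rather than the database.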

Activate Memcached in WordPress

Once Memcached is installed on the server, it is easy to activate it in WordPress. First, we need to activate the Memcached backend with a special script that auto-detects whether to use Memcached as the caching mechanism.

Download the script and move all files to the /wp-content/ folder.

If you didn’t change the default port (11211) of Memcached, you are ready to use it directly. If you did change it, add the following code to wp-config.php (located in the root of your WordPress instance).

$memcached_servers = array( array( '', 11211 ) );

Okay, once the backend is activated, we will install a cache plugin to store and serve rendered pages via Memcached. Install the Batcache plugin using its installation instructions:

  1. Download and unzip the package
  2. Upload the file advanced-cache.php to the /wp-content/ folder
  3. Open wp-config.php and add the following line
    1. define('WP_CACHE', true);
    2. Important: Make sure Memcached is enabled properly for the selected PHP version before adding this line, otherwise an error will be thrown!
  4. Upload the file batcache.php to the /wp-content/plugins/ folder

That’s pretty much it! You can open the advanced-cache.php file and adjust the settings for your needs. The file batcache.php is a small plugin that regenerates the cache on posts and pages. Don’t forget to activate the plugin in the backend on the plugin page!

Verify that Memcached works properly in WordPress

Now, let’s verify that you did everything correctly. The easiest way to see whether the rendered page was sent from cache is to add an additional header field to the response.

To achieve this, you have to modify the advanced-cache.php file. Open the file and search for

var $headers = array();

Change this line to

var $headers = array('memcached' => 'activated');

Open the Developer Tools in your browser (F12 in Chrome), select the Network tab and reload your website several times (just to be sure the page is loaded from the cache) and check the response headers. If you see the memcached header field, then everything is fine!

Memcached for WordPress - Nginx response headers
Memcached – Response Headers Check

Attention: if you are logged in to WordPress, the cache is never used and the system always sends the uncached version of the loaded page. So what can you do to verify the functionality while being logged in? You can either log out first or open a new tab in your browser in Incognito / Private Window mode and use the Developer Tools there.

Instead of examining the headers, you can also check the source code of the loaded page. If you find lines similar to these, the page was loaded from the cache:

    generated 207 seconds ago
    generated in 0.450 seconds
    served from batcache in 0.007 seconds
    expires in 93 seconds

Let’s do some stress tests!

We can test the load performance by stress testing, which loads the website with many concurrent users per second for a certain time span. Without any security and overload protection, your server will respond more and more slowly until the requests cannot be handled anymore. With Memcached activated, your server should be able to serve such intensive request volumes for longer without throwing errors.

Let’s run some load and performance tests.

Note: For this stress test I took the same small server that I used for the tests with Varnish (only 1 CPU and 500MB Memory)!

Result WITHOUT Memcached:

Wordpress Without Memcached
Stress test – WordPress without Memcached

It is the same result as in the Varnish stress test. As you can see, I had to abort the stress test because the server couldn’t handle the requests after less than 5 seconds and fewer than 50 concurrent users. After just 15 seconds, the server collapsed completely and no requests could be handled anymore!

Result WITH Memcached:

Memcached WordPress
Stress test – WordPress with Memcached

As you can see, the Memcached cache allows us to keep the server stable even under heavy load. The small test server handled over 400 concurrent users and answered all requests for over 50 seconds without any errors. After 50 seconds and almost 450 concurrent users, the server finally overloaded and stopped accepting further requests. With a more powerful server, the numbers would be much higher.

Therefore, it’s a great idea to use Memcached to keep your website responsive, even when it suffers a simple attack. For real DDoS (Distributed Denial of Service) attacks, you’d be better off with ServerShield by CloudFlare to protect your server.

Summary: Memcached for WordPress works perfectly

Memcached can greatly improve the performance of your WordPress website and reduce the CPU-load of your server. It’s easy to set up a working environment and it works out-of-the-box.

Thank you for reading my article, let me know your comments!

Have fun improving the speed of your WordPress website and stay Plesky!

Varnish for WordPress in a Docker container

Is your website experiencing heavy traffic? Are you looking for a solution that will reduce server load and will improve website speed? Varnish might just be what you need. Varnish listens for duplicate requests and provides a cached version of your website pages, mediating between your users’ requests and your server.

So how do you activate Varnish? In this article, I will show you how to easily increase your website speed by running Varnish as a one-click Docker container. I will demonstrate how a website caching solution like Varnish can improve both page response times and the maximum number of concurrent visitors on your website. To simulate real traffic and measure accurate response times, I have used an external stress-testing service to generate lots of traffic and concurrent users on our site.

What is Varnish and why should you use it?

Varnish Cache Plugin

Varnish HTTP Cache is a software that helps reduce the load on your server by caching the output of the request into the virtual memory. It is a so-called HTTP accelerator and is focused on HTTP only. Varnish is open source and is used by high traffic websites such as Wikipedia.

If you have lots of daily visitors, we recommend using a cache mechanism. You’ll see your response time improve significantly because the server can send the already cached data back to the client directly from memory, without the resource-consuming request processing on the web server. Additionally, it reduces the load on the CPU so that the server is able to handle many more requests without getting overloaded. I will demonstrate this in the stress tests later.

Running Varnish in a Docker container

Docker is a great open source project that makes it incredibly simple to add Varnish to a running server. We don’t need to install Varnish on the production server, we simply use a ready-to-go Varnish Docker image. The main advantage is that if something goes wrong with the container, we can simply remove it and spin-up a new container within seconds. The way in which Docker containers are designed guarantees that Varnish will always run independently of our system environment. Do you want to know more about Docker containers? Read more about the 6 essentials on Docker containers!

For this tutorial, I will use the newly integrated Docker support on Plesk to activate Varnish. The Plesk interface makes it easy to get a Varnish instance running, only requiring small modifications of the Varnish configuration file to be done using the terminal.

A further improvement would be to rebuild the Varnish Docker image so that it takes our configuration as a parameter from the Plesk UI. For now, I’ll stick to the original Docker image and upload our configuration via shell.

Activate Varnish in Plesk and test on a static page

Okay, let’s try it first on the default static page of Plesk. In the default settings, Plesk uses Nginx as a reverse proxy server for Apache. This means that Nginx listens on the default ports 80 (HTTP) and 443 (HTTPS), while Apache listens on internal ports (7080 for HTTP, 7081 for HTTPS). We will push our Varnish container in between the two web servers. In this scenario, Varnish will get the request from Nginx and the content from Apache. Don’t worry, it’s easier than it sounds!

Go to Docker and search for the image million12/varnish in the Docker Image Catalog. Once found, click “run” and Plesk will download the image to your local machine. After the download, click “run (local)”, which will open the configuration page of the container. The only thing that we’ll change is the port mapping.

Port mapping in Varnish
Varnish in Docker container on Plesk Onyx – Port mapping

Remove the tick from the option “Automatic port mapping” and set an external port (I will use port 32780 in this tutorial) for the option “Manual mapping”. This means that port 80 of the container is mapped to the external port 32780. By adding a proxy rule, we can “talk” to the container through this external port. Later, we will point the Varnish backend server to the Apache port, from which the content is fetched whenever a “cache miss” occurs.

Test Varnish with a static page

Create a subdomain for testing our Varnish integration on a static page. After the subdomain was created, go to the “Hosting Settings” and deactivate the options “SSL/TLS support” and “Permanent SEO-safe 301 redirect from HTTP to HTTPS” because we want to test the Varnish functionality over HTTP first. Okay, but how do we redirect the requests to the Varnish container? This can be done easily with the option Docker Proxy Rules that you will find in the domain overview.

Proxy rules related to Varnish Cache
Varnish – Proxy rules for Docker container on Plesk Onyx

Click on “Add Rule” and select the previously created container and the port mapping that we entered manually. If you cannot make a selection, your container is not running. In this case, click on Docker in the menu and start the container first. If you open the subdomain after you’ve activated the proxy rule, you will see the error Error 503 Backend fetch failed. Don’t panic, this is expected behavior – we haven’t configured the Varnish backend server yet!

Error 503 - Backend fetch failed
Varnish – Error 503 Backend fetch failed

Configure Varnish properly in the Docker container using SSH

This is the only time when we need to access the server and the Varnish Docker container via SSH. Open your terminal and type

$ ssh root@111.222.333.444 // Replace with your user name and correct IP address

Enter your password if required to get access to the server. Tip: use a private / public key pair to improve the security of your server!

First of all, we need to find out the ID of our Docker container. To list all active containers, type into the command line

$ docker ps
Varnish HTTP Cache - Running Docker containers - Plesk Onyx
Varnish – Running Docker containers – Plesk Onyx

Copy the Docker ID and use the following command to access the Docker container

$ docker exec -it ID bash // Replace ID with the correct container ID

Okay, the most important thing to do is change the host and port values for the default backend server in the file /etc/varnish/default.vcl.

For .host, we will enter the IP address of the server where Plesk runs (in our example 111.222.333.444), and for .port, 7080. As mentioned before, this is the default Apache HTTP port in Plesk. We have to use this port because, internally, Varnish can only speak over an unencrypted channel!
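The resulting backend definition in /etc/varnish/default.vcl would then look roughly like this (using the example IP address from above):

```vcl
# Varnish fetches cache misses from Apache's internal HTTP port in Plesk.
backend default {
    .host = "111.222.333.444";
    .port = "7080";
}
```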

Tip: Do we have a cache hit or miss?

How do we know whether the content was loaded from memory or from the Apache server? A special header entry in the response shows that the request was processed by Varnish, but it does not tell you whether the data was served from memory or fetched from Apache.

To achieve this without having to use varnishlog in the console, we can set another header with the corresponding value (cache hit / cache miss). We use the function sub vcl_deliver, which is the last exit point for almost all code paths (except vcl_pipe). Add the following code within the curly brackets of the function sub vcl_deliver:

if (obj.hits > 0) {
     set resp.http.X-Cache = "HIT";
} else {
     set resp.http.X-Cache = "MISS";
}
Use the Developer Tools in your browser to examine the response

Save the modified file and exit the container. Switch to your Plesk UI again and restart the container in Docker with the “Restart” button. When you see the success message, go to the tab of the subdomain with the 503 error message. Do not reload the page yet but open the Developer Tools first (alt + cmd + i on a MacBook). Go to the “Network” tab and reload the page. Select the first entry (URL /) and take a closer look at the “Response headers”.

Cache Miss and Varnish
Varnish – Cache Miss

If everything was done properly, you will see some new header variables:

X-Cache – This is the variable we defined in the configuration file. After the first reload it should display “MISS”.
X-Varnish: ID – The internal transaction ID that Varnish assigned to this request.
Via: "1.1 varnish-v4" – This shows that the request was routed through the Varnish container.
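If you want to script this check instead of eyeballing the Developer Tools, a small helper can classify a response from its headers. A sketch in Python – the header names match those described above, and actually fetching the page is left out:

```python
def varnish_status(headers: dict) -> str:
    """Classify a response: is it behind Varnish at all, and was it a hit?"""
    via = headers.get("Via", "")
    if "varnish" not in via.lower():
        return "not behind Varnish"
    # X-Cache is the custom header set in sub vcl_deliver.
    return headers.get("X-Cache", "MISS")

# Example header sets as a browser's Network tab would show them:
print(varnish_status({"Via": "1.1 varnish-v4", "X-Cache": "HIT"}))  # HIT
print(varnish_status({"Server": "nginx"}))                          # not behind Varnish
```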

Okay, it’s about time to see some Varnish magic! Click on the reload button in your browser to reload the page. This time it will be loaded from the virtual memory.

Varnish - Cache Hit
Varnish – Cache Hit

What about websites that are using HTTPS to encrypt the connection?

It also works, and the best part is that you don’t have to change anything! Create an SSL certificate for the subdomain using the great Let’s Encrypt extension. After the certificate is created and assigned (the extension does this automatically), go to the static page and reload it using https:// instead of http://. If you open your browser console, you will see an X-Cache: HIT in the response headers:

Activate Varnish caching on your WordPress website

We just saw that it’s technically possible to activate Varnish inside a Docker container with Plesk. Now let’s try it on a WordPress website!

The main difference is the configuration of the VCL configuration file within the Varnish container. WordPress is a dynamic CMS, so we cannot cache everything without restricting the functionality of the system; for example, the administration pages shouldn’t be cached, since changes would no longer be possible for logged-in users.

There are many pre-defined configuration files for WordPress available on the internet, from various developers. In most cases, you can use them right away without any modifications. For our test integration, we will take the configuration file created by HTPC Guides (with small adjustments – link below).

For this article and for the stress tests, I’ve created a fully working WordPress website. I want to test under real conditions, not with a default WordPress installation. The website should also be secured with an SSL certificate and reachable only over HTTPS. For this reason, I will also activate an SSL certificate with the help of the Let’s Encrypt extension for this installation.

Use a WordPress Plugin to activate support for HTTPS

Important: Do not use the option “Permanent SEO-safe 301 redirect from HTTP to HTTPS” within Plesk’s “Hosting Settings”, because this will lead to a redirect loop in our particular environment. Instead, I will use a WordPress plugin to switch our installation completely to HTTPS. The plugin is called Really Simple SSL and can be downloaded from the official plugin repository.

Please make the same preparations as for the static page, but this time add the additional configuration required for WordPress to the default.vcl configuration file inside the Docker container. I’ve used this Varnish configuration file (GitHub Gist) for my test installation. Don’t forget to adjust the backend server again, as we did for the static page!

Tip: Do not forget to restart the Docker container from the Plesk UI to reload the configuration information. If you forget to restart the container, then Varnish will not work properly with the WordPress website.

Now reload the front page of WordPress with the browser console open. The first loading process should throw an X-Cache: MISS but the second (and following) reloads will return an X-Cache: HIT.

Cache Hit with Varnish HTTP Cache plugin
Varnish in WordPress – Cache Hit

Let’s run some stress tests with!

We’ve seen that Varnish helps to improve the performance of the website. But what about the promised CPU load reduction? We can test it with stress testing, which loads the website with many concurrent users per second for a certain time span. Without any security and overload protection, the server will respond more and more slowly until the requests cannot be handled anymore. With Varnish activated, the server should be able to serve such intensive request volumes for longer without throwing errors.

All right, it’s time to run load and performance tests with an external service provider.

Note: I used a very small server for this test instance (only 1 CPU and 500MB Memory), so the positive impact of Varnish should be much higher on a more powerful server!

Result WITHOUT Varnish:

Wordpress without Varnish HTTP Cache
Stress test – WordPress without Varnish

As you can see, I had to abort the stress test because the server couldn’t handle the requests after less than 5 seconds and fewer than 50 concurrent users. After just 15 seconds, the server collapsed completely and no requests could be handled anymore!

Result WITH Varnish:

Varnish HTTP cache - WordPress with Varnish
Stress test – WordPress with Varnish

Varnish magic! As you can see, the Varnish cache allows us to keep the server stable even under heavy load. The small test server handled over 300 concurrent users and answered all requests for over 30 seconds without any errors. After 30 seconds and over 300 concurrent users, the server was overloaded and couldn’t accept further requests. With a more powerful server, the numbers should be much higher! So Varnish is also great for keeping your website responsive if it suffers a DDoS attack, at least up to a certain number of requests.

Summary: Varnish for WordPress within a Docker container on Plesk

Let me make a small checklist:

  • Varnish in Docker container? Yes.
  • Varnish in WordPress? Yes.
  • Varnish in Plesk? Yes.
  • Varnish for WordPress within Docker container in Plesk? Absolutely, yes!

Mission accomplished! 🙂

As you’ve seen, Varnish can greatly improve the performance of your WordPress website and reduce the CPU load of your server. It’s relatively easy to set up a working environment using Varnish in a Docker container between Nginx and Apache within Plesk. The most important part is the correct configuration of Varnish for your specific CMS.

Thank you for reading. In the next blog post, I will take a look into another memory caching system, Memcached.

Stay tuned and stay Plesky!

Critical Kernel flaw discovered – Update your server

KernelCare as the fix for critical kernel flaw

A Linux kernel flaw that has existed in the code for over 10 years has been discovered by Andrey Konovalov, a security researcher at Google. The flaw lies in the implementation of DCCP (Datagram Congestion Control Protocol) and can lead to kernel code execution from unprivileged processes. DCCP is a message-oriented transport layer protocol that provides access to congestion-control mechanisms.

The good news first: the vulnerability is not exploitable remotely but requires a local account. The bad news is that a user can use the flaw to crash the system or escalate their privileges to gain administrative access.

Andrey posted a detailed description about the bug:

In the current DCCP implementation an skb for a DCCP_PKT_REQUEST packet is forcibly freed via __kfree_skb in dccp_rcv_state_process if dccp_v6_conn_request successfully returns [3].

However, if IPV6_RECVPKTINFO is set on a socket, the address of the skb is saved to ireq->pktopts and the ref count for skb is incremented in dccp_v6_conn_request [4], so skb is still in use. Nevertheless, it still gets freed in dccp_rcv_state_process.

The fix is to call consume_skb, which accounts for skb->users, instead of doing goto discard and therefore calling __kfree_skb.

To exploit this double-free, it can be turned into a use-after-free:

// The first free:
kfree(dccp_skb)
// Another object allocated on the same place as dccp_skb:
some_object = kmalloc()
// The second free, effectively frees some_object
kfree(dccp_skb)

At this point, we have a use-after-free on some_object. An attacker can control which object that will be and overwrite its content with arbitrary data using one of the kernel heap spraying techniques. If the overwritten object has any triggerable function pointers, the attacker gets to execute arbitrary code within the kernel.

Andrey has already committed a patch for the DCCP flaw to the mainline kernel code, and all major Linux distributions already provide updates that fix this issue.

Use KernelCare for automatic, rebootless updates

KernelCare by CloudLinux will update your servers automatically without having to reboot the system. It ensures that your kernel is always up to date with all security updates and helps to lower operating costs for server management.

KernelCare Plesk Extension - Fixes flaws automatically

Keep your servers updated with the KernelCare Plesk extension, which deploys kernel security patches as soon as they are released to maintain the safest possible Linux environment!

Stay up-to-date and Plesky!

Google PageSpeed Insights – How to optimize your site to rank higher

So, you have a well-configured server but the performance of your website is poor. Your page response times (latency) are measured in seconds and your server cannot handle more than 100 concurrent users. You’ve invested in SEO but you still feel that Google Search does not give you the ranking your site deserves. What do you do? How can Google PageSpeed Insights help you? Let’s start with the basics!

Performance is an important ranking factor

Good website performance is essential. A modern website does not consist of just a few static files; it is made up of front-end libraries and frameworks like Bootstrap. The more files a client has to download to render a complete page, the longer the page takes to load. And the longer a page takes to load, the lower its ranking falls.

The impact of mobile

The other factor that is key to a website’s search ranking is its mobile-friendliness. Not only are mobile-friendly sites optimized to load quickly on low-throughput, high-latency mobile connections, they also provide a great user experience.

A very popular framework for easily implementing a responsive web design is Bootstrap, and even though Bootstrap is easy to use, it requires at least two more static files to work. This means that we’re buying usability at the expense of loading performance. But don’t worry, I will explain later in this article how you can compensate for this small loss.

Google PageSpeed Insights helps to increase the performance

With PageSpeed Insights by Google, you can perform checks to identify areas of improvement and make your websites faster and more mobile-friendly within seconds – both of which are key to getting pole position on Google Search.

Google PageSpeed Insights - Frontpage

You can use PageSpeed Insights for free from the project page, or follow our guide to install Google PageSpeed Insights Plesk Extension on your Plesk control panel.

Understanding PageSpeed Insights Recommendations

1. Avoid landing page redirects

Redirects can cause perceptible latency if the request is redirected several times before reaching the endpoint from which data is eventually sent to the client. Every redirect initiates another HTTP request-response cycle (with possible DNS lookups and TCP handshakes), which can dramatically decrease site performance, especially on a mobile device with a slow internet connection.

A good example of how to avoid redirects for mobile devices is to use a modern, responsive design. A website that is already mobile-optimized does not require redirects to a dedicated subdomain for mobile devices.

Also make sure you redirect correctly in one step from the plain domain to the final HTTPS address. People tend to type just the shortest form of your domain into the browser address bar, but your website should run with HTTPS only (for more security and better ranking) and most probably use www as the subdomain.

SEO tip: 301 redirects from HTTP to HTTPS

HTTPS has become an important ranking factor in Google. Search engines prefer websites that use the HTTPS protocol to ensure secure communication between the two endpoints, here the client and the server. Consider activating a 301 redirect option on your domains once you’ve installed your SSL certificates.

For Plesk users, the Security Advisor extension will help you activate free certificates for all your websites, and you can activate your 301 redirects through “Hosting Settings” on your dashboard.

Speaking of redirects, Plesk supports SEO-friendly 301 redirects from HTTP to HTTPS out of the box. This means that if you activate a free SSL certificate powered by Let’s Encrypt, Plesk will help you switch to the secure protocol without losing ranking power.

2. Enable compression

Always send content compressed with GZIP or Deflate to the client. This rule checks whether the server delivered compressible resources (such as HTML, CSS or JavaScript files) with compression. Compression reduces the number of bytes transferred over the network by up to 90%. This shortens the overall time to download all resources, which leads to faster loading times and a better user experience.

For compression to work, both sides (client and server) must understand the applied compression algorithm. The supported algorithms are exchanged via so-called HTTP header fields. If you want to know more about the HTTP network protocol, please read my article about HTTP/2. Most modern browsers already support compression out of the box. On the server side you can use special modules, e.g. mod_deflate (Apache) or ngx_http_gzip_module (NGINX).

Plesk supports compression out of the box

Don’t worry, a Plesk server already comes with the required compression modules pre-installed; you just have to activate this feature manually for each domain that should use compression. You can add the needed code to an .htaccess file (Apache) in the root directory of your website, or even easier, directly in Plesk:

Go to “Websites &amp; Domains” and select “Apache &amp; nginx Settings”. If you use the Apache web server, add the following code to the textarea under “Additional Apache directives”. Use the textarea “Additional directives for HTTPS” if your site runs over HTTPS, otherwise the first textarea.


AddOutputFilterByType DEFLATE text/plain text/html text/xml
AddOutputFilterByType DEFLATE text/css text/javascript
AddOutputFilterByType DEFLATE application/xml application/xhtml+xml
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/javascript application/x-javascript

If you use NGINX, then add the following code to the textarea “Additional nginx directives”.


gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_disable "msie6";
gzip_types text/plain text/css text/javascript text/xml application/json application/javascript application/x-javascript application/xml application/xml+rss;

Warning: Dynamic compression costs CPU time, so an overly high compression level can eat up the performance advantage through long processing times. It does not make sense to set the compression level to the maximum: the gain in file size is minimal compared to an average level, while the CPU load and processing time are dramatically higher. A further improvement is to cache already-compressed files and deliver them directly, without running the compression process again.
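To see why a moderate level is usually the sweet spot, here is a small Python sketch comparing gzip levels on a repetitive, HTML-like payload (exact sizes vary with the input, but the pattern holds):

```python
import gzip

# A repetitive "web page" - markup like this compresses extremely well.
html = b"<p>Plesk makes server administration simple.</p>\n" * 200

# Compare archive sizes at a fast, a default, and the maximum level.
sizes = {level: len(gzip.compress(html, compresslevel=level))
         for level in (1, 6, 9)}

print("raw:", len(html), "compressed:", sizes)
# Level 6 already captures nearly all of the savings; level 9 only
# shaves off a few more bytes at a noticeably higher CPU cost.
```

This is exactly the trade-off behind the `gzip_comp_level 6;` directive above.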

3. Leverage browser caching

Loading static files is time-consuming and expensive. The browser therefore stores already-downloaded resources in its local cache, and the server can define a specific caching policy with special headers. The browser should then serve those resources from the local cache instead of requesting them again from the server.

You can use two header fields in the response: Cache-Control and ETag. With Cache-Control you define how long the browser may cache individual responses. ETag provides a revalidation token with which the browser can easily recognize file changes.

The browser should cache static files for at least one week. If you have files that don’t change regularly or at all, then you can increase the cache time up to one year.
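As a sketch, a policy like the one described could look like this in nginx (the file types and the one-week lifetime mirror the suggestion above; adjust them to your site):

```nginx
# Cache static assets for 7 days; nginx sends matching Expires and
# Cache-Control headers, and emits ETags for static files by default.
location ~* \.(css|js|png|jpg|gif|svg|woff2)$ {
    expires 7d;
    add_header Cache-Control "public";
}
```

For rarely changing assets such as fonts or logos, `expires 1y;` would implement the one-year policy mentioned above.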

4. Reduce server response time

PageSpeed Insights triggers this rule if the server does not respond within a certain time span (&gt;200 ms). The response time is the time the server needs to return the HTML code that the browser renders for output. Many factors can have a negative effect on the response time.

The cause of a slow response time is not easy to pin down without deeper analysis. Possible factors for the delay can lie in the server itself, such as a slow CPU or memory starvation, or in the application layer, e.g. slow script logic, heavy database queries or too many included libraries.

The question is, how to find those bottlenecks? You could use the New Relic Extension to solve such issues or alternatively check your website with WebPageTest to see how browsers render the pages and which files slow your site down!

5. Minify HTML, CSS & JavaScript

The server can minify resources like the HTML code or JavaScript and CSS files before sending them to the browser. This saves many bytes of data which speeds up the download of the resources. Minification is the process of compacting the code without losing any information that the client requires to render the website properly.

Such optimizations include, for instance, removing comments, unused code and unneeded whitespace. Don’t worry, you don’t have to do this manually; there are plenty of free tools and plugins that will do the job for you automatically. Just google it!

Note: If you look into such a minified file, you might think it isn’t readable at all, but to the computer it makes no difference. In fact, it is even better if the code is as compact as possible!
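To illustrate, here is the same (made-up) CSS rule before and after minification; both mean exactly the same thing to the browser:

```css
/* Before: readable, with whitespace and a long color notation */
.article-title {
    color: #333333;
    margin: 0 0 16px 0;
}

/* After: identical meaning, fewer bytes */
.article-title{color:#333;margin:0 0 16px}
```

Multiply savings like this across every rule in a large stylesheet and the byte count drops noticeably.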

6. Eliminate render-blocking JavaScript and CSS in above-the-fold content

PageSpeed Insights triggers this rule if the browser loads JavaScript and CSS files even though the so-called above-the-fold content doesn’t need their code to produce the proper output. The browser cannot render the HTML output until all of these external resources are completely available.

An external resource is not necessarily a file from another server, but any additional file the client has to load on top of the HTML response to render the page properly. Render-critical JavaScript and CSS code can be inlined, but this should be limited to the absolutely necessary parts. JavaScript that is not render-critical should be loaded asynchronously or deferred at the bottom of the page.
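In HTML this boils down to the script attributes `async` and `defer` (the file and function names below are placeholders):

```html
<!-- Critical inline code runs immediately during parsing -->
<script>initAboveTheFold();</script>

<!-- async: download in parallel, execute as soon as ready -->
<script src="analytics.js" async></script>

<!-- defer: download in parallel, execute after the document is parsed -->
<script src="app.js" defer></script>
```

Neither `async` nor `defer` blocks HTML parsing, so the above-the-fold content can render without waiting for the downloads.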

It also makes sense to concatenate all files into a single file (one each for CSS and JavaScript) to reduce the number of HTTP requests. In general, you should definitely activate HTTP/2 support on your server; the new version of the network protocol has a very positive impact on site performance. Read all about HTTP/2 and how to activate it in our blog post HTTP/2 – Increase your site performance!

7. Optimize images

If you have a lot of images on your website, this could be one of your biggest areas of improvement. Optimizing images without visibly affecting their quality can reduce file sizes significantly, which in turn drastically improves download time and bandwidth usage.

Many different options exist for optimizing images, e.g. resolution, image format or quality settings. On many websites, webmasters upload images in resolutions that are far too high, and thus with file sizes that are far too large. After the check, PageSpeed Insights lists these files along with the percentage of file size you could save with optimized variants of the same images.

Content delivery networks like CloudFlare can optimize images automatically for you and bring them closer to your customers. Be aware that this optimization feature requires a paid subscription. Of course, you can also optimize your images manually. Read this guide provided by Google: Optimize Images.

8. Prioritize visible content

This rule is similar to the render-blocking rule. PageSpeed Insights triggers it when additional network round trips are necessary to render the above-the-fold content of the loaded page. If visitors load this page over a slow connection (with high latencies), the additional network requests will create significant delays and degrade the user experience.

It is important to structure the HTML code so that the critical content loads first. So if you have a sidebar next to your article, position the sidebar after the article in the HTML code, so the browser renders the article before the sidebar.

I’ve already mentioned asynchronous JavaScript delivery; it is also possible to improve the CSS delivery strategy. CSS instructions required for the visible content can be inlined directly in the HTML code, and the rest deferred in one file until after the rendering process.
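One common way to implement this CSS delivery strategy is the preload/onload pattern sketched below (file names and rules are placeholders, not a recommendation for your exact site):

```html
<head>
  <!-- Critical above-the-fold rules inlined, available immediately -->
  <style>body{margin:0;font-family:sans-serif}</style>

  <!-- Full stylesheet fetched without blocking the first render -->
  <link rel="preload" href="styles.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="styles.css"></noscript>
</head>
```

The `noscript` fallback keeps the page styled for visitors with JavaScript disabled.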

Google PageSpeed Insights Plesk Extension

If you haven’t already done so, install the Google PageSpeed Insights Plesk Extension today and start improving your website performance and rankings.

Do you have tips of your own? Share them in the comments below.

Introducing Google PageSpeed Insights Plesk Extension

Google PageSpeed Insights Plesk Extension

Page performance is important for search engine rankings

Website performance is one of the things search engines look at to decide how to rank your page. Especially with the increasing number of visitors constrained by low throughput and high latency browsing on mobile devices, every second it takes to load a page matters.

What is Google PageSpeed Insights?

With PageSpeed Insights by Google you can perform checks within seconds to identify improvements that make your websites faster and more mobile-friendly. And this is also key to getting a pole position in Google Search. The tool analyzes the delivered content of your website and makes suggestions to improve it.


Google PageSpeed Insights


You can use PageSpeed Insights for free from the project page. Enter your domain into the text field and click on the “ANALYZE” button. The service will review the entered URL and make some pre-defined performance rule checks to create an overall rating. The best score of 100 requires an optimized website that passes all performance rules successfully!

Read this article to find out how to use and understand Google PageSpeed Insights recommendations. 

Google PageSpeed Insights Plesk Extension

Google PageSpeed Insights Extension

Measuring website performance once isn’t enough. So we’ve created the Google PageSpeed Plesk Extension so you can quickly and directly run the checks regularly within Plesk – no more leaving the Plesk Interface and opening external pages to generate a detailed report.

The Google PageSpeed Plesk Extension doesn’t only give administrators the right to run a test; your customers or employees with normal user permissions can also access it. This is a great service feature for your customers if you are using Plesk as the control panel on your servers!

Main features of the Plesk extension

  • Check all your domains within seconds
  • Detailed report page with many improvement suggestions
  • Custom button to start check process and show ratings directly
  • Download optimized static files directly after the check
  • Store results and display an overview page
  • For both administrators and end customers

Google PageSpeed Insights - Result page

If Google PageSpeed Insights can compress your static files further, the extension shows a download link to an archive with the optimized files. Download them, replace the originals on your web server (don’t forget to back up first) and improve download performance for your visitors! This optimization feature is provided by Google itself.

Plesk Extension - Google PageSpeed Insights - Custom Button

Get your website performing even better today. Get the Google Pagespeed Insights Plesk extension here, or install it directly from your Plesk Extension Panel.

And don’t forget to read our article on how to use and understand Google PageSpeed Insights. 

Have fun optimizing your web projects and stay Plesky!

Plesk now supports PHP 7.1.x

We’re pleased to announce that Plesk Onyx now supports the latest PHP version 7.1.1.

The update to PHP 7.1.0 brought developers a bunch of cool improvements, such as catching multiple exception types or nullable types. For the full list of new features, head over to the official PHP release announcement.

In version 7.1.1, more bugs were fixed to make the PHP 7.1.x branch even more stable.

Planning to update?

Check your CMS version before updating Plesk PHP to 7.1!

There are several incompatibilities with the latest version of PHP. If you’re working with one of the three most-used open-source CMSs, the compatible versions are:

  • WordPress >= 4.7
  • Joomla! >= 3.6.4
  • Drupal >= 8.2.3

Not all plugins and modules run on PHP 7.1.0 yet, so if you’re planning to migrate from 7.0 (or an earlier version) to 7.1, proceed with caution. You might want to make a copy of your sites to try out PHP 7.1 first, and only take it live if all goes well.

TIP: If you are using the WordPress Toolkit on Plesk Onyx, the next major version of the toolkit will let you easily duplicate your website and test the new PHP version on the copy first!

Check your version of PHP

On your Plesk control panel, go to Tools &amp; Settings – PHP Settings.

PHP 7.1.1 - Plesk PHP Settings

Install PHP 7.1 in Plesk

Go to Tools &amp; Settings – Updates and Upgrades – Add / Remove Components. Select PHP 7.1 and click the “Continue” button.

Plesk Php 7

After the installation process, you must activate the new version for your domains. Go to Websites & Domains, select your domain and click on the PHP Settings icon. On the next page, you’ll define the PHP version and other parameters, such as the memory limit or the execution time.

Plesk - Activate PHP version 7.1

Select the new version, set your preferred performance and common settings, and click “Save”. If you’ve successfully updated your Plesk PHP version, you’ll see 7.1.1 under the PHP Settings icon.

Plesk PHP - version 7.1 activated

Note: As mentioned above, not all applications or plugins support the latest PHP version. If an application or plugin is vital to your website, be prepared to switch back to the previous PHP version.

Now go forth and try the newest version of PHP, and enjoy the improved performance and decreased memory usage!

HTTP/2 – Does it improve site performance?

HTTP/2 and Plesk - improve site performance

There are only two groups of web developers: those who already use HTTP/2 to boost website performance, and those who are getting ready to use it on their next project. If you haven’t heard about HTTP/2 yet, you have lots of catching up to do. Let’s get started.

So what is HTTP/2? Is it just a marketing buzzword or is there actually more than that?

HTTP/2 is the latest version of the famous HTTP network protocol. HTTP stands for Hypertext Transfer Protocol, which is used by the World Wide Web. This protocol makes it possible to distribute text and media information via so-called web links between otherwise unconnected nodes, such as a browser and a server. For instance, your browser used this protocol to load this blog article. So without HTTP, there wouldn’t be a World Wide Web!

Before diving into the advantages of HTTP/2 and explaining why it will speed up your site, let’s first understand how data is transferred between independent systems.


The network protocol HTTP

HTTP uses a client-server model. This means that your browser (Firefox, Chrome etc.) is the client and our blog application running on a hosting computer is the server. This article can be identified and loaded via a fixed Uniform Resource Locator (or, shorter and more familiar: URL). If you open the URL of this article, your client makes an HTTP request to the server and retrieves the information in HTML format. Once the transfer (over a transport layer, commonly TCP) is complete, your browser renders the received HTML response into the output that you are looking at right now!

History fact: The term “hypertext” was first used by Ted Nelson in 1965 (Xanadu Project). HTTP and HTML were invented by Tim Berners-Lee and his team at CERN in 1989. BTW, the first website was published on August 6th 1991.

The network protocol supports sessions and authentication. A session is an open sequence of request-response transactions over a TCP connection to a specific port: port 80 is used for HTTP and port 443 for HTTPS connections. HTTPS is HTTP over SSL/TLS, meaning the end-to-end connection is established through an encrypted channel using Transport Layer Security (TLS) as the cryptographic protocol.

HTTP/1.0 and HTTP/1.1

Before HTTP/2 was introduced as a standard, HTTP/1.1 was the official standard. HTTP/1.1 is a revision of the original HTTP/1.0 version, which was officially introduced in 1996. The initial revision of HTTP/1.1 followed in 1997; improved and updated revisions were released in 1999 and again in 2014. The main difference between the two older standards is the handling of connections per request. HTTP/1.0 only supports a single connection for each resource request, whereas HTTP/1.1 allows the same connection to be reused multiple times, a persistent connection so to speak. This results in lower latency (the time delay between the request as cause and the response as effect), which helps modern websites load faster. This was improved even further in HTTP/2, but I will explain the main advantages of the new standard later!

HTTP request methods in detail

I talked about requests to the server previously. HTTP defines several request methods that can be used for different purposes and actions on the identified resource. The most common verbs (as the methods are also called) are GET and POST which should be familiar to you.

http2 headers

When you call a URL by following a normal link, your browser performs a GET request. You can see GET parameters directly in the URL, e.g. ?id=42; here the GET variable is id and has the value 42. When you sign into a service by entering your credentials into a form and clicking the submit button, your client performs a POST request. Besides these methods, HTTP supports several others which are typically not used by your browser while surfing the internet:

  • HEAD (like GET but without response body),
  • PUT (modifies or creates resource),
  • DELETE (deletes resource),
  • TRACE (echoes request),
  • OPTIONS (returns supported HTTP methods),
  • CONNECT (converts request to a TCP/IP tunnel),
  • PATCH (applies modifications to resource).
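For illustration, a GET request with the ?id=42 parameter from above looks roughly like this on the wire (host and header values are examples):

```http
GET /article?id=42 HTTP/1.1
Host: www.example.com
Accept: text/html
Accept-Encoding: gzip, deflate
```

The first line names the method, the resource and the protocol version; the following lines are the request header fields discussed below.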

HTTP responses and HTTP status codes

Let’s also take a brief look at responses. The server’s response to a request does not only contain the response body (the HTML code for the output of the loaded page) but also the response header fields. These fields carry important information and parameters about the HTTP transaction in the established connection, e.g. the encryption algorithm used or cache mechanisms. For the sake of completeness, I should mention that such parameters are also sent by the client in the request header fields.

http status codes

The first line of the HTTP response always contains the so-called status code, which helps the client handle the response properly. Who doesn’t know the notorious “Server Error 500” message? Exactly, this is status code 500, sent by the server to the browser due to internal problems. There are several main categories that can be recognized by the first digit:

  • 1[xx] – Informational,
  • 2[xx] – Successful,
  • 3[xx] – Redirection,
  • 4[xx] – Client Error,
  • 5[xx] – Server Error.
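Putting it together, a successful response starts with the status line, followed by header fields, a blank line and the body (the values here are purely illustrative):

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: max-age=604800

<!DOCTYPE html>...
```

Note how the Cache-Control header from the browser-caching section reappears here: the status line and the header fields together tell the client what it received and how to treat it.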

Advantages of HTTP/2

HTTP/2 keeps most of the high-level syntax of the HTTP/1.1 revision; for instance, the request methods and status codes are the same. The most important change is the way data packets are framed and transported between the nodes.

The server can push data to the client that the browser has not yet requested but will need to render the page completely. Additionally, requests can be multiplexed (several requests and responses combined) and pipelined (multiple requests sent without waiting for the corresponding responses) over one single TCP connection. These improvements decrease latency, which leads to significantly better page load speed.

So, what do you need to enjoy the HTTP/2 advantages? Both client and server have to understand and support the standard. All popular modern browsers have fully implemented HTTP/2 support by now, and your browser will automatically load web pages over HTTP/2 if the server supports it.
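If you manage the web server configuration yourself, enabling HTTP/2 in nginx (version 1.9.5 or later) amounts to one extra word on the TLS listener. A sketch, with placeholder domain and certificate paths:

```nginx
server {
    # HTTP/2 is negotiated over TLS, so it is enabled on the SSL listener
    listen 443 ssl http2;
    server_name www.example.com;

    ssl_certificate     /etc/ssl/example.pem;   # placeholder paths
    ssl_certificate_key /etc/ssl/example.key;
}
```

Browsers that don’t speak HTTP/2 simply fall back to HTTP/1.1 over the same listener.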

How do I activate HTTP/2 on my server? Simply use Plesk!

This is really easy! As always, Plesk does the hard work for you, while you relax and concentrate on your business. If you already have Plesk on your server, then you are just a few clicks away from supporting the modern, fast network standard.

Activate HTTP2 with Plesk

The Plesk team created the Security Advisor extension, which supports one-click activation of HTTP/2 alongside other important measures such as one-click SSL and HTTPS activation for WordPress. Open the Plesk Extension Catalog in Plesk and install Security Advisor. The extension is completely free and will make your website not only more secure but also faster!

So long, thanks for reading and now I need to GET /coffee HTTP/2.0! 🙂

Develop Plesk Extensions Series: Submit your extension to the official Plesk Extensions Catalog

In the first three parts of this series we learned how to install Plesk locally, set up the development environment properly and write our first Plesk extensions. Today I will show you how to submit your shiny new extension to the official Plesk Extension Catalog. It’s up to you: make Plesk users happy and become famous!

Part 4 – Submit your extension to the official Plesk Extension Catalog

It’s not really hard to submit an extension to Plesk’s Extension Team if you follow a checklist that the team has provided on their submission page. Let’s go through the list together and clarify each point:

Your extension should be packaged in a zip archive.

This one is easy. Select all files that belong to the extension and compress them into a zip archive. You can use the built-in compression feature of your OS or an external archiving program for this purpose.

Tip: If you are using a Mac (macOS), you can use the free app YemuZip with its “PC” option to keep the macOS-specific hidden files and folders out of the archive.

The archive should not contain unused files. When the extension is uploaded in Plesk there should be no warnings about unused files found.

See my comment above. Make sure that you don’t add folders or files that are not used by Plesk, e.g. the PhpStorm project folder. You should test the upload and installation process on each Plesk and OS version before submitting your extension!

The archive should contain a meta.xml file with a valid description.

Without the meta.xml, your extension won’t work. The XML code must be both well-formed and valid, meaning it follows a defined structure. Check existing extensions or use the extension stubs to create a correct manifest file.
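A minimal manifest follows roughly this shape (the element set shown is a sketch based on common Plesk extension manifests; check the official Extension Structure documentation for the authoritative list):

```xml
<?xml version="1.0" encoding="utf-8"?>
<module>
  <id>my-extension</id>
  <name>My Extension</name>
  <description>Adds a handy shortcut to Websites &amp; Domains.</description>
  <version>1.0</version>
  <release>1</release>
  <vendor>Your Name</vendor>
  <url>https://example.com</url>
</module>
```

Any XML validator (or simply uploading the package to a test Plesk instance) will tell you quickly whether the file is well-formed.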

Extension description should contain at least one sentence. It’s recommended to use two or three sentences for proper explanation of the idea behind your extension.

I think this is self-explanatory. The better your description is, the more likely users will install and use your extension! Describe exactly what the extension contains and what the user can do with it.

Extension name and description should be in English. It’s possible to provide translations into other languages as long as English is present.

Always use English in your extension and ship localized language files directly in your package; place the files in plib/resources/locales. If you share your code on a collaboration platform such as GitHub, it also makes sense to write your comments in English. Actually, that always makes sense! 😉

Extension UI must have full English language support. It’s possible to provide UI translations into other languages as long as English is present.

See my comment above!

Your extension should have icons in .png format.
32×32 icon location: _meta/icons/32×32.png
64×64 icon location: _meta/icons/64×64.png

Download a free icon or create your own with an image editor or an online icon generator.

Your extension should have at least one screenshot (1024×768) in .png format. It’s possible to provide up to three screenshots.
Screenshots location: _meta/screenshots/
Screenshot names should be: 1.png, 2.png, 3.png

Create screenshots of your extension and add them to the package. You may provide at most three screenshots, so try to depict the most important parts of your extension; good screenshots help users choose it!

If your extension works only on Linux or Windows (or was tested on only one platform), this should be stated in meta.xml.

Add this information to the description and also to the email that you write to the Plesk team. This is important, otherwise the team will test your extension on the other operating system as well and reject your submission. For instance, if your extension only runs on Unix systems, then you should state this in the meta.xml.


Your extension should only use API calls described in official documentation. Private API calls not described in the documentation should not be used.

Use the API calls provided by the Plesk developers. Take a look at part 2 of this tutorial, where I describe how to add the API stubs to your project.

Don’t use external calls that the team cannot verify. If you integrate your own private service, make the API transparent and provide all needed details to the Plesk team so they can check your integration thoroughly.

Last but not least, write good, clean code; this makes it easier for the team to check it and speeds up inclusion in the catalog.

Plesk Extension team will review your extension and contact you with the result

Once submitted, the team will review your extension and either add it to the catalog or email you a list of issues to correct. Don’t hesitate to contact the team at any time if you have questions about the submission process.

You can find the submission form and all important information here: Submit Your Extension.

All right, I wish you a great journey with Plesk and I’m looking forward to some great extensions from you.

Stay Plesky!

Develop Plesk Extensions Series: My first Plesk extension

My First Plesk Extension

How to develop Plesk extensions

In the first part (Install a local version of Plesk) we installed Plesk locally, and in the second part (Create extension stub and IDE project) we prepared our development environment for writing great Plesk extensions. This blog post gives you a brief introduction to writing a simple extension. In the screencast video I write a Hello World extension within 5 minutes, and in the article I explain the basics with the Pizza extension that I’ve created for this purpose.

Part 3 – My first Plesk extension

Relax and lean back; let me show you how to write a small Hello World extension using the previously created extension stub. In this video you will also see what you can do if you encounter permission problems while using the deployment feature within PhpStorm. This problem occurs if you once upload the extension within the Plesk instance while logged in as a different user than the SFTP user you use to update the modified files. We can solve this issue directly in the terminal, using chown for folders and chmod for specific files. Here we go:

Easy, isn’t it? Now let me explain the Pizza extension in more detail. The Pizza extension was the first one I wrote to get into the development of Plesk extensions. Though it is a small extension, it contains many of the basics that you will need for your own extension, such as saving an option value to the database or showing a custom button in the main view.

What does the Pizza extension actually do? It adds a link to your favorite pizza delivery service directly to the main page Websites & Domains. You may specify your own pizza link in the settings. This means that you can use the extension to set a bookmark to any page.

Let’s take a look at a simple Plesk extension

Download the Pizza extension – you can use the ready-to-go package linked above or get it from the official Plesk GitHub repository. Unpack the archive in your working directory and add this folder in PhpStorm as a new project. You will see the typical structure that you already encountered when creating an extension stub via the command line, which I described in part 2 of this tutorial (with small exceptions).

Let’s go through all the files and see what they contain. Use your IDE to navigate through them. If you’ve set up your environment properly, you will be able to use auto-completion and see the documentation of the classes and functions used.


  <name>GET Your Pizza!</name>
  <description>Set a link to order a delicious pizza directly from Plesk!</description>

This file is the manifest. It contains all information about the extension, such as the ID, name, description and version. Keep this file updated (version and release) whenever you update your extension! Please refer to the official documentation to understand the extension structure in more depth. See the following page for a detailed description of all parameters: Extension Structure


This folder contains screenshots and icons for the catalog. It is required for getting listed in the official Plesk extension catalog. If you develop just for yourself, you don’t need to create this folder.


$application = new pm_Application();

Here you can find the entry point file (index.php) and the icon that is used to set a custom button. See also part 2 of the tutorial where I’ve described the structure in more detail.


This folder contains the logic of the extension. Here you will find all important PHP and HTML files that are required for the extension.

It is important that you understand the correct structure and naming conventions of Plesk extensions. They follow Zend Framework practices, including the MVC pattern. Please read Step 1 of the following page before you proceed with this tutorial: Exercise 1. Tabs, Forms, Lists, Tools


public function init()
{
    parent::init();

    // Init title for all actions
    $this->view->pageTitle = $this->lmsg('page_title');
}

public function indexAction()
{
    // Default action is formAction
    $this->_forward('form');
}

This is the main controller of our extension. It extends the abstract class pm_Controller_Action that is used for all extension controllers. With init() we initialize the controller. In the Pizza extension I use formAction() as the default action, so I forward to it from indexAction().

public function formAction()
{
    // Set the description text
    $this->view->output_description = $this->lmsg('page_title_description');

    // Init form here
    $form = new pm_Form_Simple();
    $form->addElement('text', 'pizzalink', [
        'label' => $this->lmsg('form_pizzalink'),
        'value' => pm_Settings::get('pizzalink'),
        'style' => 'width: 40%;',
    ]);
    $form->addControlButtons(['cancelLink' => pm_Context::getModulesListUrl()]);

    // Process the form - save the pizza link
    if ($this->getRequest()->isPost() && $form->isValid($this->getRequest()->getPost())) {
        if ($form->getValue('pizzalink')) {
            $this->_pizzalink = $form->getValue('pizzalink');
        }

        pm_Settings::set('pizzalink', $this->_pizzalink);

        $this->_status->addMessage('info', $this->lmsg('message_success'));
        $this->_helper->json(['redirect' => pm_Context::getBaseUrl()]);
    }

    $this->view->form = $form;
}

In formAction() we prepare the form for the settings page and set some variables that we will use in the view file. Additionally, the POST request that is sent after a click on the save button is processed here.


public function getButtons()
{
    $buttons = [[
        'place'       => self::PLACE_DOMAIN,
        'title'       => 'GET Your Pizza',
        'description' => 'One click away from a delicious pizza!',
        'icon'        => pm_Context::getBaseUrl() . 'images/icons/pizza-icon.png',
        'link'        => pm_Settings::get('pizzalink'),
        'newWindow'   => true,
    ]];

    return $buttons;
}

In this file the custom button is created. The place key in the buttons array defines the position of the button. Here we use self::PLACE_DOMAIN to display the button in the right sidebar of the domain overview page. Take a look at the pm_Hook_CustomButtons class to see all possible places.

The Plesk team created a demo extension with many examples of how to use the custom button feature. Get it here: Custom Buttons


$messages = [
    'message_success'        => 'Your pizza link was saved successfully!',
    'page_title'             => 'GET your pizza directly from Plesk!',
    'page_title_description' => '<p>Let\'s get cheesy! :-) </p><p>This extension adds a link to your favorite pizza delivery service directly to the main page "Websites & Domains". You may specify your own pizza link in the settings.</p><p>Be Plesky And Enjoy Your Pizza!</p>',
    'form_pizzalink'         => 'Set your own pizza link',
];

This is the language file; the default language is en-US. In this file we define all translation strings in an array. You can see how the strings are used in IndexController.php, for example in the form creation call $this->lmsg('LANGUAGESTRING').
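Conceptually, lmsg() simply looks the key up in this array. The following is a simplified, self-contained illustration of that lookup, not the actual SDK implementation (the real call also handles locale selection and fallback to the default language):

```php
<?php
// Simplified sketch of a language-key lookup, in the spirit of lmsg().
$messages = [
    'page_title'     => 'GET your pizza directly from Plesk!',
    'form_pizzalink' => 'Set your own pizza link',
];

function lookupMessage(array $messages, string $key): string
{
    // Fall back to the bracketed key so missing translations are visible.
    return $messages[$key] ?? '[' . $key . ']';
}

echo lookupMessage($messages, 'page_title');     // GET your pizza directly from Plesk!
echo lookupMessage($messages, 'does_not_exist'); // [does_not_exist]
```

Keeping all user-visible strings in this file means translators only ever touch the locale files, never the controller code.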


pm_Settings::set('pizzalink', '');

This file is executed only once, directly after the installation process. I use this call to preset the pizzalink key in the settings of the extension.


<?php echo $this->output_description; ?>
<?php echo $this->form; ?>

The view files are used to create the output. Here we display the content that was created in the controller (description and form).

Learn from official sources how to write a good extension

That’s all – we are done! As you can see, it is quite easy to get started. Please read the official documentation to learn more about developing Plesk extensions, and take a look at the example extensions in the Plesk GitHub account.

Still reading? Stop now and write your own cool extension. In the next part I will show you how to submit your extension to the official Extensions Catalog. Make all Plesk users happy with your contribution!

Stay Plesky and enjoy coding!