My Plesk User Experience (2): Lessons learned from testing Plesk Onyx


Then Plesk Onyx came along with NGINX caching built in. Naturally, I was curious, so I removed all my customizations and started comparing website performance with the built-in NGINX caching, other caching methods, and the Speed Kit website acceleration extension.

These are the tests and configurations I ran on the platform:

# | Platform                                 | Web Server Configuration                                     | Caching Engine Configuration
1 | WordPress website on Plesk Onyx 17.8.11  | Proxy Mode and Smart static files processing turned ON       | NGINX caching OFF
2 | WordPress website on Plesk Onyx 17.8.11  | Proxy Mode and Smart static files processing turned ON       | NGINX caching ON
3 | WordPress website on Plesk Onyx 17.8.11  | Proxy Mode and Smart static files processing turned ON       | NGINX caching OFF, Redis caching ON
4 | WordPress website on Plesk Onyx 17.8.11  | Proxy Mode and Smart static files processing turned ON       | NGINX caching ON, Redis caching ON
5 | WordPress website on Plesk Onyx 17.8.11  | Proxy Mode and Smart static files processing turned ON       | NGINX caching OFF, Speed Kit extension ON
6 | WordPress website on WordPress.com       | Everything in default mode                                   | n/a
7 | WordPress website on VestaCP             | NGINX Web Template turned ON with the wordpress2 option      | n/a

I installed the Plesk server (version 17.8.11, update 25) on a DigitalOcean droplet running CentOS 7 with 2 GB RAM. Next, I installed the Redis server with its stock configuration and plugged in the Redis Object Cache plugin with its default settings. I added no extra parameters in the Additional NGINX directives field.
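For reference, here's a minimal sketch of that Redis setup on CentOS 7. It assumes the EPEL repository; package and service names may differ on other distributions:

yum install -y epel-release
yum install -y redis
systemctl enable --now redis
# quick sanity check on the default port 6379; should answer PONG
redis-cli ping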

PHP was version 7.2.10 with default settings, running in the “FPM application served by NGINX” mode. The VestaCP server was installed on a DigitalOcean droplet running Ubuntu 16.04.

As a test page, I used a typical blog post with lots of photos (hosted both on the server and externally), a small chunk of text, and one comment.

Testing on the Plesk Onyx Platform


For testing, I used the httperf command line tool (with the same launch parameters) and a well-known online testing system GTmetrix.com. From the GTmetrix.com reports, I chose the following parameters:

  • Time to First Byte (TTFB) – the total amount of time spent to receive the first byte of the response once it has been requested. It is the sum of “Redirect duration” + “Connection duration” + “Backend duration”. This metric is one of the key indicators of web performance.
  • Backend duration – once the connection is complete and the request is made, the server needs to generate a response for the page. The time it takes to generate that response is the Backend duration.
  • Fully Loaded Time and RUM Speed Index – page load metrics indicating how fast the page fully appears. The lower the value, the better.
  • PageSpeed Score
  • YSlow Score
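As a quick aside (not part of the original test runs), TTFB is easy to spot-check from the command line with curl, for example against the test page used below:

curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" https://jam.pavuk.su/index.php/2018/10/03/kgd/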

The httperf utility was launched with the following parameters:

httperf --hog --server jam.pavuk.su --uri=/index.php/2018/10/03/kgd/ --port=443 --wsess=100000,5,2 --rate=1000 --timeout=5

This creates 100,000 sessions (5 calls each, 2 seconds apart) at a rate of 1,000 per second. The following httperf metrics were the most interesting:

  • Connection rate – the actual speed of creating new connections. It showed the server’s ability to process connections.
  • Request rate – the speed of processing requests, in other words the number of requests a server can execute per second. It showed web application responsiveness.
  • Reply rate – the average number of server replies per second.

Plesk Onyx Test Results


Clearly, there’s an ocean of tools and solutions for testing website performance, some more complete and respected than others. But even the tools I used allowed me to draw reasonably objective conclusions. The test results are summarized in the table below, with green highlighting the best values of each parameter and red the worst.

Plesk Onyx test results table

And so, after analyzing the received data, we can conclude the following:

  1. Unchanged PageSpeed and YSlow Scores
    PageSpeed and YSlow scores remain exactly the same in Plesk, no matter the configuration. They don’t depend on caching or other server settings, but rather on factors such as code optimization, image size, gzip compression and CDN usage.
  2. Caching is essential for speed
    No caching on Plesk at all gives the worst time metrics: Fully Loaded Time and TTFB increase dramatically. Websites with caching turned off are significantly slower.
  3. NGINX and Redis are a successful combo
    Comparing caching methods, the NGINX caching used in Plesk seems better than Redis Cache. It’s possible the default Redis Cache configuration doesn’t allow higher performance. It’s not entirely clear how the combination of both caching tools works together, but it delivers quite decent TTFB and Backend duration metrics.
  4. WordPress performance suffers
    WordPress.com shows the worst performance results, although its default optimization for the PageSpeed Score isn’t actually bad.
  5. Vesta and NGINX mean extremely fast page load
    Using the lightweight Vesta control panel with the NGINX Web Template + php-fpm (wordpress2) option designed for WordPress hosting gives great speed results. What’s more, VestaCP ships custom NGINX web templates for WordPress hosting, including NGINX caching support.

Moving to a new DigitalOcean Droplet

Plesk on Digital Ocean droplet - install - now a one-click app

I deployed Plesk to the new DigitalOcean droplet using the Web Installer, as it doesn’t require going to the server via SSH; everything is done in the web interface. This recent migration from my VPS to a new DigitalOcean droplet gave me fresh data for my latest Plesk experience. All in all, the migration was successful, with minor warnings that I mostly resolved using the migration wizard’s suggestions. The bottom line is that Plesk, with its key features and settings turned on, gives very good results for your website.

Also, I strongly recommend turning on NGINX caching in Plesk if you’re seeking a simple and reliable way to speed up your website. You won’t need to set up any difficult configurations. And web pros can make the most of Plesk by fine-tuning it as they see fit; that’s what it’s made for.

Finally, my story was aimed at people without professional knowledge who simply want to use built-in Plesk features. So I hope it will be a good reason for you to log in to Plesk and take a fresh look.

My Plesk User Experience (1): Easy Starts and Common Issues


Plesk first crossed my path when it came packaged with web hosting acquired from a Russian provider. At the time it was version 12.0, but I never paid any attention to it until I discovered that part of its service was domain name registration.

Starting Off with Plesk

It couldn’t hurt to register a couple of domains for myself, and so I did. I added them to Plesk, and configured the DNS records. Now these websites loaded default web pages. Then, as I already had websites hosted in Plesk, I thought “Why not use mailboxes registered on my own domains?”. So I went and created a couple of mailboxes and configured Roundcube webmail.

But it was all just personal use until I occasionally started using this infrastructure as a sort of test server, mainly to work through tasks related to questions from forum users. My Plesk server operated like this for a while without any new use cases. That is, until the start of 2017, when I spontaneously took a closer look at something I had available but which had been lying there mostly unused the whole time.

Easy Building on the Plesk Platform


I realized that I could now use my own platform for my personal blog. It didn’t take me long to choose WordPress as I had previous experience with it. What’s more, the new Plesk Onyx had integrated its WordPress Toolkit, which looked promising. After getting a license with additional extensions, I started building – themes, plugins, you name it, before publishing my first posts.

Plesk is also built for multiple domains. So when my famous American Instagrammer friend needed a website to develop her “Travelling with kids” idea, I offered my hosting platform.

Within Plesk, I created a personal account for her and subscriptions with two domains. One was used to host her website, and the other to host her personal mail.

She quickly learned how to use the WordPress admin dashboard and Plesk. She created mailboxes, installed WordPress plugins and themes, then created posts and moderated comments, which I believe says a lot about how easy Plesk’s interface is.

As thousands of subscribers were actively visiting both our blogs, it was time to pay more attention to Plesk server maintenance. And later, to server optimization, creating regular work in the Plesk interface and even more in the Linux command line. But more on that later. Before that, there were common issues of all sorts that I had started to face.

Issues uncovered and solved by using Plesk

  • Service downtime
    Various services like httpd and MySQL stopped every now and then. I managed to solve this by turning on and configuring Watchdog.
  • Memory usage
    Then Health Monitor started constantly notifying me about MySQL’s RAM consumption.
  • Basic MySQL settings
    I optimized MySQL’s operation modes via the CLI and thought it would help to have at least some basic MySQL optimization settings in the Plesk interface (a rough sketch of the kind of tuning I mean is shown after this list). Eventually, the VPS RAM was increased from 1 to 2 GB, which solved the issue.
  • Frequent updates
    Email notifications about new WordPress plugin versions made me log in to Plesk often. I’m the “update-it-all” type and very meticulous about installing the latest software versions. The Smart Updates feature in WordPress Toolkit solved this task.
  • Extensions accessibility
    I used to find accessing my installed extensions inconvenient. So it was great when the WordPress Toolkit and other installed extensions got icons in the left menu.
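As promised above, here is a rough sketch of the kind of basic MySQL tuning involved. The values are illustrative assumptions only, not the exact settings I used, and they need to be sized to your own server’s RAM:

# append basic settings to /etc/my.cnf (illustrative values), then restart MariaDB
cat >> /etc/my.cnf <<'EOF'
[mysqld]
innodb_buffer_pool_size = 512M
max_connections = 150
table_open_cache = 2000
EOF
systemctl restart mariadb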

Speeding up and hardening the WordPress Website


During an internal contest for the best WordPress website hosted in Plesk, I focused on two goals: making my WordPress website the fastest and the most secure.

To achieve an A+ grade on ssllabs.com, special NGINX parameters became necessary. They were added via the Additional nginx directives field and the /etc/nginx/conf.d/ssl.conf file. Maximizing the speed of my NGINX-powered website was a separate matter.
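For illustration, the directives placed in such a file are typically along these lines; the exact values I used aren’t reproduced here, and the values below are assumptions, so check current best practices before copying anything:

ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;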

At that time, NGINX caching wasn’t yet implemented in Plesk. So I tried various caching solutions, such as Redis, Memcached, and the very same NGINX caching, all via the CLI, of course, with customized settings.

It didn’t take long to realize the NGINX version shipped with Plesk was not suited to trendy acceleration technologies like caching, the Brotli compression method, the PageSpeed Module, or TLS 1.3. The Plesk Forum also raised this issue, as it seemed to occupy the minds of advanced users.

The result was that forum users published various ways to compile the latest NGINX versions supporting these modern technologies and to substitute the NGINX version shipped with Plesk with a custom build. I joined them in compiling and optimizing NGINX builds for my Plesk server, all during the contest.
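To give a flavour of what such a custom build involves, here is a heavily simplified sketch for CentOS 7. The NGINX version and configure flags are illustrative assumptions, and actually replacing the Plesk-shipped (sw-nginx) binary with the result is a further step not shown here:

# build dependencies
yum install -y gcc make git pcre-devel zlib-devel openssl-devel
# fetch the Brotli module and an NGINX source tarball (version is illustrative)
git clone --recursive https://github.com/google/ngx_brotli.git
curl -O https://nginx.org/download/nginx-1.15.5.tar.gz
tar xzf nginx-1.15.5.tar.gz && cd nginx-1.15.5
# configure with SSL, HTTP/2 and Brotli support, then compile
./configure --with-http_ssl_module --with-http_v2_module --add-module=../ngx_brotli
make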

In the end, I got the speedy WordPress site I wanted powered by customized NGINX with Redis caching. All was well until Plesk Onyx was released. See what happened next in part 2 of my Plesk experience story tomorrow.

Using Elastic Stack for Data Analysis and XenForo for Forums

Why use XenForo for Forums?

Many big organizations and companies use forums to engage with their communities. Unlike popular social networks, a forum helps strengthen the community at a higher level. With forums, you get:

  • More accurate data structuring.
  • Means to use powerful tools to retrieve information.
  • Ability to use advanced rating and gamification systems.
  • Power to use moderation and anti-spam protection.

In this article, we’ll explain how to deploy a forum using the modern XenForo engine, with caching based on Memcached and the powerful ElasticSearch search engine. These services will run inside Docker containers, deployed and managed via the Plesk interface.

In addition, we’ll talk about ways you can use Elastic Stack (ElasticSearch + Logstash + Kibana) to analyse data in the context of Plesk. This will come in handy when analysing search queries or server logs on the forum.

How to Deploy the XenForo Forum on Plesk

Adding a Database

  1. First, create a subscription for the domain forum.domain.tld in Plesk.
  2. Then, in the domain’s PHP Settings, select the latest available PHP version (at the time of writing: PHP 7.1.10).
  3. Go to File Manager. Delete all files and directories in the website’s httpdocs except for favicon.ico.
  4. Upload the .ZIP file containing the XenForo distribution (Example: xenforo_1.5.15a_332013BAC9_full.zip) to the httpdocs directory.
  5. Click “Extract Files” to unpack the .ZIP file. Then, select everything in the unpacked archive and click “Move” to transfer the .ZIP file contents to the httpdocs directory. You can delete the upload directory and the xenforo_1.5.15a_332013BAC9_full.zip file afterwards – you won’t need those anymore.
  6. In the forum.domain.tld subscription, go to Databases.
  7. Create a database for your future forum. You can choose any database name, username and password.
  8. And for security reasons, it’s important to set Access control to “Allow local connections only”. Here’s what it looks like:

Installing the Forum

  1. Go to forum.domain.tld. The XenForo installation menu will appear.
  2. Follow the on-screen instructions and provide the database name, username and password you set.
  3. Then, you need to create an administrative account for the forum. After you finish the installation, you can log into your forum’s administrative panel and add the finishing touches.
  4. Speed up your forum significantly by enabling memcached caching technology and using the corresponding container from your Plesk Docker extension. But before you install it, you need to install the memcached module for the version of PHP used by forum.domain.tld. Here’s how you compile the memcached PHP module on a Debian/Ubuntu Plesk server:

# apt-get update && apt-get install gcc make autoconf libc-dev pkg-config plesk-php71-dev zlib1g-dev libmemcached-dev

Compile the memcached PHP module:

# cd /opt/plesk/php/7.1/include/php/ext
# wget -O phpmemcached-php7.zip https://github.com/php-memcached-dev/php-memcached/archive/php7.zip
# unzip phpmemcached-php7.zip
# cd php-memcached-php7/
# /opt/plesk/php/7.1/bin/phpize
# ./configure --with-php-config=/opt/plesk/php/7.1/bin/php-config
# export CFLAGS="-march=native -O2 -fomit-frame-pointer -pipe"
# make
# make install

Install the compiled module:

# ls -la /opt/plesk/php/7.1/lib/php/modules/
# echo "extension=memcached.so" >/opt/plesk/php/7.1/etc/php.d/memcached.ini
# plesk bin php_handler --reread
# service plesk-php71-fpm restart

Run Memcached Docker

  1. Start by opening the Plesk Docker extension. Then, find the memcached Docker image in the catalog in order to install and run it. Here are the settings:
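The settings are applied in the Plesk Docker extension UI (automatic port mapping for 11211). As an assumed, roughly equivalent sketch of what this amounts to on the docker command line:

# run the official memcached image and map its default port to the host (sketch only)
docker run -d --name memcached -p 11211:11211 memcached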


2. This should make port 11211 available on your Plesk server. So you can verify it using the following command:

# lsof -i tcp:11211
COMMAND    PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
docker-pr 8479 root    4u  IPv6 7238568      0t0  TCP *:11211 (LISTEN)

Enable Memcached caching for the forum

  1. Go to File Manager and open the file forum.domain.tld/httpdocs/library/config.php in Code Editor.
  2. Add the following lines to the end of the file:

$config['cache']['enabled'] = true;
$config['cache']['frontend'] = 'Core';
$config['cache']['frontendOptions']['cache_id_prefix'] = 'xf_';
// Memcached
$config['cache']['backend'] = 'Libmemcached';
$config['cache']['backendOptions'] = array(
    'compression' => false,
    'servers' => array(
        array(
            'host' => 'localhost',
            'port' => 11211,
        )
    )
);

3. Also, make sure that your forum is working correctly. You can verify that caching is working with the following command:

# { echo "stats"; sleep 1; } | telnet localhost 11211 | grep "get_"
STAT get_hits 1126
STAT get_misses 37
STAT get_expired 0
STAT get_flushed 0

Add ElasticSearch search engine

You can improve your XenForo forum even further by adding to it the powerful ElasticSearch search engine.

  1. First of all, you need to install a XenForo plugin called XenForo Enhanced Search and the ElasticSearch Docker container. Note that the Docker container requires a significant amount of RAM to operate, so make sure that your server has enough memory. You can install the XenForo Enhanced Search plugin by downloading and extracting the .ZIP file via Plesk File Manager.
  2. Read the XenForo documentation to learn how to install XenForo plugins. Once you’re done, the ElasticSearch search engine settings should appear in the forum admin panel:

3. In order to get the search to work, you need to install the ElasticSearch Docker container in the Plesk Docker extension with the following settings:
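Again, the settings are applied through the Plesk Docker extension UI. A rough docker CLI equivalent might look like this; the image tag and Java heap size are illustrative assumptions (remember that ElasticSearch needs plenty of RAM):

# run ElasticSearch and expose its HTTP API port to the host (sketch only)
docker run -d --name elasticsearch -p 9200:9200 -e ES_JAVA_OPTS="-Xms512m -Xmx512m" elasticsearch:5.6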

4. Then, verify that port 9200 is open for connection using the following command:

# lsof -i tcp:9200

5. After that, you need to make sure that ElasticSearch is connected and create a Search Index in the forum administration panel:

Congratulations! You’ve done it. You’ve set up a forum based on the modern XenForo engine supplemented with a powerful search engine and accelerated caching based on Memcached.

Improve your XenForo Forum Further with Kibana

You can make your forum even better and add the ability to analyse forum search queries with Kibana. To do this, follow the steps below:

  1. You can either use a dedicated Kibana-Docker container or a combined Elasticsearch-Kibana-Docker container.
  2. You’ll also need to install a patch for the XenForo Enhanced Search plugin. This creates a separate ElasticSearch index that stores searches and can be analysed using Kibana. Here’s an example of Tag Cloud Statistics of keywords used in search queries:

Downloading the Patch File

You can download the patched file for version 1.1.6 of the XenForo Enhanced Search plugin. Replace the original file found in httpdocs/library/XenES/Search/SourceHandler with the file you downloaded. In addition to the search index, ElasticSearch will create a separate index named saved_queries which will store search queries to be analysed by Kibana.

Another promising approach is to replace the standard web statistics components in Plesk (Awstats and Webalizer) with a powerful analysis system based on Kibana. There are two options for sending vhost logs to ElasticSearch:

  1. Using Logstash, another component of the Elastic Stack.
  2. Using the rsyslog service with the omelasticsearch.so plugin installed (yum install rsyslog-elasticsearch). This way, you can directly send log data to ElasticSearch. This is very cool, because you do not need an extra step like with Logstash.

Important Warning:

The logs must be in JSON format for ElasticSearch to store them properly and for Kibana to parse them. However, you can’t change the log_format nginx parameter on the vhost level.

Possible Solution:

Use the Filebeat service, which can take the regular log of nginx, Apache or another service, convert it into the required format (for example, JSON) and then pass it on. As an added benefit, this service lets you collect logs from different servers. There are many opportunities to experiment.
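As a hedged illustration of that Filebeat approach, a minimal filebeat.yml might look roughly like this (assuming Filebeat 6.x or later; the log path is illustrative, and parsing individual nginx fields would additionally need the Filebeat nginx module or an ingest pipeline):

filebeat.inputs:
  - type: log
    paths:
      - /var/www/vhosts/system/*/logs/proxy_access_ssl_log
output.elasticsearch:
  hosts: ["localhost:9200"]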

Using rsyslog, you can send any other system log to ElasticSearch to be analysed with Kibana, and it works quite well. For example, here’s an rsyslog configuration, /etc/rsyslog.d/syslogs.conf, for sending your local syslog to Elasticsearch in a Logstash/Kibana-friendly way (running rsyslogd -dn starts the service in debug mode so you can check the configuration):

module(load="omelasticsearch") # for outputting to Elasticsearch

# this is for index names to be like: logstash-YYYY.MM.DD
template(name="logstash-index"
  type="list") {
    constant(value="logstash-")
    property(name="timereported" dateFormat="rfc3339" position.from="1" position.to="4")
    constant(value=".")
    property(name="timereported" dateFormat="rfc3339" position.from="6" position.to="7")
    constant(value=".")
    property(name="timereported" dateFormat="rfc3339" position.from="9" position.to="10")
}

# this is for formatting our syslog in JSON with @timestamp
template(name="plain-syslog"
  type="list") {
    constant(value="{")
    constant(value="\"@timestamp\":\"")   property(name="timereported" dateFormat="rfc3339")
    constant(value="\",\"host\":\"")      property(name="hostname")
    constant(value="\",\"severity\":\"")  property(name="syslogseverity-text")
    constant(value="\",\"facility\":\"")  property(name="syslogfacility-text")
    constant(value="\",\"tag\":\"")       property(name="syslogtag" format="json")
    constant(value="\",\"message\":\"")   property(name="msg" format="json")
    constant(value="\"}")
}

# this is where we actually send the logs to Elasticsearch (localhost:9200 by default)
action(type="omelasticsearch"
  template="plain-syslog"
  searchIndex="logstash-index"
  dynSearchIndex="on"
  bulkmode="on"                 # use the bulk API
  action.resumeretrycount="-1"  # retry indefinitely if Logsene/Elasticsearch is unreachable
)

You can see that ElasticSearch index logstash-2017.10.10 was successfully created and is ready for Kibana analysis:

# curl -XGET http://localhost:9200/_cat/indices?v
health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana             TYNVVyktQSuH-oiVO59WKA   1   1          4            0     15.8kb         15.8kb
yellow open   xf                  JGCp9D_WSGeuOISV9EPy2g   5   1          6            0     21.8kb         21.8kb
yellow open   logstash-2017.10.10 NKFmuog8Si6erk_vFmKNqQ   5   1          9            0       46kb           46kb
yellow open   saved_queries       GkykvFzxTiWvST53ZzunfA   5   1         16            0     43.7kb         43.7kb

You can create a Kibana Dashboard with a custom visualization showing the desired data, like this:

Your community on the XenForo platform

So you can now set up a modern platform for working with the community with the additional ability to collect and analyse all kinds of statistical data.

Of course, this article is not meant to be a comprehensive, “one stop shop” guide. It does not cover many important aspects, like security, for example. Think of this as a gentle nudge meant to spur your curiosity and describe possible scenarios and ways of implementing them. Experienced administrators can configure more advanced settings by themselves.

In conclusion, think of the Elastic Stack as a tool or a construction set you can use to get the result you want. Just make sure to feed it the correct data you want to work with.

How to reduce server load and improve WordPress speed with Memcached

In my last article about Varnish in a Docker container, I explained how to easily activate server-side caching and what advantages you can get from this mechanism. Today, I will show you how to reduce server load and drastically improve your WordPress website speed with Memcached.

Memcached – a distributed memory caching system

Memcached caches data and objects directly in memory (RAM) and reduces the number of times an external source (e.g. the database or API calls) has to be read. This especially helps dynamic systems like WordPress or Joomla! by noticeably improving processing time.

Before we start, keep in mind that Memcached does not have built-in security measures for shared hosting environments! This tutorial should only be used on a dedicated server.

Installing Memcached

On my server, I run Plesk Onyx with CentOS 7.x. This tutorial also applies to other systems, just remember to use the system-specific commands (e.g. apt-get instead of yum). To install Memcached, first access your server via SSH and use the command line:

yum install memcached

After the installation process, we start it with:

service memcached start
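Optionally (not part of the original steps), you can also make sure Memcached starts automatically on boot on a systemd-based system like CentOS 7:

systemctl enable memcached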

Next we have to install PECL Memcached for the corresponding PHP version. WordPress is fully compatible with PHP 7, so let’s activate Memcached for the latest PHP 7.1 version. Start by installing all the necessary packages to add our custom PHP module in Plesk.

yum install make plesk-php71-devel gcc glibc-devel libmemcached-devel zlib-devel

Build the module with these instructions. You don’t have to specify the libmemcached directory manually, simply hit Enter if prompted.

/opt/plesk/php/7.1/bin/pecl install memcached

In the next step, we have to add a line to the corresponding configuration file to register the module in PHP. You can use the command line without having to open the ini file with an editor.

echo "extension=memcached.so" > /opt/plesk/php/7.1/etc/php.d/memcached.ini

And finally, re-read the PHP handlers so that you see the module in the PHP overview in the Plesk GUI.

plesk bin php_handler --reread

You can check the phpinfo()-page now to find out if the memcached module was loaded properly.

Memcached – phpinfo() output

Or directly via the command line:

/opt/plesk/php/7.1/bin/php -i | grep "memcached support"
Memcached – PHP command line check

Secure and monitor your Memcached integration

Memcached uses port 11211 by default. For security reasons, we bind it to localhost only.

Add the following line to the end of the /etc/sysconfig/memcached file and restart the Memcached service.

OPTIONS="-l 127.0.0.1"

To monitor and get some stats from Memcached, you can use the following commands:

echo "stats settings" | nc localhost 11211
/usr/bin/memcached-tool localhost:11211

Activate Memcached in WordPress

Once Memcached is installed on the server, it is easy to activate it in WordPress. First, we need to activate the Memcached backend with a special script that auto-detects whether to use Memcached as the caching mechanism.

Download the script from https://github.com/bonny/memcachy and move all files to the /wp-content/ folder.

If you didn’t change the default port (11211) of Memcached, then you are ready to use it directly. If you’ve changed it, then you will have to add the following code to the wp-config.php (placed in the root of your WordPress instance).

$memcached_servers = array( array( '127.0.0.1', 11211 ) );

Okay, once the backend is activated, we will install a cache plugin to store and serve rendered pages via Memcached. Install the plugin Batcache (https://wordpress.org/plugins/batcache/) following the installation instructions below.

  1. Download and unzip the package
  2. Upload the file advanced-cache.php to the /wp-content/ folder
  3. Open wp-config.php and add the following line:
     define('WP_CACHE', true);
     Important: Be sure that Memcached is enabled properly for the selected PHP version before adding this line, or an error will be thrown!
  4. Upload the file batcache.php to the /wp-content/plugins/ folder

That’s pretty much it! You can open the advanced-cache.php file and adjust the settings for your needs. The file batcache.php is a small plugin that regenerates the cache on posts and pages. Don’t forget to activate the plugin in the backend on the plugin page!

Verify that Memcached works properly in WordPress

Now, let’s verify that you did everything correctly. The easiest way to see whether the rendered page was sent from cache is to add an additional header field to the response.

To achieve it, you have to modify the advanced-cache.php file. Open the file and search for

var $headers = array();

Change this line to

var $headers = array('memcached' => 'activated');

Open the Developer Tools in your browser (F12 in Chrome), select the Network tab and reload your website several times (just to be sure the page is loaded from the cache) and check the response headers. If you see the memcached header field, then everything is fine!

Memcached – Response Headers Check

Attention: if you are logged into WordPress, the cache is never used and the system always sends the uncached version of the page. So, how can you verify the functionality while logged in? Either log out first, or open a new Incognito / Private window in your browser and use the Developer Tools there.

Instead of examining the headers, you can also check the source code of the loaded page. If you find lines similar to these, the page was loaded from the cache:

<!--
    generated 207 seconds ago
    generated in 0.450 seconds
    served from batcache in 0.007 seconds
    expires in 93 seconds
-->

Let’s do some stress tests with Blitz.io!

We can test the load performance by stress testing, which will load the website with many concurrent users per second for a certain time span. Without any security and overload protection, your server should start to respond slower until the requests cannot be handled anymore. With Memcached activated, your server should be able to serve intensive requests longer without throwing errors.

Let’s run some load and performance tests with Blitz.io.

Note: For this stress test I took the same small server that I used for the tests with Varnish (only 1 CPU and 500MB Memory)!

Result WITHOUT Memcached:

Stress test – WordPress without Memcached

It is the same result as in the Varnish stress test. As you can see, I had to abort the stress test because the server couldn’t handle the requests less than 5 seconds in, with fewer than 50 concurrent users. After just 15 seconds, the server collapsed completely and no further requests could be handled!

Result WITH Memcached:

Stress test – WordPress with Memcached

As you can see, the Memcached cache allows us to keep the server stable even under heavy load. The small test server handled over 400 concurrent users and responded to all requests for over 50 seconds without any errors. After 50 seconds and almost 450 concurrent users, the server finally became overloaded and stopped accepting further requests. With a more powerful server, the numbers would be much higher.

Therefore, it’s a great idea to use Memcached to keep your website responsive, even when it suffers a simple attack. For real DDoS (Distributed Denial of Service) attacks, you’d be better off protecting your server with CloudFlare ServerShield.

Summary: Memcached for WordPress works perfectly

Memcached can greatly improve the performance of your WordPress website and reduce the CPU-load of your server. It’s easy to set up a working environment and it works out-of-the-box.

Thank you for reading my article, let me know your comments!

Have fun improving the speed of your WordPress website and stay Plesky!