Warning: Fileless attacks are on the rise


Ever heard of fileless attacks? This is when malicious code gets a foothold on your server – not through a file or document, but by infiltrating the server's RAM and exploiting processes and vulnerabilities in the server software. Attackers can do this via vulnerable web applications, specially crafted requests, and so on.

The idea behind fileless attacks

A fileless attack leaves no trace because its malware does not write any files to the hard drive. Instead, it performs all malicious activity directly in RAM. After the system reboots, the malicious code disappears – but the damage has already been done to your server. This type of threat is commonly referred to as an Advanced Volatile Threat (AVT).

Some types of malicious code harm system files, some stage malicious code for other types of attacks, and others open entry points that let hackers exploit further server vulnerabilities. Both users and security solutions, like McAfee Endpoint Security, Virsec Security Platform, and others, are not tuned for fileless attacks – which makes them hard to detect.

Fileless Malware Found On Various Operating Systems

On Windows servers, hackers actively use the pre-installed PowerShell to download and run malicious code, or they can use BAT and VBS scripts. These techniques are now widespread since they can be executed with frameworks like PowerShell Empire, PowerSploit, and the Metasploit Framework.

As for Linux, most popular distributions like CentOS, Ubuntu, and Debian come with pre-installed software that usually includes interpreters for programming languages – Python, Perl – and a C compiler, which is bad practice when installing an operating system on servers. Lots of hosting servers also have PHP installed because of its huge popularity. So fileless attacks use these interpreters.

How Fileless Malware Survives on Linux

On Linux, the easiest way for fileless malware to run malicious code in RAM is to use shared memory – a block of RAM shared between processes and pre-mounted in the file system. By placing an executable file in /dev/shm or /run/shm, it's possible to run the file directly from RAM. Remember that these directories are nothing but shared memory.

However, the content of these directories can be viewed with the ls command, just as with any other directory. Moreover, these directories are usually mounted with the noexec flag, and only root can run programs in them. Therefore, more intricate types of fileless malware use, for example, the memfd_create system call (in the case of the C programming language).

Interpreted languages such as Perl and Python, which are widely used in web hosting, also offer the ability to invoke syscall(). PHP, which is even more popular, has no built-in way to issue system calls. However, there are old tricks that allow making the required system calls even in PHP.

Fileless Attacks Are Increasing


According to research carried out by the Ponemon Institute in 2018, fileless attacks are expected to grow to 35% of all cyberattacks worldwide, with a corresponding decrease in regular file-based attacks.


Fileless attacks are particularly dangerous in the corporate world, since fileless malware becomes especially effective once installed in the RAM of servers that are active 24/7, 365 days a year. Fileless attacks can hit any organization – take the Democratic National Committee in the US in mid-2016, for example. A hacker known as Guccifer 2.0 inserted fileless malware into the Committee's system and then gained access to 19,252 emails and 8,034 attachments. The document of the District Court for the District of Columbia states that PowerShell scripts were used to hack the Committee's Microsoft Exchange Server.

This intrusion resulted in the publication of a series of revelations that ended up hindering Hillary Clinton, Donald Trump’s then rival.

How to protect against Fileless attacks

Cybersecurity experts recommend the following measures to withstand the threat of fileless malware intrusion:

  1. A company that wants to protect its corporate cyber security has to be cyber-resilient and therefore stay informed about new kinds of attacks.
  2. Avoid scripting languages like PowerShell, because fileless malware actively exploits them. You can either remove PowerShell or configure the system so that an attacker can't exploit it.
  3. Use adapted solutions that detect malicious code – not just on the file system, but also in RAM.
  4. Beware of macros – they're among the most common tools on any computer and a possible entry point for fileless malware. As with scripting languages, companies don't necessarily need to give up all kinds of macros, but they do need to use them responsibly.

Fileless Attack Prevention Advice – From the Experts

Reputable sources of protection against fileless attacks stress that you need to “Keep your software up to date. As inconvenient as they can be, software updates are usually done to patch critical security vulnerabilities.” It’s one of the best practices for fileless malware protection.

As far as Microsoft products are concerned, Comparitech tells us how to stop fileless malware: “The main defense against any type of malware is to keep your software up to date. As Microsoft has been very active in taking steps to block the exploitation of PowerShell and WMI, installing any updates from Microsoft should be a priority.”

Ilia Kolochenko, CEO, Founder, High-Tech Bridge, speaks about the vulnerability of not keeping web applications up to date:

“It’s a very colorful, albeit very sad, example how a vulnerability in a web application can lead to disastrous consequences for an entire company, its customer base and beyond. Today, almost any critical data is handled and processed by web applications, but cybersecurity teams still seriously underestimate the risks related to application security.

Most companies don’t even have an up2date application inventory. Without knowing your assets, you won’t be able to protect them. Many global companies still rely on obsolete automated solutions and tools for their application security, while cybercriminals are already using machine-learning in their attacks when targeting and profiling the victims.”

Our cybersecurity experts at Plesk also advocate the timely and regular installation of updates – whether for your operating system, hosting server software, web applications, or CMS plugins. Right now, it's the best way to protect against fileless attacks. Have a look at our Change Log for the latest information on released Plesk updates and their installation procedure.

6 reasons why you need to update Plesk now


After New Year, I was sitting in an airport cafe far from home and work, waiting to board. Suddenly, my meditative state of mind was interrupted by two IT-looking guys who, I guess because of my branded hoodie and backpack, asked me if I worked for Plesk. So naturally we got to talking and it turned out that they worked in a small firm using Plesk to manage web projects. As I was the first company employee they had encountered, I ended up listening to their piled-up claims against Plesk.

The same old excuses

It was clear they used an outdated version – Plesk 17.0 on a CentOS server. And their argument for this was the same old “If it ain't broke, don't fix it”. They bought it, launched it, configured it, and started using it. They feared the latest version carried unnecessary problems, as new products offer not only new functions but also new bugs. Because of this, some don't take the risk. Others don't know how to update Plesk properly; and others have no clue that new versions are available.

Mind you, I can understand these reasons for not updating; however, they don't outweigh the advantages of using the latest product versions. Luckily there was still plenty of time until my boarding announcement, so I started my argument by going through their complaints.

Reason #1: Better Backup Storage

To create and store backups in cloud storage, just install the corresponding extensions.

It's essential for them to have a stable, reliable backup system for their web projects. The guys mentioned using a remote FTP server for this; however, it was unstable, and they sometimes had issues during backup or restoration. FTP server service and maintenance also required extra resources. So I prepared my first argument – that with the latest Plesk, they could use cloud storage for backups.

Not just any cloud providers either, but industry giants like Amazon, Google, Microsoft, or even DigitalOcean. They tick off important criteria like security, redundancy, affordability and flexibility. They can easily set up scheduled backups and store them at one time in Google Drive and at another in Amazon S3. So they no longer need to spend resources on their FTP server for storing backups. However, the guys still shook their heads so I went on to my next reason.

Reason #2: Improved Website and Server Performance

“Do you find your websites' performance important?” I asked. “And what are you currently doing to speed things up?” The guys eagerly explained how they spent all their working hours on that – tinkering with the server via CLI, picking the required parameters, testing various caching and speed-up plugins, and so on. They were really impressed when I told them they could turn on effective NGINX caching in Plesk with just one click and fine-tune it. “And if that's not enough,” I told them, “visit our Extensions Catalog and install Speed Kit – a complete solution for speeding up websites.”

To turn on and configure nginx caching, go to Apache & Nginx Settings of your subscription.
If you have a WordPress website, then you can turn on nginx caching with just one click.

Reason #3: Finding the important bearded owl

I had to talk about Plesk's new best friend: the Advisor. The intelligent Advisor owl recommends ways to improve the performance of your Plesk server, without being annoying. Once you achieve all-round security, you level up to the bearded owl. The other party was so interested that I had to open my laptop with my own Plesk server so they could see.

Install the Advisor extension and start getting helpful recommendations for your server at once.

Reason #4: The Self-Repairing Feature

To further prove how the latest Plesk outshines their outdated version, I revealed the new self-repairing feature. This lets you repair Plesk by yourself right from a browser window, without having to connect to the server via SSH. Handy if you don’t have SSH access. And there’s the long-awaited process list that helps identify and manage the processes consuming the most system resources. So, before going to Plesk Support, you can launch Repair Kit to perhaps fix an issue yourself.

Go to https://domain.tld:8443/repair to access Repair Kit where you can try to get your server back to work.
With the Repair Kit extension, you can check your server for errors and issues and then fix them.
If you want to know what consumes CPU and RAM on your server, go to Repair Kit's process list.

Reason #5: You’re always up-to-date

Another thing – if their projects grew, sooner or later they would need new, in-demand features, solutions, and technologies. Major updates of third-party components are always implemented in the latest versions of Plesk on a clean OS. Operating systems on which Plesk is installed also have their own life cycles, so over time Plesk stops supporting older OS versions, as well as outdated versions of third-party components.

Our conversation had become a one-man show, with the guys listening attentively. It was time to finish on a high. So I said that Plesk wants all users to get the best out of the product – from increased reliability to security innovations and the implementation of new, in-demand features. For this to happen, we occasionally stop supporting old Plesk versions.

Lastly – there's nothing complicated about updating Plesk. And once you do, you get access to all the cool new features our team worked so hard to roll out.

Reason #6: All the latest features and more

I asked if they had seen how powerful and convenient the new WordPress Toolkit had become. And SSL management – which offers access to kickass features like “Keep domain secured” and HSTS management. Our community, documentation, and support will always help you update and explore new opportunities. Moreover, according to our technical support data, the update from versions 17.0 and 17.5 to 17.8 goes very smoothly, and support requests are very rare.

Finally, I said, “if I was not persuasive enough, trust your peers – the Plesk server-owners. 41% of all Plesk Onyx 17.0 and 17.5 instances have already been updated by their owners to version 17.8. Currently, 61% of all Plesk servers run Plesk Onyx 17.8, and 80% of all new installations are Plesk Onyx 17.8.”

I could see that my recent acquaintances were satisfied and ready to update their server as we said goodbye and went towards our gates. I hope I can persuade you too that using the latest Plesk versions is the right choice for your business.

My Plesk User Experience (2): Lessons learned from testing Plesk Onyx


So Plesk Onyx came along and it had implemented NGINX caching. Naturally I was curious and removed all my customizations. Then I started to compare the website performance with the inbuilt NGINX caching, other caching methods, and the Speed Kit extension that speeds up websites.

This was the variety of tests and configurations I made on the platform:

  1. WordPress website on Plesk Onyx 17.8.11 – Proxy Mode and Smart static files processing ON; NGINX caching OFF
  2. WordPress website on Plesk Onyx 17.8.11 – Proxy Mode and Smart static files processing ON; NGINX caching ON
  3. WordPress website on Plesk Onyx 17.8.11 – Proxy Mode and Smart static files processing ON; NGINX caching OFF, Redis caching ON
  4. WordPress website on Plesk Onyx 17.8.11 – Proxy Mode and Smart static files processing ON; NGINX caching ON, Redis caching ON
  5. WordPress website on Plesk Onyx 17.8.11 – Proxy Mode and Smart static files processing ON; NGINX caching OFF, Speed Kit extension ON
  6. WordPress website on WordPress.com – everything in default mode
  7. WordPress website on Vesta CP – NGINX Web Template turned ON with the wordpress2 option selected

I installed the Plesk server (version 17.8.11, update 25) on a DigitalOcean droplet with CentOS 7 and 2 GB RAM. Next, I installed the Redis server as it was, plugged in Redis Object Cache with its default settings, and added no extra parameters in Additional NGINX directives.

PHP was version 7.2.10 with default settings and the “FPM application served by NGINX” mode. The VestaCP server was installed on a DigitalOcean droplet with Ubuntu 16.04.

As a test page, I used a typical blog post with lots of photos (hosted both on the server and externally), a small chunk of text, and one comment.

Testing on the Plesk Onyx Platform


For testing, I used the httperf command line tool (with the same launch parameters) and a well-known online testing system GTmetrix.com. From the GTmetrix.com reports, I chose the following parameters:

Time to First Byte (TTFB) is the total amount of time spent to receive the first byte of the response once it has been requested. It is the sum of “Redirect duration” + “Connection duration” + “Backend duration“. This metric is one of the key indicators of web performance.

Once the connection is complete and the request is made, the server needs to generate a response for the page. The time it takes to generate the response is known as the Backend duration.

    • Fully Loaded Time and RUM Speed Index: page load performance metrics indicating how fast the page fully appears. The lower the score, the better.
    • PageSpeed Score
    • YSlow Score

The httperf utility was launched with the following parameters:

httperf --hog --server jam.pavuk.su --uri=/index.php/2018/10/03/kgd/ --port=443 --wsess=100000,5,2 --rate 1000 --timeout 5

This creates 100,000 sessions (5 calls each, 2 seconds apart) at a rate of 1,000 sessions per second. Here, the following markers reported by httperf were the most interesting:

  • Connection rate – the real speed of creating new connections. It shows the server's ability to process connections.
  • Request rate – the speed of processing requests, in other words the number of requests a server can execute per second. It shows web app responsiveness.
  • Reply rate – the average number of server replies per second.
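As a quick sanity check on those launch parameters, here's what --wsess=100000,5,2 with --rate 1000 implies – simple arithmetic, with the figures taken from the command above:

```python
# Load implied by httperf --wsess=100000,5,2 --rate 1000 (see command above).
sessions = 100_000        # total sessions to create
calls_per_session = 5     # requests within each session
think_time_s = 2          # pause between calls inside a session, seconds
rate_per_s = 1_000        # new sessions opened per second

total_requests = sessions * calls_per_session
ramp_up_s = sessions / rate_per_s              # time just to open all sessions
session_len_s = (calls_per_session - 1) * think_time_s

print(total_requests, ramp_up_s, session_len_s)  # 500000 100.0 8
```

So the run issues half a million requests in total, spends at least 100 seconds opening sessions, and each session lasts about 8 seconds.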

Plesk Onyx Test Results


Clearly, there's an ocean of tools and solutions for testing website performance, some more complete and respected than others. But even the tools I used allowed me to come to fairly objective conclusions. The test results are summarized in the table below, with green highlighting the best values of each parameter and red the worst.

Plesk Onyx test results table

And so, after analyzing the received data, we can conclude the following:

  1. Unchanged PageSpeed and YSlow Scores
    PageSpeed and YSlow Score metrics in Plesk remain exactly the same, no matter the configuration. They don't depend on caching or other server settings, but rather on things like code optimization, image size, gzip compression, and CDN usage.
  2. Caching is essential for speed
    Running Plesk without any caching gives the worst time metrics: Fully Loaded Time and TTFB increase dramatically. Websites with caching turned off are significantly slower.
  3. NGINX and Redis are a successful combo
    Comparing caching methods, the NGINX caching used in Plesk seems better than Redis Cache. It's possible the default Redis Cache configuration doesn't allow higher performance. It's not quite clear how the combination of both caching tools works, but it gives quite decent TTFB and Backend duration metrics.
  4. WordPress performance suffers
    WordPress.com shows the worst performance results. However, its default setup actually scores reasonably well on PageSpeed.
  5. Vesta and NGINX mean extremely fast page load
    Using the lightweight Vesta control panel with the NGINX Web Template + php-fpm (wordpress2) option designed for WordPress hosting turned on gives great speed results. What's more, VestaCP has custom NGINX web templates for WordPress hosting, including NGINX caching support.

Moving to a new DigitalOcean Droplet


I deployed Plesk to the new DigitalOcean droplet using Web Installer, as it doesn't require going to the server via SSH – everything happens in the web interface. This recent migration from my VPS to a new DigitalOcean droplet gave me new data for my latest Plesk experience. All in all, the migration was successful, with minor warnings that I mostly resolved using the migration wizard's suggestions. The bottom line is that Plesk, with its key features and settings turned on, gives very good results for your website.

Also, I strongly recommend you turn on NGINX caching with your Plesk if you're seeking a simple and reliable way to speed up your website. You won't need to set up any difficult configurations. And web pros can make the most of Plesk by fine-tuning it as they see fit. That's what it's made for.

Finally, my story was aimed at people without professional knowledge who simply want to use built-in Plesk features. So I hope this story will be a good reason for you to log in to Plesk and take a fresh look.

My Plesk User Experience (1): Easy Starts and Common Issues


Plesk first crossed my path when it came packaged with web hosting acquired from a Russian provider. At the time it was version 12.0, but I never paid any attention to it until I discovered that part of its service was domain name registration.

Starting Off with Plesk

It couldn’t hurt to register a couple of domains for myself, and so I did. I added them to Plesk, and configured the DNS records. Now these websites loaded default web pages. Then, as I already had websites hosted in Plesk, I thought “Why not use mailboxes registered on my own domains?”. So I went and created a couple of mailboxes and configured Roundcube webmail.

But it was all just personal use until I eventually started using this complete infrastructure as a sort of test server – to solve tasks related to questions from forum users. And so my Plesk server operated like this for a while without any use-case development. That is, until the start of 2017, when I spontaneously took a closer look at something I had available but which had been lying there unused the whole time.

Easy Building on the Plesk Platform


I realized that I could now use my own platform for my personal blog. It didn’t take me long to choose WordPress as I had previous experience with it. What’s more, the new Plesk Onyx had integrated its WordPress Toolkit, which looked promising. After getting a license with additional extensions, I started building – themes, plugins, you name it, before publishing my first posts.

Plesk is also built for multiple domains. So when my famous, American Instagrammer friend needed a website to develop her “Travelling with kids” idea, I offered my hosting platform.

Within Plesk, I created a personal account for her and subscriptions with two domains. One was used to host her website, and the other to host her personal mail.

She quickly learned how to use the WordPress admin dashboard and Plesk. She created mailboxes and installed WordPress plugins and themes. Then created posts and moderated comments. Which I believe says a lot about how easy Plesk’s interface is.

As thousands of subscribers were actively visiting both our blogs, it was time to pay more attention to Plesk server maintenance. And later, to server optimization, creating regular work in the Plesk interface and even more in the Linux command line. But more on that later. Before that, there were common issues of all sorts that I had started to face.

Issues uncovered and solved by using Plesk

  • Service downtime
    Various services like httpd and MySQL stopped every now and then. I managed to solve this by turning on and configuring Watchdog.
  • Memory usage
    Then Health Monitor started to constantly notify me that MySQL was consuming too much RAM.
  • Basic MySQL settings
    I had optimized MySQL operation modes via the CLI and thought it would be beneficial to have at least some basic MySQL optimization settings in the Plesk interface. Eventually, the VPS RAM was increased from 1 to 2 GB, which solved the issue.
  • Frequent updates
    Email notifications about new WordPress plugins made me login to Plesk often. I am one of “update-it-all” types and very meticulous when it comes to installing the latest software versions. The Smart Updates feature in WordPress Toolkit solved this task.
  • Extensions accessibility
    I used to find accessing my installed extensions inconvenient. So it was great when WordPress Toolkit added icons for installed extensions to the left menu.

Speeding up and hardening the WordPress Website


During an internal contest for the best WordPress website hosted in Plesk, I focused on two goals: I wanted to make my WordPress website the fastest and the most secure.

To achieve an A+ grade on ssllabs.com, special NGINX parameters became necessary. They were set via Additional nginx directives and the /etc/nginx/conf.d/ssl.conf file. Maximizing the speed of my NGINX-powered website was a special matter.

At that time, NGINX caching wasn’t yet implemented in Plesk. So I tried various caching solutions, such as redis, memcached, and the very same NGINX caching. All via the CLI, of course, but with the help of customized settings.

It didn't take long to realize the NGINX version shipped with Plesk was not suitable for trendy acceleration technologies – caching, the Brotli compression method, the PageSpeed module, or TLS 1.3. The Plesk Forum also raised this issue, as it seemed to occupy the minds of advanced users.

The result was the publication of different ways to compile the latest NGINX versions supporting modern technologies, and to substitute the NGINX version shipped with Plesk with a custom build. I also joined forum users in compiling and optimizing NGINX builds for my Plesk server, all during the contest.

In the end, I got the speedy WordPress site I wanted powered by customized NGINX with Redis caching. All was well until Plesk Onyx was released. See what happened next in part 2 of my Plesk experience story tomorrow.

Using Elastic Stack for Data Analysis and XenForo for Forums

Why use XenForo for Forums?

Many big organizations and companies use forums to engage with their communities. Unlike popular social networks, a forum helps strengthen the community at a higher level. With forums, you get:

  • More accurate data structuring.
  • Means to use powerful tools to retrieve information.
  • Ability to use advanced rating and gamification systems.
  • Power to use moderation and anti-spam protection.

In this article, we'll explain how to use the modern XenForo engine to deploy forums. We'll use caching based on Memcached, and ElasticSearch as a powerful search engine. These services will run inside Docker containers, and we'll deploy and manage them via the Plesk interface.

In addition, we’ll talk about ways you can use Elastic Stack (ElasticSearch + Logstash + Kibana) to analyse data in the context of Plesk. This will come in handy when analysing search queries or server logs on the forum.

How to Deploy the XenForo Forum on Plesk

Adding a Database

  1. First, create a subscription for the domain forum.domain.tld in Plesk.
  2. Then, in the domain’s PHP Settings, select the latest available PHP version (at the time of writing: PHP 7.1.10).
  3. Go to File Manager. Delete all files and directories in the website’s httpdocs except for favicon.ico.
  4. Upload the .ZIP file containing the XenForo distribution (Example: xenforo_1.5.15a_332013BAC9_full.zip) to the httpdocs directory.
  5. Click “Extract Files” to unpack the .ZIP file. Then, select everything in the unpacked archive and click “Move” to transfer the .ZIP file contents to the httpdocs directory. You can delete the upload directory and the xenforo_1.5.15a_332013BAC9_full.zip file afterwards – you won’t need those anymore.
  6. In the forum.domain.tld subscription, go to Databases.
  7. Create a database for your future forum. You can choose any database name, username and password.
  8. And for security reasons, it’s important to set Access control to “Allow local connections only”. Here’s what it looks like:

Installing the Forum

  1. Go to forum.domain.tld. The XenForo installation menu will appear.
  2. Follow the on-screen instructions and provide the database name, username and password you set.
  3. Then, you need to create an administrative account for the forum. After you finish the installation, you can log into your forum’s administrative panel and add the finishing touches.
  4. Speed up your forum significantly by enabling memcached caching technology and using the corresponding container from your Plesk Docker extension. But before you install it, you need to install the memcached module for the version of PHP used by forum.domain.tld. Here’s how you compile the memcached PHP module on a Debian/Ubuntu Plesk server:

# apt-get update && apt-get install gcc make autoconf libc-dev pkg-config plesk-php71-dev zlib1g-dev libmemcached-dev

Compile the memcached PHP module:

# cd /opt/plesk/php/7.1/include/php/ext
# wget -O phpmemcached-php7.zip https://github.com/php-memcached-dev/php-memcached/archive/php7.zip
# unzip phpmemcached-php7.zip
# cd php-memcached-php7/
# /opt/plesk/php/7.1/bin/phpize
# export CFLAGS="-march=native -O2 -fomit-frame-pointer -pipe"
# ./configure --with-php-config=/opt/plesk/php/7.1/bin/php-config
# make
# make install

Install the compiled module:

# ls -la /opt/plesk/php/7.1/lib/php/modules/
# echo "extension=memcached.so" > /opt/plesk/php/7.1/etc/php.d/memcached.ini
# plesk bin php_handler --reread
# service plesk-php71-fpm restart

Run Memcached Docker

  1. Start by opening the Plesk Docker extension. Then, find the memcached Docker image in the catalog in order to install and run it. Here are the settings:


2. This should make port 11211 available on your Plesk server. So you can verify it using the following command:

# lsof -i tcp:11211
COMMAND    PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
docker-pr 8479 root    4u  IPv6 7238568      0t0  TCP *:11211 (LISTEN)
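If lsof isn't available, a small script can perform an equivalent check. This is a hedged sketch (the host and port are assumptions based on the memcached setup above): it simply tests whether a TCP connection to the published port succeeds.

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Assumption: the memcached container publishes port 11211 on this host.
print(port_open("127.0.0.1", 11211))
```

It should print True once the container is running and the port is published.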

Enable Memcached caching for the forum

  1. Go to File Manager and open the forum.domain.tld/httpdocs/library/config.php file in Code Editor.
  2. Add the following lines to the end of the file:

$config['cache']['enabled'] = true;
$config['cache']['frontend'] = 'Core';
$config['cache']['frontendOptions']['cache_id_prefix'] = 'xf_';
// Memcached
$config['cache']['backend'] = 'Libmemcached';
$config['cache']['backendOptions'] = array(
    'compression' => false,
    'servers' => array(
        array(
            'host' => 'localhost',
            'port' => 11211,
        )
    )
);

3. Also, make sure that your forum is working correctly. You can verify that caching is working with the following command:

# { echo "stats"; sleep 1; } | telnet localhost 11211 | grep "get_"
STAT get_hits 1126
STAT get_misses 37
STAT get_expired 0
STAT get_flushed 0
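From those counters you can derive the cache hit ratio – the share of get requests served from memory. A quick sketch using the example numbers above (your own counters will differ):

```python
# Cache hit ratio from the memcached counters shown above.
get_hits, get_misses = 1126, 37
hit_ratio = get_hits / (get_hits + get_misses)
print(f"hit ratio: {hit_ratio:.1%}")  # prints: hit ratio: 96.8%
```

A ratio well above 90% like this suggests the cache is doing its job; a low ratio would mean most requests still fall through to the database.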

Add ElasticSearch search engine

You can improve your XenForo forum even further by adding to it the powerful ElasticSearch search engine.

  1. First of all, you need to install a XenForo plugin called XenForo Enhanced Search and the ElasticSearch Docker container. Note that the Docker container requires a significant amount of RAM to operate, so make sure that your server has enough memory. You can install the XenForo Enhanced Search plugin by downloading and extracting the .ZIP file via Plesk File Manager.
  2. Read the XenForo documentation to learn how to install XenForo plugins. Once you’re done, the ElasticSearch search engine settings should appear in the forum admin panel:

3. In order to get the search to work, you need to install the ElasticSearch Docker container in the Plesk Docker extension with the following settings:

4. Then, verify that port 9200 is open for connection using the following command:

# lsof -i tcp:9200

5. After that, you need to make sure that ElasticSearch is connected and create a Search Index in the forum administration panel:

Congratulations! You’ve done it. You’ve set up a forum based on the modern XenForo engine supplemented with a powerful search engine and accelerated caching based on Memcached.

Improve your XenForo Forum Further with Kibana

You can make your forum even better and add the ability to analyse forum search queries with Kibana. To do this, follow the steps below:

  1. You can either use a dedicated Kibana-Docker container or a combined Elasticsearch-Kibana-Docker container.
  2. You’ll also need to install a patch for the XenForo Enhanced Search plugin. This creates a separate ElasticSearch index that stores searches and can be analysed using Kibana. Here’s an example of Tag Cloud Statistics of keywords used in search queries:

Downloading the Patch File

You can download the patched file for version 1.1.6 of the XenForo Enhanced Search plugin. Replace the original file found in httpdocs/library/XenES/Search/SourceHandler with the file you downloaded. In addition to the search index, ElasticSearch will create a separate index named saved_queries which will store search queries to be analysed by Kibana.

Another promising approach is to replace the standard web statistics components in Plesk (Awstats and Webalizer) with a powerful analysis system based on Kibana. There are two options for sending vhost logs to ElasticSearch:

  1. Using Logstash, another component of the Elastic Stack.
  2. Using the rsyslog service with the omelasticsearch.so plugin installed (yum install rsyslog-elasticsearch). This way, you can directly send log data to ElasticSearch. This is very cool, because you do not need an extra step like with Logstash.

Important Warning:

The logs must be in JSON format for ElasticSearch to store them properly and for Kibana to parse them. However, you can’t change the log_format nginx parameter on the vhost level.
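For reference, a server-wide JSON log format can be declared once in the http context instead (the field names below are illustrative, and `escape=json` requires nginx 1.11.8 or later):

```nginx
# nginx.conf (http context) -- log_format cannot be changed per-vhost
log_format json_combined escape=json
  '{'
    '"@timestamp":"$time_iso8601",'
    '"remote_addr":"$remote_addr",'
    '"request":"$request",'
    '"status":"$status",'
    '"body_bytes_sent":"$body_bytes_sent",'
    '"http_referer":"$http_referer",'
    '"http_user_agent":"$http_user_agent"'
  '}';
access_log /var/log/nginx/access.json json_combined;
```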

Possible Solution:

Use the Filebeat service, which can take the regular log of nginx, Apache or another service, convert it into the required format (for example, JSON) and then pass it on. As an added benefit, this service lets you collect logs from different servers. There are many opportunities to experiment.
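As a sketch of that approach (the paths, hosts, and layout here are assumptions for illustration, and the exact configuration keys depend on your Filebeat version):

```yaml
# filebeat.yml -- minimal sketch for shipping nginx vhost logs to Elasticsearch
filebeat.inputs:
  - type: log
    paths:
      - /var/www/vhosts/*/logs/access_log   # assumed Plesk vhost log layout
output.elasticsearch:
  hosts: ["localhost:9200"]
```

Filebeat can also parse the log lines into structured fields via its nginx module before shipping them, which removes the need to change the nginx log format at all.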

Using rsyslog, you can send any other system log to ElasticSearch to be analysed with Kibana – and it’s quite workable. For example, here’s an rsyslog configuration /etc/rsyslog.d/syslogs.conf that sends your local syslog to Elasticsearch in a Logstash/Kibana-friendly way (you can debug it by running rsyslog in the foreground with rsyslogd -dn):

module(load="imuxsock")        # for listening to /dev/log
module(load="omelasticsearch") # for outputting to Elasticsearch
# this is for index names to be like: logstash-YYYY.MM.DD
template(name="logstash-index"
  type="list") {
    constant(value="logstash-")
    property(name="timereported" dateFormat="rfc3339" position.from="1" position.to="4")
    constant(value=".")
    property(name="timereported" dateFormat="rfc3339" position.from="6" position.to="7")
    constant(value=".")
    property(name="timereported" dateFormat="rfc3339" position.from="9" position.to="10")
}
# this is for formatting our syslog in JSON with @timestamp
template(name="plain-syslog"
  type="list") {
    constant(value="{")
    constant(value="\"@timestamp\":\"")   property(name="timereported" dateFormat="rfc3339")
    constant(value="\",\"host\":\"")      property(name="hostname")
    constant(value="\",\"severity\":\"")  property(name="syslogseverity-text")
    constant(value="\",\"facility\":\"")  property(name="syslogfacility-text")
    constant(value="\",\"tag\":\"")       property(name="syslogtag" format="json")
    constant(value="\",\"message\":\"")   property(name="msg" format="json")
    constant(value="\"}")
}
# this is where we actually send the logs to Elasticsearch (localhost:9200 by default)
action(type="omelasticsearch"
  template="plain-syslog"
  searchIndex="logstash-index"
  dynSearchIndex="on"
  bulkmode="on"                 # use the bulk API
  action.resumeretrycount="-1"  # retry indefinitely if Elasticsearch is unreachable
)
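The logstash-index template above builds the index name by slicing the RFC 3339 timestamp at fixed character positions (1-4 for the year, 6-7 for the month, 9-10 for the day). The same slicing expressed in shell, just to illustrate what the template produces:

```shell
# Example RFC 3339 timestamp, as rsyslog's "timereported" property renders it
ts="2017-10-10T14:32:00+03:00"

# position.from/position.to in the template are 1-based character ranges;
# bash substring offsets are 0-based, hence the off-by-one below
index="logstash-${ts:0:4}.${ts:5:2}.${ts:8:2}"
echo "$index"
```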

You can see that ElasticSearch index logstash-2017.10.10 was successfully created and is ready for Kibana analysis:

# curl -XGET http://localhost:9200/_cat/indices?v
health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana             TYNVVyktQSuH-oiVO59WKA   1   1          4            0     15.8kb         15.8kb
yellow open   xf                  JGCp9D_WSGeuOISV9EPy2g   5   1          6            0     21.8kb         21.8kb
yellow open   logstash-2017.10.10 NKFmuog8Si6erk_vFmKNqQ   5   1          9            0       46kb           46kb
yellow open   saved_queries       GkykvFzxTiWvST53ZzunfA   5   1         16            0     43.7kb         43.7kb

You can create a Kibana Dashboard with a custom visualization showing the desired data, like this:

Your community on the XenForo platform

So you can now set up a modern community platform with the added ability to collect and analyse all kinds of statistical data.

Of course, this article is not meant to be a comprehensive, “one stop shop” guide. It does not cover many important aspects, like security, for example. Think of this as a gentle nudge meant to spur your curiosity and describe possible scenarios and ways of implementing them. Experienced administrators can configure more advanced settings by themselves.

In conclusion, think of the Elastic Stack as a tool or a construction set you can use to get a result according to your own liking. Just make sure to feed it the correct data you want to work with.