MySQL Performance: Identifying Long Queries


An optimized database server is key to ensuring that apps which rely on MySQL run efficiently. Sometimes all that’s needed to see website and application performance improvements are a few small adjustments, so in this article, we look at some of the tweaks that can help boost performance the most.

What Does it Cover?

Our advice here applies to the majority of MySQL servers running Linux and covers (among others) both dedicated and cloud VPS servers that run on various Linux distros and these system types:

  • Core-managed CentOS 6.x/7.x
  • Core-managed Ubuntu 14.04/16.04
  • Fully-managed CentOS 6/7 cPanel
  • Fully-managed CentOS 7 Plesk Onyx/Obsidian
  • Self-managed Linux servers

If you’ve chosen not to take advantage of direct support, you can still use these techniques, but we’re assuming that you’re familiar with these basic system admin principles:

  • SSH connections and how to navigate the standard Linux command-line shell environment.
  • Opening, Editing and Saving files in Vim or your preferred system editor.
  • MySQL interactive mode and general MySQL query syntax.

What is MySQL Optimization?

MySQL Optimization doesn’t have an exact meaning, but the general idea is pretty self-evident. It’s just about making your database function as efficiently and effectively as possible, and what that means in practical terms will obviously vary depending on the needs of your organization. Here, we’re taking it to mean a server which has been set up to avoid the kind of typical bottlenecks that can arise.

What’s a bottleneck?

A wine bottle narrows at the neck, leaving a tighter space for your tipple to squeeze through. That makes it the perfect metaphor for a road that narrows from two lanes to one and slows traffic, or for a computer system that hinders the flow of data for some technical reason.

For example, a server might be capable of handling 10 simultaneous connections but, as currently configured, can only manage 5. If no more than 5 connections are ever established, the system hums along happily, but as soon as a sixth arrives the limit becomes all too clear.
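If you suspect this kind of connection ceiling, MySQL itself can tell you both the configured limit and the high-water mark of connections it has actually seen. A quick check from the MySQL prompt (both are standard server variables and status counters):

SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Max_used_connections';

If Max_used_connections keeps bumping up against max_connections, you have found your narrow neck.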

When Should a MySQL database be optimized?

The best time to tune your database is when doing so will have the lowest impact on whatever services rely on it. As for frequency, you should audit it every week or month to make sure that poor performance isn’t slowing down apps. It’s easy to see when this is happening because you’ll notice a few tell-tale signs:

  • Queries back up in the MySQL process table and never complete.
  • Websites or applications that rely on the database become sluggish.
  • More connection timeout errors appear, particularly during peak hours.

It’s fine to have a number of queries running concurrently on a system, but they become a burden when they regularly take longer than usual to complete. Even though the threshold for what constitutes a burden varies from system to system and from app to app, average query times of more than a few seconds will show up as slower websites and apps. You may not notice that anything is amiss until a spike in traffic reveals that there is indeed a bottleneck, which is why it pays to be proactive and optimize before that happens.

Tracking Down Performance Issues

To diagnose a particular bottleneck, you’ll need to know how to examine the MySQL process table. You can view it in a few different ways, although this will vary according to your setup. To keep it short, we’ll just look at the most frequently used methods over a Secure Shell (SSH) connection:

The MySQL Process Table: Method 1

The ‘mysqladmin’ command-line tool can be used with the ‘processlist’ flag (‘proc’ for short). Adding the ‘statistics’ flag (‘stat’ for short) shows running statistics for queries since MySQL’s most recent restart.

Command:

mysqladmin proc stat

Output:

+-------+------+-----------+-----------+---------+------+-------+--------------------+----------+
| Id    | User | Host      | db        | Command | Time | State | Info               | Progress |
+-------+------+-----------+-----------+---------+------+-------+--------------------+----------+
| 77255 | root | localhost | employees | Query   | 150  |       | call While_Loop2() | 0.000    |
| 77285 | root | localhost |           | Query   | 0    | init  | show processlist   | 0.000    |
+-------+------+-----------+-----------+---------+------+-------+--------------------+----------+

Uptime: 861755  Threads: 2  Questions: 20961045  Slow queries: 0  Opens: 2976  Flush tables: 1  Open tables: 1011  Queries per second avg: 24.323

For: it runs from the shell, so feeding its output to other scripts and tools is no problem.

Against: the Info column of the process table is always truncated, so you can’t view long queries in full.

The MySQL Process Table: Method 2

Run the query ‘show processlist;’ from the MySQL interactive mode prompt. (Add the ‘full’ modifier to the command to stop the Info column being truncated. You’ll be glad of it when long queries would otherwise get cut off.)

Command:

show processlist;

Output:

MariaDB [(none)]> show full processlist;

+-------+------+-----------+-----------+---------+------+-------+-----------------------+----------+
| Id    | User | Host      | db        | Command | Time | State | Info                  | Progress |
+-------+------+-----------+-----------+---------+------+-------+-----------------------+----------+
| 77006 | root | localhost | employees | Query   | 151  | NULL  | call While_Loop2()    | 0.000    |
| 77021 | root | localhost | NULL      | Query   | 0    | init  | show full processlist | 0.000    |
+-------+------+-----------+-----------+---------+------+-------+-----------------------+----------+

For: the full modifier lets you see the entire text of long queries.

Against: MySQL interactive mode can’t access the scripts and tools available from the shell.

Using the slow query log

Another useful tool in MySQL is the included slow query logging feature, which is the preferred way of regularly finding long-running queries. You can adjust this feature using several directives, but the most commonly needed settings are:

  • slow_query_log – enable/disable the slow query log
  • slow_query_log_file – name and path of the slow query log file
  • long_query_time – time in seconds/microseconds defining a slow query

You can set these in the [mysqld] section of the MySQL configuration file, typically found at /etc/my.cnf, and you’ll need to restart the MySQL service for the changes to take effect. Take a look at this example for formatting:

Caution

The slow query log can consume a lot of disk space and needs continual attention while it’s enabled. Remember: the lower your long_query_time directive, the quicker the slow query log will fill a disk partition.

[mysqld]
log-error=/var/lib/mysql/mysql.err
innodb_file_per_table=1
default-storage-engine=innodb
innodb_buffer_pool_size=128M
innodb_log_file_size=128M
max_connections=300
key_buffer_size=8M
slow_query_log=1
slow_query_log_file=/var/lib/mysql/slowquery.log
long_query_time=5
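If you want to confirm what the running server is actually using, or flip the log on temporarily without a restart, the same directives are exposed as server variables. Note that changes made with SET GLOBAL only last until the next restart, so keep my.cnf as the source of truth:

SHOW VARIABLES LIKE 'slow_query_log%';
SHOW VARIABLES LIKE 'long_query_time';
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 5;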

After the slow query log has been enabled you will need to follow up from time to time to evaluate troublesome queries that require your attention. To review the contents of the slow query log file, you can parse it directly. This example shows statistics for a sample query that ran over the allotted 5 seconds:

Caution

If you enable the slow query log feature, then be advised that performance will take a hit. That’s because extra routines will be analyzing every query and writing them to the log file which will increase the input/output burden. This additional overhead explains why it’s considered to be best practice to disable the slow query log on production systems. In fact, you should only enable it for a fixed period: when you’re actively hunting for queries that might be slowing down a website or app.

# Time: 180717  0:23:28
# User@Host: root[root] @ localhost []
# Thread_id: 32  Schema: employees  QC_hit: No
# Query_time: 627.163085  Lock_time: 0.000021  Rows_sent: 0  Rows_examined: 0
# Rows_affected: 0
use employees;
SET timestamp=1531801408;
call While_Loop2();

Alternatively, there’s the mysqldumpslow command-line tool, which parses the slow query log file and groups similar queries together, abstracting away the numeric and string values.
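For example, a typical invocation might look like the line below, which uses the log path configured earlier; the -s at option sorts by average query time and -t 5 limits the report to the five worst offenders (both are standard mysqldumpslow options):

mysqldumpslow -s at -t 5 /var/lib/mysql/slowquery.log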

NoSQL vs SQL: Examining the Differences and Deciding Which to Choose


At 74, Larry Ellison, co-founder and CTO of Oracle, has amassed a fortune of $66.1 billion. He got his start as a programmer in the sixties, and in the seventies took an idea from IBM’s Edgar F. Codd for a SQL relational database. This became the Oracle Database RDBMS (relational database management system). With no free software competitors, Oracle totally dominated the market. Everything else, like DB2, ran on IBM mainframes, and even DB2 couldn’t oust Oracle from its top position. Mainframes remained popular until the 1990s, when PCs started to be used as servers, as they still are today. Oracle still holds the top spot for the majority of transactional business applications used by the richest companies. It later bought the most widely used open-source database, MySQL, along with open-source Java, but both are still free to use. The big choice for all companies is still SQL vs NoSQL: a relational (SQL) or non-relational (NoSQL) data structure. Both are great in their own way, and both come with pros and cons of course, which we’ve listed for you here.

What is SQL?

SQL (Structured Query Language) organizes information in relational databases. It’s used in Oracle, Sybase, Microsoft SQL Server, Access, and Ingres. SQL uses relations (usually referred to as tables) to store and match data using shared features within the dataset.

It was Codd’s notion that you could have a database that could be queried using a structured query language. SQL creates data in objects known as tables, along with the schema for that data, which describes the fields in its columns. One SQL record is known as a row.

What is NoSQL?

A NoSQL database describes itself, so it doesn’t need a schema, and it doesn’t mandate relations between tables in all scenarios. Document stores, for example, use self-contained JSON documents that are easy to understand. NoSQL covers high-performance, non-relational databases that use many different data models. They are known for being easy to use, offering scalable performance, and being resilient and widely available. NoSQL database examples include MongoDB, MarkLogic, Couchbase, CloudDB, and Amazon’s DynamoDB.

NoSQL vs SQL: Major Differences

When choosing a data management system for your organization, you need to take into account the many and varied differences between SQL and NoSQL. There are differences in:

  • Language
  • Scalability
  • Community
  • Structure

Language

Use of a Structured Query Language makes any SQL-based database very versatile and helps to explain why it is used so widely. On the downside, though, it also restricts you. You have to use predefined schemas to set out the structure of your data before you can even get started, and all of your data has to follow the same structure, so you may have to invest considerable time preparing your data before you can work with it.

A NoSQL database has a dynamic schema for unstructured data, which can be stored in a lot of different ways: graph-based, document-oriented, column-oriented, or organized as a key-value store. Being this flexible means you won’t be burdened with the same amount of preparation. You’re free to add fields as you go and vary the syntax from database to database, and every document can have its own individual structure, so you have a great deal of latitude.

Scalability

Another significant difference between SQL and NoSQL is how they scale. Most SQL databases scale vertically, meaning an individual server is boosted by adding more RAM, SSDs, or faster CPUs. NoSQL databases scale horizontally, meaning they handle increased traffic simply by adding more servers to the database cluster. Because NoSQL databases can grow larger and more powerful this way, they are great for handling large or constantly evolving data sets.

Community

SQL has been around for long enough that its community is large and well developed. If you need a query answered or want to pick up new skills, there are seemingly endless forums full of experienced users who will be glad to help you out. NoSQL can’t match that level of peer support yet because it’s the newer kid on the block; its community is growing quickly but is still comparatively small.

Structure

SQL databases use a tables approach which makes them better suited to handling apps that ask for multi-row transactions. Accounting systems or legacy systems that were originally created for a relational structure are examples of these. NoSQL databases can be key-value pairs, wide-column stores, graph databases, or document-based.

SQL or NoSQL: Which One is Going to Fit Your Business?

The best way to determine which database is right for your business is to look at what you need it to do. If you need a predetermined structure, multi-row transactions and set schemas then SQL is the one to go for. It’s also highly consistent, which makes it an ideal choice for accounting systems.

If your company is growing rapidly and doesn’t need clear schema definitions, then NoSQL is what you want. A relational database won’t offer as much flexibility as NoSQL, which is great for companies that need to churn through large amounts of data that comes in varying structures.

Examples

Take this JSON document, where the first field is teacher and the second field is subject:

{ teacher:  "James Smith", subject:  "literature" }

With SQL, you create the schema before adding data to the database:

CREATE TABLE teacherSubjects (
  teacher varchar(100),
  subject varchar(100)
);

Varchar is a variable-length character type (the number gives the maximum length). To add data to that table, you would run:

INSERT INTO teacherSubjects (teacher, subject)
VALUES ('James Smith', 'literature');

With a NoSQL database, in this example using MongoDB, you would use the database API to insert data like this:

db.teacherSubjects.insert( { teacher: "James Smith", subject: "literature" } )

Now you can create the union (all elements from two or more sets) and intersection (common elements of two or more sets) of sets using SQL.
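As a rough sketch using the teacherSubjects table from the example above (the subject values here are made up purely for illustration):

-- Union: every teacher who teaches literature or history, duplicates removed
SELECT teacher FROM teacherSubjects WHERE subject = 'literature'
UNION
SELECT teacher FROM teacherSubjects WHERE subject = 'history';

-- Intersection: teachers who teach both subjects, written as a self-join
-- because not every SQL dialect supports the INTERSECT operator
SELECT a.teacher
FROM teacherSubjects a
JOIN teacherSubjects b ON a.teacher = b.teacher
WHERE a.subject = 'literature' AND b.subject = 'history';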

This was such a big deal because all of it could be programmed using simple SQL syntax. Oracle then added indexing fields and cached records to improve performance and to make sure the database maintained referential integrity. (Referential integrity is about the completeness of transactions, so you aren’t left with orphaned records, for instance a sales record with no product to go with it. This is what enforcing the relationship between tables refers to in Oracle.)
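A minimal sketch of what enforcing that relationship looks like in SQL, using hypothetical products and sales tables:

CREATE TABLE products (
  id int PRIMARY KEY,
  name varchar(100)
);

CREATE TABLE sales (
  id int PRIMARY KEY,
  product_id int NOT NULL,
  -- The foreign key prevents orphaned records: a sale can't reference a
  -- product that doesn't exist, and a referenced product can't be deleted
  FOREIGN KEY (product_id) REFERENCES products (id)
);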

Note that in the MongoDB example above, Oracle programmers would call the teacherSubjects table an intersection: it tells you which subjects a teacher has and also which teachers teach a given subject. You could also add things like the subject’s room number and the teacher’s email address to both records.

The Oracle database is known as a row-oriented database because that’s how it’s organized. There’s no need to turn our attention to column-oriented databases like Cassandra here, because they have a different architecture, so they aren’t part of the fundamental SQL vs NoSQL divide in the same way. In particular, the Cassandra NoSQL database stores columns with similar data near each other for the fastest possible retrieval. Cassandra and other NoSQL databases do away with the concept of database normalization, which is fundamental to Oracle, as we outline below. And they don’t keep empty column values, so rows can be different lengths.

Normalization and Efficiency

Something Oracle emphasized was the relationship between objects, insisting that all data should be normalized and nothing stored twice. In practical terms, instead of repeating the school address in every teacher record, it is more efficient to keep a school table and put the address there. This constraint is largely absent in NoSQL databases, so they win out here in the SQL vs NoSQL debate.
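In SQL terms, the normalized layout from that example might look like the sketch below; the table and column names are purely illustrative:

-- The school address is stored once...
CREATE TABLE schools (
  id int PRIMARY KEY,
  name varchar(100),
  address varchar(200)
);

-- ...and each teacher record simply references it
CREATE TABLE teachers (
  id int PRIMARY KEY,
  name varchar(100),
  school_id int,
  FOREIGN KEY (school_id) REFERENCES schools (id)
);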

Storage space and memory were costlier in the 1970s, so normalization was necessary. These days though, assembling a record that is split between different tables takes more of both, not to mention the fact that you also need to maintain index files, which can slow everything down.

Fans of NoSQL databases say memory and storage are so cheap and processing power so exponentially faster now that none of that really matters. The computer can handle it and it’s easier for programmers to code.

NoSQL vs SQL

SAP is Oracle’s biggest business competitor and has its own database, Hana. Hana keeps all its records in memory (flushing them to disk as necessary) for the speed advantage this brings, but apart from that, the two work in pretty much the same way.

Relational databases have been around for so long that it’s hard to argue a business case for switching to a newer alternative. When firms already understand RDBMSs, why switch? Oracle has solved management issues like data replication, which might leave someone using ElasticSearch, for instance, unsupported with a compromised system on their hands. To avoid this, some businesses support open-source databases like ElasticSearch in-house, so you can buy in the help you need from them.

Transactional systems are another consideration. Adding a sale to a sales database is an easy-to-understand concept. Once it’s done, Oracle calculates on-hand inventory using a saved SQL operation called a view; with MongoDB, a program would have to go through the inventory items and subtract the sales to work out the new on-hand inventory.
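As a sketch of the kind of saved view described here (the inventory and sales tables and their columns are hypothetical):

CREATE VIEW on_hand_inventory AS
SELECT i.product_id,
       i.quantity_received - COALESCE(SUM(s.quantity_sold), 0) AS on_hand
FROM inventory i
LEFT JOIN sales s ON s.product_id = i.product_id
GROUP BY i.product_id, i.quantity_received;

Querying the view then returns the current on-hand figure without any application-side arithmetic.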

NoSQL Databases in Action

Looking at SQL vs NoSQL in practice, it’s interesting to note that NoSQL databases tend to be used in niche rather than enterprise systems. Uber is a good example: it uses Cassandra to keep tabs on its drivers, but it has unique needs, like writing millions of records a second across many data centers. The company wrote its own version of Cassandra in order to run it on Mesos, an orchestration system that plays a similar role to container orchestration platforms.

Amazon markets DynamoDB as a database with “millisecond latency.”

DynamoDB, like MongoDB, has a JavaScript interface, which makes it simple to use. To add a record, for instance, you open the database and then put a JSON item like this:

var docClient = new AWS.DynamoDB.DocumentClient();
docClient.put({ /* JSON … */ }, function (err, data) { /* handle result */ });

One implementation detail is that you can use Node.js to run these operations against MongoDB and DynamoDB, which means JavaScript running in the middle tier, so you don’t have to create JAR files or run middleware servers like Oracle WebLogic.

So, which of the two is best for you? You could still run your accounting system on an RDBMS, but you don’t necessarily need to pay licensing fees to Oracle; you could use MySQL instead. Will that accounting system run on MongoDB? That’s unlikely in the short term, as huge numbers of programmers across the globe use Java and Oracle, which project managers and users understand. Use ElasticSearch for logs and Spark for analytics. With the others, look at them individually to see if they fit your resources, skills, tolerance for lost transactions, and so on.

Conclusion

Whatever your field, selecting the correct database for your firm is a crucial decision. NoSQL databases are rapidly establishing themselves as a significant force in the database landscape. They bring many benefits: they are cheaper, open source, and easily scalable, which makes NoSQL appealing for anyone who needs Big Data integration. It’s a newer technology, though, which can bring problems of its own.

SQL databases, in contrast, have had more than four decades to establish their well-defined ecosystems, and a mature community offers almost limitless possibilities for collaboration and support.

In the end, the choice of SQL vs NoSQL for business will come down to the individual needs of the companies concerned. Only through extensive research comparing their abilities to your needs will you find the one that is the best fit.

Manage MySQL databases from the command line


MySQL is one of the most popular relational database management systems. There are a variety of ways to manage a MySQL server, with countless frontends including phpMyAdmin, MySQL Workbench and even Adminer.
However, the most efficient way to manage a MySQL server is directly through the command line.

The mysql command-line tool comes with the MySQL DBMS and is a simple SQL shell with input line-editing capabilities. It supports both interactive and non-interactive use.
When used interactively, query results are presented in an ASCII-table format; when used non-interactively, the result is presented in a tab-separated format. The output format can be changed using command options.

First you will need to access your server: via SSH in the case of Linux, or Remote Desktop for a Windows server. It is also possible to access your MySQL database via a direct connection.

To connect to MySQL from the command line, follow these steps:

1. Once the command line is available, enter the following, where USERNAME is your database username:

mysql -u USERNAME -p

2. You’ll be prompted for a password; enter it and you should then see the “mysql>” prompt.

3. You can see a list of databases by typing in this command:

show databases;

4. To perform database tasks you need to tell MySQL which database you want to use. To access a particular database, type the following command, where DBNAME is the database you want to access:

use DBNAME;

5. Once you’ve run this command you have access to your database. Next, you can execute queries, get a listing of MySQL tables and much more (see the example commands just below). By the way, typing “help” will give you a list of commands you can use, while typing “\q” will get you out of MySQL.
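For example, two commands you’re likely to reach for straight away once a database is selected (both standard MySQL statements):

SHOW TABLES;          -- list the tables in the current database
DESCRIBE tablename;   -- show the columns of the table you name in place of tablename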

Adding new users and creating a new database

You can add users and databases when you are logged in as the root user in MySQL. To log in as root, and create new users and databases, follow these steps:

1. Log into MySQL with root privileges using the following command:

mysql -u root -p

2. You’ll be prompted for your root password, fill it in and press return to proceed.

3. Creating a new user is straightforward; you just need the following command. When entering it, replace username with the user you would like to add, and do the same with password:

GRANT ALL PRIVILEGES ON *.* TO 'username'@'localhost' IDENTIFIED BY 'password';

4. Note that the command we listed above gives the new user blanket permissions. Instead of granting all privileges, you can limit the user; one example is the SELECT permission. To grant the user SELECT permission only, enter the following command:

GRANT SELECT ON *.* TO 'username'@'localhost';

5. Type \q to exit MySQL so that you can log in with the user that you have just created.

6. It’s simple to log in to MySQL with your new user: just enter the login command we specified previously, typing the new user’s name instead of username:

mysql -u username -p

7. The user we created has all privileges assigned, including the ability to create a database. Create one with the following command, using the name of your database instead of dbname.

CREATE DATABASE dbname;

8. Want to start using this newly created database? Run the following command, again replacing dbname with the name of your database:

USE dbname;

9. MySQL now knows which database you want to work with. You can create a database table by running the following command, for example:

CREATE TABLE example ( id smallint unsigned not null auto_increment, name varchar(50) not null, constraint pk_example primary key (id) );
INSERT INTO example ( id, name ) VALUES ( null, 'Sample data' );

Scripting in SQL

You don’t need to run SQL commands one at a time, as in our previous example. Instead, you can execute several commands in one go by making use of a SQL script file.

This is how you use a SQL script file to create a new database, and to populate your new database:

1. Just like we did in the previous example, we start by creating a new user for the database. These are the commands you need:

mysql -u root -p
GRANT ALL PRIVILEGES ON *.* TO 'username'@'localhost' IDENTIFIED BY 'password';
\q

2. Next, create a new file on your computer and call it example.sql. You can use any text editor you like, as long as the file is stored as plain text.
3. Copy and paste the following into your text file:

CREATE DATABASE dbname;
USE dbname;
CREATE TABLE tablename ( id smallint unsigned not null auto_increment, name varchar(20) not null, constraint pk_example primary key (id) );
INSERT INTO tablename ( id, name ) VALUES ( null, 'Sample data' );

4. Before you save your file, change dbname into the name of your database and tablename into the name of the table you want to add to your new database.

Remember, you can modify this script to make as many tables as you like, and even to create multiple databases all in one go. Note that our example creates a very basic table: you might want to add more complex requirements by expanding on the script.

5. Save the changes to your file and close your text editor.

6. Processing your script is easy. Type the following command in, replacing username with the name of the user you created in the first step.

mysql -u username -p < example.sql

Note that MySQL executes the commands in a script one statement at a time. In our example file, once MySQL has finished executing the entire file, you will see that a new database and a new table have been created; the table will also contain any data that you specified in the INSERT statement.
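Alternatively, if you’re already sitting at the mysql> prompt, you can run the same file with the client’s source command instead of piping it in from the shell:

source example.sql;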

Deleting a specific table, or an entire database

Performing a table deletion is not hard. Just type this command once you are logged into MySQL, using the name of the table you want to delete instead of tablename.

DROP TABLE tablename;

Note that to execute this command you need to have already selected which database you are working on via the use command.

Deleting a complete database is simple too. Just execute this command at the MySQL prompt, again replacing dbname with the name of the database you want to remove:

DROP DATABASE dbname;

Beware: MySQL removes the database immediately when you submit that command and does not ask for any kind of confirmation. The database is permanently removed, including all of its contents.

Deleting a database user

You can view a full list of every database user by running the following command:

SELECT user FROM mysql.user GROUP BY user;

Deleting a single user is simple: just run the following command, inserting the name of the user you want to delete in place of username:

DELETE FROM mysql.user WHERE user = 'username';
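Two related commands are worth knowing (both standard MySQL). Deleting rows from mysql.user directly only takes full effect once the grant tables are reloaded, and there is also a single statement that removes an account together with its privileges:

FLUSH PRIVILEGES;
DROP USER 'username'@'localhost';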

My Plesk User Experience (2): Lessons learned from testing Plesk Onyx


So Plesk Onyx came along with NGINX caching implemented. Naturally I was curious, so I removed all my customizations and started comparing the website’s performance with the built-in NGINX caching, other caching methods, and the Speed Kit extension that speeds up websites.

This was the variety of tests and configurations I made on the platform:

(Each entry lists the platform, the web server configuration, and the caching engine configuration.)

  1. WordPress website on Plesk Onyx 17.8.11 – Proxy Mode and Smart static files processing turned ON – NGINX caching OFF
  2. WordPress website on Plesk Onyx 17.8.11 – Proxy Mode and Smart static files processing turned ON – NGINX caching ON
  3. WordPress website on Plesk Onyx 17.8.11 – Proxy Mode and Smart static files processing turned ON – NGINX caching OFF, Redis caching ON
  4. WordPress website on Plesk Onyx 17.8.11 – Proxy Mode and Smart static files processing turned ON – NGINX caching ON, Redis caching ON
  5. WordPress website on Plesk Onyx 17.8.11 – Proxy Mode and Smart static files processing turned ON – NGINX caching OFF, Speed Kit extension ON
  6. WordPress website on WordPress.com – everything in default mode
  7. WordPress website on Vesta CP – NGINX Web Template turned ON with the WordPress2 option selected

I installed the Plesk server (version 17.8.11 update 25) on a DigitalOcean droplet running CentOS 7 with 2 GB RAM. Next, I installed the Redis server as it came, plugged in the Redis Object Cache plugin with its default settings, and added no extra parameters in the additional NGINX directives.

PHP was version 7.2.10 with default settings, running in the “FPM application served by NGINX” mode. The VestaCP server was installed on a DigitalOcean droplet running Ubuntu 16.04.

As a test page, I used a typical blog post with lots of photos (hosted both on the server and externally), a small chunk of text, and one comment.

Testing on the Plesk Onyx Platform


For testing, I used the httperf command line tool (with the same launch parameters) and a well-known online testing system GTmetrix.com. From the GTmetrix.com reports, I chose the following parameters:

  • Time to First Byte (TTFB) – the total amount of time spent to receive the first byte of the response once it has been requested. It is the sum of "Redirect duration" + "Connection duration" + "Backend duration", and one of the key indicators of web performance.
  • Backend duration – once the connection is complete and the request is made, the server needs to generate a response for the page; the time it takes to generate that response is the backend duration.
  • Fully Loaded Time (RUM Speed Index) – a page load performance metric indicating how fast the page fully appears. The lower the score, the better.
  • PageSpeed Score
  • YSlow Score

The httperf utility was launched with the following parameters:

httperf --hog --server jam.pavuk.su --uri=/index.php/2018/10/03/kgd/ --port=443 --wsess=100000,5,2 --rate 1000 --timeout 5

This creates 100,000 sessions (5 calls each, 2 seconds apart) at a rate of 1,000 per second. The following metrics reported by httperf were the most interesting:

  • Connection rate – the real speed of creating new connections; it shows the server’s ability to process connections.
  • Request rate – the speed of processing requests, in other words the number of requests a server can execute per second; it shows web app responsiveness.
  • Reply rate – the average number of server replies per second.

Plesk Onyx Test Results


Clearly, there’s an ocean of tools and solutions for testing website performance, some more complete and respected than others. But even the tools I used allowed me to come to pretty objective conclusions. The test results are summarized in the table below, with green highlighting the best values of each parameter and red the worst.

Plesk Onyx test results table

And so, after analyzing the received data, we can conclude the following:

  1. Unchanged PageSpeed and YSlow Scores
    PageSpeed and YSlow scores in Plesk remain exactly the same no matter the configuration. They don’t depend on caching or other server settings, but rather on things like code optimization, image size, gzip compression and CDN usage.
  2. Caching is essential for speed
    No caching at all on Plesk gives the worst time metrics: Fully Loaded Time and TTFB increase dramatically, and websites with caching turned off are significantly slower.
  3. NGINX and Redis are a successful combo
    Comparing caching methods, the NGINX caching used in Plesk appears better than Redis Cache; possibly the default Redis Cache configuration doesn’t allow higher performance. It’s not entirely clear how the combination of both caching tools works, but it gives quite decent TTFB and Backend duration metrics.
  4. WordPress performance suffers
    WordPress.com shows the worst performance results. However, by default it is actually reasonably well optimized in terms of PageSpeed score.
  5. Vesta and NGINX mean extremely fast page load
    Using the lightweight Vesta control panel with the NGINX Web Template + php-fpm (wordpress2) option designed for WordPress hosting turned on gives great speed results. What’s more, VestaCP offers custom NGINX web templates for WordPress hosting, including NGINX caching support.

Moving to a new DigitalOcean Droplet


I deployed Plesk to the new DigitalOcean droplet using the Web Installer, as it doesn’t require me to go to the server via SSH: everything is done in the web interface. This recent migration from my VPS to a new DigitalOcean droplet gave me new data for my latest Plesk experience. All in all, the migration was successful with minor warnings, which in most cases I resolved using the migration wizard’s suggestions. The bottom line is that Plesk, with its key features and settings turned on, gives very good results for your website.

I also strongly recommend turning on NGINX caching in Plesk if you’re seeking a simple and reliable way to speed up your website; you won’t need to set up any difficult configurations. And web pros can make the most of Plesk by fine-tuning it as they see fit. That’s what it’s made for.

Finally, my story is aimed at people without professional knowledge who simply want to use the built-in Plesk features. So I hope it gives you a good reason to log in to Plesk and take a fresh look.

My Plesk User Experience (1): Easy Starts and Common Issues


Plesk first crossed my path when it came packaged with web hosting acquired from a Russian provider. At the time it was version 12.0, but I never paid any attention to it until I discovered that part of its service was domain name registration.

Starting Off with Plesk

It couldn’t hurt to register a couple of domains for myself, and so I did. I added them to Plesk, and configured the DNS records. Now these websites loaded default web pages. Then, as I already had websites hosted in Plesk, I thought “Why not use mailboxes registered on my own domains?”. So I went and created a couple of mailboxes and configured Roundcube webmail.

But it was all just personal use until I started using this setup as a sort of test server, in order to work through questions from forum users. And so my Plesk server operated like this for a while without any real use cases. That is, until the start of 2017, when I spontaneously took a closer look at something I had available but had left unused this whole time.

Easy Building on the Plesk Platform


I realized that I could now use my own platform for my personal blog. It didn’t take me long to choose WordPress as I had previous experience with it. What’s more, the new Plesk Onyx had integrated its WordPress Toolkit, which looked promising. After getting a license with additional extensions, I started building – themes, plugins, you name it, before publishing my first posts.

Plesk is also built for multiple domains. So when my famous American Instagrammer friend needed a website to develop her “Travelling with kids” idea, I offered my hosting platform.

Within Plesk, I created a personal account for her and subscriptions with two domains. One was used to host her website, and the other to host her personal mail.

She quickly learned how to use the WordPress admin dashboard and Plesk. She created mailboxes, installed WordPress plugins and themes, then created posts and moderated comments, which I believe says a lot about how easy Plesk’s interface is.

As thousands of subscribers were actively visiting both our blogs, it was time to pay more attention to Plesk server maintenance, and later to server optimization, which meant regular work in the Plesk interface and even more in the Linux command line. But more on that later. Before that, there were common issues of all sorts that I had started to face.

Issues uncovered and solved by using Plesk

  • Service downtime
    Various services like httpd and MySQL stopped every now and then. I managed to solve this by turning on and configuring Watchdog.
  • Memory usage
    Then Health Monitor started constantly notifying me that MySQL was consuming too much RAM.
  • Basic MySQL settings
    I had optimized MySQL’s operation modes via the CLI and thought it would help to have at least some basic MySQL optimization settings in the Plesk interface. Eventually, the VPS’s RAM was increased from 1 to 2 GB, solving the issue.
  • Frequent updates
    Email notifications about new WordPress plugin updates made me log in to Plesk often. I am one of the “update-it-all” types and very meticulous when it comes to installing the latest software versions. The Smart Updates feature in WordPress Toolkit solved this.
  • Extensions accessibility
    I used to find accessing my installed extensions inconvenient, so it was great when installed extensions, such as WordPress Toolkit, got icons in the left-hand menu.

Speeding up and hardening the WordPress Website


During an internal contest for the best WordPress website hosted in Plesk, I focused on two goals: I wanted to make my WordPress website the fastest and the most secure.

To achieve an A+ rating on ssllabs.com, special NGINX parameters became necessary. They were installed via the Additional nginx directives field and the /etc/nginx/conf.d/ssl.conf file. Maximizing the speed of my NGINX-powered website was another matter entirely.

At that time, NGINX caching wasn’t yet implemented in Plesk, so I tried various caching solutions, such as Redis, Memcached, and NGINX’s own caching, all via the CLI, of course, with the help of customized settings.

It didn’t take long to realize that the NGINX version shipped with Plesk was not suitable for trendy acceleration technologies like caching, the Brotli compression method, the PageSpeed module, or TLS 1.3. The Plesk forum also raised this issue, as it seemed to occupy the minds of advanced users.

The result was that users published different ways to compile the latest NGINX versions, which support these modern technologies, and to substitute the NGINX version shipped with Plesk for a custom build. I joined forum users in compiling and optimizing NGINX builds for my Plesk server, all during the contest.

In the end, I got the speedy WordPress site I wanted powered by customized NGINX with Redis caching. All was well until Plesk Onyx was released. See what happened next in part 2 of my Plesk experience story tomorrow.

MySQL vs MariaDB – Can One Replace The Other?


With all MariaDB’s power under the hood, “MySQL vs MariaDB” suddenly became a legitimate topic of conversation. The two databases are so similar that MariaDB negates the need to upgrade MySQL, because it can usually replace MySQL in a straight swap. Here’s an in-depth comparison of the two, their relation to Plesk, and how to set things up.

So, what’s MySQL?

You may have thought about upgrading MySQL, so this might be one of the first questions that comes to mind. The short answer is that it’s an open-source relational database management system with its roots in SQL, or Structured Query Language. The long answer is that MySQL began life in 1995.

A Swedish company with the catchy name of MySQL AB came up with it, before Sun Microsystems bought it in 2008 and Oracle acquired Sun in 2010. Backed by Oracle, MySQL runs on the usual suspects: Linux, UNIX and Windows. Although it’s versatile enough to be used in all kinds of settings, it’s best known for use with web apps and online publishing.

MySQL is a part of the LAMP web development platform, or ‘stack,’ which is the common term used for a bunch of software components used together for a particular purpose. LAMP is an acronym that looks like this:

  • Linux
  • Apache Web server
  • MySQL, the relational database management system
  • PHP, the object-oriented scripting language (Perl or Python are also sometimes used in place of PHP).

We can only guess that if Apache web server had been named Inuit, then LAMP would have been LIMP, and we wonder whether sales would have taken a nosedive as a result? But jokes aside, anyone can use MySQL thanks to its GNU General Public License (GPL). However, Oracle insists that enterprise users pay for the privilege of using their offering in commercial settings.

What is MariaDB?

How comparable is performance when we’re talking about MariaDB vs MySQL? Well, first you need to know what MariaDB is. It’s a close relative of MySQL: a software fork that MySQL’s original developers created in 2009, after Oracle Corp. picked up MySQL, so that a stand-in would exist.

MariaDB has SQL in its DNA and allows for ACID-style data processing with assured atomicity, consistency, isolation, and durability for transactions. It has lots of features which the developers at MariaDB Corp. have worked hard to keep similar to MySQL’s, including support for JSON APIs, parallel data replication, and a variety of storage engines such as Aria, ColumnStore, InnoDB, MyRocks, Spider, TokuDB, and Cassandra.

But there are some areas where the two are incompatible. For instance, from version 10.1 on, MariaDB stores JSON data using a different format to MySQL 5.7. To get around this, users who want to copy columns of JSON objects from MySQL to MariaDB can either convert them to the format the latter uses or run statement-based replication jobs with SQL.

MariaDB Corp. offers training, migration services, remote management, and various tailored products to subscribers. MariaDB Foundation, which was set up in 2012, looks after the source code to make sure that the software remains open source.

MySQL vs MariaDB comparison

Database Structure
MySQL: MySQL is an open-source relational database management system (RDBMS). As with every other relational database, MySQL users work with staple constituents such as tables, constraints, triggers, roles, stored procedures, and views. Each row, or record, is identified by a primary key, and foreign keys are used to make sure that related tables can refer to each other accurately.
MariaDB: MariaDB is a fork of MySQL, so the database structure and indexes of MariaDB are the same as they are with MySQL. The upshot of this is that your applications will still work if you switch, because the way the data is organised doesn’t need to change.

Indexes
In both systems, most indexes (PRIMARY KEY, UNIQUE, INDEX, and FULLTEXT) are held in B-trees; indexes on spatial data types are the exception and use R-trees. MySQL also allows hash indexes, and the InnoDB engine uses inverted lists for FULLTEXT indexes.

Database Deployment
MySQL: MySQL is written in C and C++, and binaries are available for Microsoft Windows, OS X, Linux, AIX, BSDi, FreeBSD, HP-UX, IRIX, NetBSD, Novell NetWare, and many more systems.
MariaDB: MariaDB is written in C, C++, Bash, and Perl and has binaries for Microsoft Windows, Linux, OS X, FreeBSD, OpenBSD, Solaris, and others.

Since MariaDB works as a straight swap for MySQL, you should have no trouble uninstalling MySQL, installing MariaDB in its place, and then connecting (so long as the file versions are the same). Please be aware that you should run mysql_upgrade to conclude the upgrade process.

Replication / Clustering
MySQL: Replication with MySQL is asynchronous: one server acts as a master to the others’ slaves, and you can replicate all databases, selected databases, or even selected tables within a database. MySQL Cluster is a technology providing shared-nothing (no single point of failure) clustering and auto-sharding (partitioning) for the MySQL database management system. Internally, MySQL Cluster uses synchronous replication through a two-phase commit mechanism to ensure that data is written to multiple nodes. This contrasts with what is normally termed “MySQL Replication”, which is asynchronous.
MariaDB: MariaDB offers master-master and master-slave replication as well, and uses Galera Cluster for multi-master setups. From MariaDB 10.1 onwards, Galera is included with MariaDB; to enable clustering you only need to activate the configuration parameters.

Support
MySQL: MySQL developers and support engineers provide round-the-clock coverage for patches, updates and bug fixes as part of Oracle’s lifetime support arrangement. Oracle offers MySQL Premier Support, Extended Support, and Sustaining Support, depending on your needs.
MariaDB: MariaDB’s engineers provide round-the-clock support under the enterprise subscription for mission-critical production systems. Support staff are well versed in both MariaDB and MySQL.

Connectors
MySQL: MySQL provides a variety of database connectors, including C, C++, Delphi, Perl, Java, Lua, .NET, Node.js, Python, PHP, Lisp, Go, R, D, and Erlang.
MariaDB: MariaDB provides various database connectors, including ADO.NET, C, C++, D, Java, JavaScript, ODBC, Perl, PHP, Python, Ruby, and a Visual Studio plug-in.
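For context, enabling Galera clustering in MariaDB 10.1+ mostly comes down to activating a handful of wsrep_* parameters in the server configuration. The snippet below is only a minimal sketch: the node IP addresses are placeholders and the library path for wsrep_provider varies by distribution.

[mysqld]
binlog_format=ROW
default-storage-engine=innodb
# Galera-specific settings (values below are illustrative placeholders)
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.1,10.0.0.2,10.0.0.3
wsrep_cluster_name=example_cluster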

MySQL and MariaDB under Plesk

Plesk keeps its own databases (which can sit locally on the same server as Plesk or remotely) for things like webmail service, its customers’ websites and their apps. It offers integration for MySQL, MariaDB and PostgreSQL database management engines and provides each of them with their own management tools.

Plesk has its own MySQL database on the local server that is a key component in keeping it running. The good news is that from version 11.5 onwards you can use MariaDB as a replacement. MariaDB vs MySQL performance shouldn’t concern you, as MariaDB performs equally well.

Replacing MySQL with MariaDB in Plesk

  1. Back up the databases sitting on your MySQL server with one of these commands:
    • to back up all databases:

# mysqldump -uadmin -p`< /etc/psa/.psa.shadow ` --all-databases | gzip > /root/mysql.all.dump.sql.gz

  • for just backing up the data which Plesk needs:

# mysqldump -uadmin -p`< /etc/psa/.psa.shadow ` --databases mysql psa apsc | gzip > /root/mysql.mysql-psa-apsc.dump.sql.gz

  2. Configure the MariaDB repository on your server. Make sure to choose a version of MariaDB equivalent to the version of MySQL you are using. Plesk supports MariaDB versions 5.5, 10.0, and 10.1.

Note: after you’ve installed MariaDB, don’t disable this repository on your server. Plesk needs it to be enabled for updates.

  3. If the Watchdog Plesk extension is installed on your server, stop it, and also stop any other monitoring services that might start the mysql service after you’ve stopped it manually.
  4. Install MariaDB by following one of the methods mentioned here.
  5. Turn Watchdog and any other services that you stopped in step 3 back on.
  6. Tell Plesk about the changes in the MySQL component:

plesk sbin packagemng --set-dirty-flag

  7. (Optional) If you get any MySQL errors following this update, restore the backup you created in step 1:

zcat /root/mysql.all.dump.sql.gz | mysql -uadmin -p`< /etc/psa/.psa.shadow`

MariaDB installation guide

Plesk uses MySQL as both admin and client RDBMS. Plesk 11.5 for Linux and higher allows for MySQL alternatives (such as MariaDB or Percona Server) to be used as drop-in replacements for it. Upgrade from installations with drop-in replacements is also supported.

This guide explains in detail how to replace the MySQL server on a Plesk installation with a MariaDB server. It is aimed at administrators with intermediate experience and an understanding of package management for their OS. Basic experience with service management and MySQL maintenance is also required.
