Setting up Your Ideal Web Development Environment With Plesk Essentials

Morning beverage ready. Mail and calendar checked. Daily meeting with the team done – it’s time to start your engines and crack on with your project. If this sequence sounds familiar, it’s because you’re also immersed in the web developer’s everyday routine.

Carrying out your daily tasks might be an easy-peasy chore. But when it comes to starting a new project from scratch and setting up your web development environment, you may need to add a few more steps. Before cooking up a new project, you must have all the ingredients sorted – that is, prepare all the data and tools you’ll need along the way.

And indeed, there’s a significant number of web development tools out there. But which tools are suited to web developers? How do you decide which ones to keep in your toolbox? In this article, we’ll bring you some prime extensions and toolkits that will make your web development experience even better. Let’s get to know some of Plesk’s essentials for web development, DNS, security, SEO, server management, and backup.

Organizing Your Toolbox

At Plesk, our goal is to make web development simple and easy. Its integrated platform, with full development and deployment capabilities, lets you build, secure, and run servers and websites. But if what you want is to level up your skills with great tools, here are some excellent examples. Let’s dig deeper:

DNS, Security, and Web Plesk Extensions for Web Developers

Plesk DNSSEC

The DNSSEC acronym stands for Domain Name System Security Extensions. It’s a set of DNS protocol extensions that sign DNS data to secure the domain name resolving process.

The Plesk DNSSEC extension helps make the Internet safer. Let’s see what it allows you to do:

  • Configure the settings used for key generation and rollover.
  • Sign and unsign domain zones according to the DNSSEC specifications.
  • Receive notifications related to DNSSEC records and keys.
  • View and copy DS resource records and DNSKEY resource record sets.
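Once a zone is signed, you can verify the published records from any machine using dig – a minimal sketch, with example.com standing in for your own domain:

 $ dig DS example.com +short       # the DS record as seen from the parent zone
 $ dig DNSKEY example.com +short   # the zone's public keys
 $ dig A example.com +dnssec       # look for RRSIG records in the answer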

Docker

Docker is a handy software technology that provides containers: an extra layer of abstraction and automation on top of operating-system-level virtualization. As a flexible Plesk tool, Docker can help you perform a wide variety of tasks. But that’s not everything. Because Docker builds on existing technologies, it also lowers the barrier to adopting new ones, acting as a bridge between different operating systems and developers.

The extension also decouples applications from the underlying system infrastructure, allowing you to expand capacity through collaboration. Here’s more of what you can achieve with Docker for Plesk:

  • On-demand access to a vast range of modern technologies.
  • Upload a custom image or choose one from a catalog.
  • Deploy and manage Docker containers straight from the Plesk interface.
  • Install Docker containers locally or to a remote node registered in Plesk.

Web Presence Builder

If you’re a beginner in web development, Web Presence Builder is the right tool for you. It doesn’t require deep HTML knowledge or graphic design skills, and it helps you create professional-looking websites – not bad, huh?

Web Presence Builder also provides a simple visual editor and a broad set of templates for different kinds of websites. Pick a page design you like and a content template, add your text to the pages, and publish the website. Here’s what you can do with this tool:

  • Create web pages.
  • Add a wide variety of content (text, images, video, scripts, and more).
  • Edit website settings (website name, keywords, icons, and so on).

Joomla! Toolkit

Up next is the Joomla! Toolkit – a complete toolkit to power Joomla! websites. With it, you can mass-manage, secure, and automate all your instances, extensions, and templates running on a server managed by Plesk, all from one single entry point. Here’s more:

  • One single dashboard to control, maintain and monitor all your instances.
  • One-click installer to download, initialize, and configure Joomla! from start to finish.
  • A robust security scanner that hardens your sites against all types of cyberattacks.

Plesk WordPress Toolkit

As a developer, you’re probably craving lots of features and intelligent tools that make your daily workload easier to digest. Well, we’re proud to say that our beloved Plesk WordPress Toolkit is definitely one of them. With this toolkit, you can focus on core tasks and automate the mundane ones. And substantially increase productivity, security, and efficiency too.  

The Plesk WordPress Toolkit is by far the most complete tool for WordPress admins seeking pre-configured solutions for the best possible performance, as well as an intelligent helper that keeps WordPress sites secure and up-to-date without breaking a live site. In case you’re not convinced yet, here’s why using this tool is not only a smart idea but also a rewarding experience:

  • Manage all WordPress sites on the server, simplifying admin tasks.
  • Install, activate, update, and remove plugins and themes from one single dashboard.
  • Keep the highest level of security by selectively securing websites.
  • Clone and stage websites to simulate changes before going live. 
  • Synchronize the changes between files and databases of different sites.
  • Optimize SEO for higher traffic and manage WordPress search engine indexing.

Smart Updates

A great addition to the Plesk WordPress Toolkit is the Smart Updates feature. This power-tool combo automatically updates WordPress core, plugins, and themes using AI. Here’s more:

  • Smart Updates clones and simulates your WordPress updates before performing them.
  • It mitigates the risk of hacked sites by running updates in a secure staging environment without affecting production. 
  • You can activate Smart Updates in WordPress Toolkit with a switch, as well as automate update analysis email notifications.

SEO, Backup, Cloud, and Server Plesk Extensions for Web Developers

SEO Toolkit

Along with performance, a well-thought-out SEO strategy is fundamental to improving your search engine rankings. And with better rankings come more visibility, traffic, and conversions.

Organic search can become your primary source of clicks, traffic, and revenue for your business. With the SEO Toolkit, you get all the tools you need to give your customers a chance to find you online. And help them pick your website over those of your competitors. We’re listing some reasons why you should use SEO Toolkit for your website:

  • Track SEO KPIs and check your website’s Visibility Score to measure your success.
  • Site Audit analyzes your site and gives you tips on how to enhance optimization.
  • SEO Advisor provides you with a to-do list to improve your performance, based on your Site Audit and Visibility Score.
  • Log File Analyzer shows you how search engine bots crawl your site and pages, helping you get them ranked and indexed accordingly.
  • Check each of your keyword’s performance and compare it directly to your competitors’.

Google PageSpeed Insights

As explained above, one of the main worries for web developers is site performance. After all the work you’ve put into development, you just want the site to run smoothly and without issues. But don’t panic – here’s what you need to know to achieve good visibility in search engines.

First of all, you need to create websites that are fast, useful to your visitors, optimized for all traffic, and most importantly, mobile-friendly. And secondly, you should monitor your sites with tools like Google PageSpeed Insights. It will help you analyze your website’s content and its performance to suggest specific improvements. Here’s how the PageSpeed Insights extension works:

  • Analyzes the performance of websites hosted on your Plesk server.
  • Assigns every website a desktop and mobile score depending on its performance.
  • Generates a report based on the results of the analysis and displays suggestions to optimize your websites’ performance.
  • Provides links in the extension UI to the suggested tools aimed at improving websites’ performance (for example, the mod_pagespeed Apache module).
  • Provides already-compressed versions of your static files to reduce their size (free API key required).
  • Installs the mod_pagespeed Apache module and lets you configure it for your needs.

Plesk Cgroups Manager

Often, web developers suffer from what’s known as the ‘noisy neighbor’ problem. For those unfamiliar with the concept: it occurs when one website on shared hosting consumes all system resources and disrupts the performance of the other websites.

To avoid this common problem, we recommend using the Plesk Cgroups Manager extension. This solution helps you deliver reliable and continuous availability. The Cgroups Manager lets you control the amount of CPU, RAM, and disk read/write bandwidth resources each subscriber or tier of subscribers gets. You can use Plesk Cgroups to:

  • Prevent individual subscriptions on your shared environment from consuming all of your server’s resources.
  • Automatically set resource consumption limits, monitor usage, and send email notifications when it exceeds a certain level.
  • Set limits at two levels – subscriber service plan level or subscriber level.
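The extension manages all of this from the Plesk UI, but under the hood these limits boil down to Linux control groups (cgroups). Purely as an illustration of the raw mechanism, here’s a minimal sketch on a cgroup-v2 system, assuming the memory and cpu controllers are enabled – the path and values are examples:

 $ sudo mkdir /sys/fs/cgroup/demo
 $ echo 512M | sudo tee /sys/fs/cgroup/demo/memory.max         # cap memory at 512 MiB
 $ echo "50000 100000" | sudo tee /sys/fs/cgroup/demo/cpu.max  # allow 50% of one CPU
 $ echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs         # move the current shell into the group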

Backup to Cloud Pro

Last but not least, we have the Backup to Cloud Pro extension. This solution is for all web professionals who want to set up different backup schedules to the cloud effortlessly. What’s more, it automates your backup management, letting you focus on more exciting and innovative tasks. It’s easy to set up, and you can secure your domains with Google Drive, Amazon S3, Dropbox, DigitalOcean Spaces, and Microsoft OneDrive:

  • Back up the entire server, individual user accounts with websites, or individual subscriptions.
  • Schedule backups.
  • Restore data from backup archives.

CyberDeals Sale – 50% Off Selected Plesk Extensions and Toolkits

Thank you for reading up to this point – as a reward, we want to share a sneak peek of what’s coming this November. From Friday the 27th until Monday the 30th, we’re giving 50% off all the extensions listed in this article as part of our CyberDeals sale. So if you don’t want to miss out on these unbeatable offers, stay on the lookout for updates. And catch them before they fly!

Plesk Multiple Server Management – How it Works

The biggest challenges we run into as system admins and web experts are multiple server management, site management, and maintenance. If we don’t do this right, we face consequences: we waste time and resources. So it’s essential to own a web hosting control panel that makes the whole thing simpler, while letting you create sites and apps, automate tasks, handle website security, and more.

Plesk Onyx is an all-around control panel and WebOps solution. Devs rely on it for its coding environment, not to mention everyday tasks, as it offers many extensions – including Node.js, Ruby, WordPress Toolkit, Joomla! Toolkit, and more.

Plesk Control Panel Bonuses

Plesk supports Docker, which empowers developers to create and manage their new software by deploying and managing all Docker containers straight from the control panel. Additionally, Plesk offers Git integration – deploying apps and sites quickly from a Git repository, remote or local.

Plesk server management continues to add to its multiple server management capabilities, giving you absolute control of multiple accounts and subscriptions across all servers.

Multiple Server Management with One Control Panel

Plesk’s Multi Server extension lets you administer multiple servers and routine tasks with just one control panel. It doesn’t matter if you’re a hosting provider, a reseller, or managing your own hosting – constantly switching between several hostnames, user IDs, and so on is exhausting.

Plesk designed its Multi Server extension with this in mind: effective and secure multiple server management. With this extension, you can perform hosting actions on many servers and manage the infrastructure with ease, because memorizing hostnames, passwords, and login credentials becomes unnecessary.

It’s similar to managing a single web server – except the same scope of features applies across a number of servers, all through a single control panel.


Why the Multiple Server Management Extension?

You can install this extension directly from Plesk’s extension list, but note that you need to install Onyx on all your servers first. You’ll still have all the features Plesk offers, and with this extension you’ll gain additional functionality:

  • Managing as many customer subscriptions and accounts as you want from your control panel.
  • Choosing between any billing systems you like, including yours.

It’s a very useful business-ready platform. Ideal for development studios and web designers who manage many different sites and clients.

What Plesk Multi-Server Management Consists Of

  • At least two nodes, all connected to each other using the SDK extension.
  • Two basic node types: service nodes and a management node.
  • The Plesk Multi Server extension installed on all nodes.
  • The same license key and configuration on all nodes.

Subscriptions and Customer Account Management

So, as we said, this system comes with two types of nodes – service and management nodes. You use the service node to manage hosting, because it has the power to host sites, system databases, and email. The system also ensures quality load balancing, which is important since it decides which node will provide hosting for a new subscription. Meanwhile, the Multi Server extension has a separate API extending Plesk’s API, giving it the power to add commands within the system.

The management node is a single Onyx server, useful for both customers and administrators, and it serves as a single login point. All new customer accounts go into this node too. But remember: this node has no tools for any hosting actions. So we use the management node to create accounts and the service nodes to manage their hosting.

Additionally, when a customer logs into the management node, they see and manage all subscriptions hosted through service nodes. You can see the following information on the subscription tab:

  • Status – a status sign that shows whether a subscription is successful or not.
  • A service node IP address that is provisioned for a certain subscription.

This is how Plesk server management can help you with web server management and multiple server management. You as an admin can rely on this comprehensive platform for its capabilities at all times.

UPDATE: Starting from Plesk Onyx 17.8, the Multi Server feature is no longer available

Varnish for WordPress in a Docker container

Is your website experiencing heavy traffic? Are you looking for a solution that will reduce server load and will improve website speed? Varnish might just be what you need. Varnish listens for duplicate requests and provides a cached version of your website pages, mediating between your users’ requests and your server.

So how do you activate Varnish? In this article, I will show you how you can easily increase your website speed by using Varnish as a one click Docker container. I will demonstrate how using a website caching solution like Varnish can easily improve both page response times and the maximum number of concurrent visitors on your website. To simulate real traffic and measure correct response times, I have used an external server similar to blitz.io, stormforger.com or loadstorm.com to generate lots of traffic and concurrent users to our site.

What is Varnish and why should you use it?


Varnish HTTP Cache is software that helps reduce the load on your server by caching the output of requests in virtual memory. It is a so-called HTTP accelerator and is focused on HTTP only. Varnish is open source and used by high-traffic websites such as Wikipedia.

If you have lots of daily visitors, we recommend using a caching mechanism. You’ll see your response time improve significantly, because the server can send already-cached data back to the client directly from memory, without the resource-consuming request handling on the web server. Additionally, it reduces the load on the CPU, so the server is able to handle many more requests without getting overloaded. I will demonstrate this in the stress tests later.

Running Varnish in a Docker container

Docker is a great open source project that makes it incredibly simple to add Varnish to a running server. We don’t need to install Varnish on the production server; we simply use a ready-to-go Varnish Docker image. The main advantage is that if something goes wrong with the container, we can simply remove it and spin up a new container within seconds. The way Docker containers are designed guarantees that Varnish will always run independently of our system environment. Do you want to know more about Docker containers? Read more about the 6 essentials on Docker containers!

For this tutorial, I will use the newly integrated Docker support on Plesk to activate Varnish. The Plesk interface makes it easy to get a Varnish instance running, only requiring small modifications of the Varnish configuration file to be done using the terminal.

A further improvement would be to rebuild the Varnish Docker image so that it takes our configuration as a parameter from the Plesk UI. For now, I’ll stick to the original Docker image and upload our configuration via shell.

Activate Varnish in Plesk and test on a static page

Okay, let’s try it first on the default static page of Plesk. In the default settings, Plesk uses Nginx as a reverse proxy in front of Apache. This means that Nginx listens on the default port 80 (443 for HTTPS) and Apache on an internal port (7080 for HTTP, 7081 for HTTPS). We will push our Varnish container in between the two web servers. In this scenario, Varnish gets the request from Nginx and fetches the content from Apache. Don’t worry, it’s easier than it sounds!

Go to Docker and search for the image million12/varnish in the Docker Image Catalog. Once found, click “run” and Plesk will download the image to your local machine. After the download, click “run (local)”, which will open the configuration page of the container. The only thing that we’ll change is the port mapping.

[Image: Varnish in a Docker container on Plesk Onyx – port mapping]

Remove the tick from the option “Automatic port mapping” and set an external port (I will use port 32780 in this tutorial) under “Manual mapping”. This means that port 80 of the container is mapped to the external port 32780. By adding a proxy rule, we can “talk” to the container through this external port. Later, we will set the backend server in Varnish to the Apache port, from which the data will be fetched whenever a “cache miss” occurs.
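For reference, this mapping is the Plesk UI’s equivalent of Docker’s -p flag – purely illustrative, since Plesk handles it for you:

 $ docker run -d -p 32780:80 million12/varnish   # map container port 80 to external port 32780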

Test Varnish with a static page

Create a subdomain for testing our Varnish integration on a static page. Once the subdomain is created, go to “Hosting Settings” and deactivate the options “SSL/TLS support” and “Permanent SEO-safe 301 redirect from HTTP to HTTPS”, because we want to test the Varnish functionality over HTTP first. Okay, but how do we redirect requests to the Varnish container? This can be done easily with the Docker Proxy Rules option that you will find in the domain overview.

[Image: Varnish – proxy rules for the Docker container on Plesk Onyx]

Click on “Add Rule” and select the previously created container and the port mapping that we entered manually. If you cannot make a selection, then your container is not running – in that case, click on Docker in the menu and start the container first. If you open the subdomain after you’ve activated the proxy rule, you will see the error “Error 503 Backend fetch failed”. Don’t panic, this is expected behavior: we haven’t configured the Varnish backend server yet!

[Image: Varnish – Error 503 Backend fetch failed]

Configure Varnish properly in the Docker container using SSH

This is the only time we need to access the server and the Varnish Docker container via SSH. Open your terminal and type:

$ ssh user@111.222.333.444   # replace with your user name and the correct IP address

Enter your password if required to get access to the server. Tip: use a private / public key pair to improve the security of your server!

First of all, we need to find out the ID of our Docker container. To list all active containers, type into the command line:

$ docker ps
[Image: Varnish – running Docker containers on Plesk Onyx]

Copy the container ID and use the following command to access the Docker container:

$ docker exec -it ID bash   # replace ID with the correct container ID

Okay, the most important thing to do is change the host and port values for the default backend server in the file /etc/varnish/default.vcl.

For .host we will enter the IP address of the server where Plesk is running (in our example 111.222.333.444), and for .port, 7080. As mentioned before, this is the default Apache HTTP port in Plesk. We have to use this port because, internally, Varnish can only speak over an unencrypted channel!
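The relevant block in /etc/varnish/default.vcl should then look roughly like this (with the IP address from our example):

backend default {
    .host = "111.222.333.444";   # the server where Plesk runs
    .port = "7080";              # Apache's internal HTTP port in Plesk
}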

Tip: Do we have a cache hit or miss?

How do we see whether the content was loaded from memory or from the Apache server? You can see that the request was processed by Varnish through a special header entry in the response, but you will not know whether the data was loaded from memory or requested from the Apache server.

To achieve this without having to use varnishlog in the console, we can set another header with the corresponding value (cache hit / cache miss). We have to use the function sub vcl_deliver, which is the last exit point for almost all code paths (except vcl_pipe). Add the following code within the curly brackets of the function sub vcl_deliver:

if (obj.hits > 0) {
     set resp.http.X-Cache = "HIT";
} else {
     set resp.http.X-Cache = "MISS";
}

Use the Developer Tools in your browser to examine the response

Save the modified file and exit the container. Switch to your Plesk UI again and restart the container in Docker with the “Restart” button. When you see the success message, go to the tab of the subdomain with the 503 error message. Do not reload the page yet but open the Developer Tools first (alt + cmd + i on a MacBook). Go to the “Network” tab and reload the page. Select the first entry (URL /) and take a closer look at the “Response headers”.

[Image: Varnish – cache miss]

If everything was done properly, you will see some new header variables:

X-Cache – this is the variable that we defined in the configuration file. After the first reload it should display a “MISS”.
X-Varnish: ID – the internal transaction ID that Varnish assigned to this request (on a cache hit, it also lists the ID of the request that stored the object).
Via: "1.1 varnish-v4" – this shows that the request was redirected through the Varnish container.
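If you prefer the terminal over the browser’s developer tools, you can inspect the same headers with curl (the subdomain name here is a placeholder):

 $ curl -sI http://static.example.com/ | grep -iE 'x-cache|x-varnish|via'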

Okay, it’s about time to see some Varnish magic! Click on the reload button in your browser to reload the page. This time it will be loaded from the virtual memory.

[Image: Varnish – cache hit]

What about websites that are using HTTPS to encrypt the connection?

It also works – and the best part is that you don’t have to change anything! Create an SSL certificate for the subdomain using the great Let’s Encrypt extension. After the certificate has been created and assigned (the extension does this automatically), go to the static page and reload it using https:// instead of http://. If you open your browser console, you will see an X-Cache: HIT in the response headers.

Activate Varnish caching on your WordPress website

We just saw that it’s technically possible to activate Varnish inside a Docker container with Plesk. Now let’s try it on a WordPress website!

The main difference is the configuration of the VCL file within the Varnish container. WordPress is a dynamic CMS, so we cannot cache everything without restricting the functionality of the system; the administration pages shouldn’t be cached, since changes would no longer be possible for logged-in users.

There are many pre-defined Varnish configuration files for WordPress available on the internet from various developers. In most cases, you can use them right away without any modifications. For our test integration, we will take the configuration file created by HTPC Guides (with small adjustments – link below).

For this article and for the stress tests, I’ve created a fully working website with WordPress. I want to test under real conditions, not with a default WordPress installation. The website should also be secured with an SSL certificate and be reachable only over HTTPS. For this reason, I will also activate an SSL certificate with the help of the Let’s Encrypt extension for this installation.

Use a WordPress Plugin to activate support for HTTPS

Important: do not use the option “Permanent SEO-safe 301 redirect from HTTP to HTTPS” within Plesk’s “Hosting Settings”, because this will lead to a redirect loop in our particular environment. Instead, I will use a WordPress plugin to switch our installation completely to HTTPS. The plugin is called Really Simple SSL and can be downloaded from the official plugin repository.

Make the same preparations as for the static page, but this time add the additional configuration required for WordPress to the default.vcl configuration file inside the Docker container. I’ve used this Varnish configuration file (GitHub Gist) for my test installation. Don’t forget to adjust the backend server again, as we already did for the static page!
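To give you an idea of what such a configuration does, here’s a simplified excerpt of the core rule found in most WordPress VCL files – the linked Gist does considerably more (cookie handling, static file rules, and so on):

sub vcl_recv {
    # Never cache the WordPress backend or the login page
    if (req.url ~ "^/wp-(admin|login)") {
        return (pass);
    }
}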

Tip: Do not forget to restart the Docker container from the Plesk UI to reload the configuration information. If you forget to restart the container, then Varnish will not work properly with the WordPress website.

Now reload the front page of WordPress with the browser console open. The first loading process should throw an X-Cache: MISS but the second (and following) reloads will return an X-Cache: HIT.

[Image: Varnish in WordPress – cache hit]

Let’s run some stress tests with Blitz.io!

We’ve seen that Varnish helps improve the performance of the website. But what about the promised load reduction on the CPU? We can test it with so-called stress testing, which loads the website with many concurrent users per second for a certain time span. Without any protection against overload, the server will respond steadily more slowly until requests cannot be handled at all anymore. With Varnish activated, the server will be able to serve such intensive request volumes for longer without throwing errors.

All right, it’s time to run load and performance tests with the external service provider Blitz.io.

Note: I used a very small server for this test instance (only 1 CPU and 500MB Memory), so the positive impact of Varnish should be much higher on a more powerful server!
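By the way, if you’d rather not rely on an external service, you can get a rough local approximation with ApacheBench (ab, shipped with apache2-utils); the domain below is a placeholder:

 $ ab -n 1000 -c 100 https://wp.example.com/

This fires 1,000 requests at a concurrency of 100 and reports response times and failed requests – crude compared to Blitz.io, but enough to see the cache working.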

Result WITHOUT Varnish:

[Image: Stress test – WordPress without Varnish]

As you can see, I had to abort the stress test because the server couldn’t handle the requests after less than 5 seconds and fewer than 50 concurrent users. After just 15 seconds, the server collapsed completely and no requests could be handled anymore!

Result WITH Varnish:

[Image: Stress test – WordPress with Varnish]

Varnish magic! As you can see, the Varnish cache allows us to keep the server stable even under heavy load. The small test server handled over 300 concurrent users and responded to all requests for over 30 seconds without any errors. After 30 seconds and over 300 concurrent users, the server was overloaded and couldn’t accept further requests. With a more powerful server, the numbers should be much higher! It’s also a great way to keep your website responsive if it suffers a DDoS attack, at least up to a certain number of requests.

Summary: Varnish for WordPress within a Docker container on Plesk

Let me make a small checklist:

  • Varnish in Docker container? Yes.
  • Varnish in WordPress? Yes.
  • Varnish in Plesk? Yes.
  • Varnish for WordPress within Docker container in Plesk? Absolutely, yes!

Mission accomplished! 🙂

As you’ve seen, Varnish can greatly improve the performance of your WordPress website and reduce the CPU load on your server. It’s relatively easy to set up a working environment using Varnish in a Docker container between Nginx and Apache within Plesk. The most important part is the correct configuration of Varnish for your specific CMS.

Thank you for reading. In the next blog post, I will take a look into another memory caching system, Memcached.

Stay tuned and stay Plesky!

AutoScaling WordPress Docker with AWS


Do you run your website with WordPress? Ask yourself: “How many concurrent visitors can it handle?” What if your site is an e-commerce store?

According to Amazon, you lose 1% of sales with every extra 100ms of load time. Today, customers expect your page to load in less than 3s – in their browser, not on your server.

Can you make your site faster?

Making your website fast for one visitor is relatively easy – use NGINX with php7-fpm and cache static content (e.g. with Varnish or Memcached). And if latency is an issue because your visitors come from all over, use a CDN (Content Delivery Network) like Cloudflare or Akamai, bringing your site as close to your users as possible.

All the above will substantially increase your website speed, since pictures and videos will display immediately without having to travel long distances through the web. Bear in mind, though, that JavaScript and CSS files also need to be loaded within milliseconds to make your website look sexy from the very start.

Pssst, now’s when we add the fact that the Plesk platform includes all these technologies, enabling you to run your sites with unparalleled performance – and in a few simple clicks, too.

But if you expect lots of visitors on your website at the same time – which is what will ideally happen once your site becomes popular and successful – one server just might not be enough to handle all the requests.

Multiple server requests – what do you do?

You know when you’re at the supermarket and the line at the checkout is huge? If they’re service-oriented in any way, they’ll open a new checkout to distribute the load. What happens on a crowded Saturday when every last checkout is chock-a-block? People roll their eyes, sigh in despair and are suddenly very likely to visit a competitor next time. Not what we want, is it?

What about giving our customers a fast and reliable service? One which makes them leave with a smile and come back frequently, because they felt oh-so-well-served? Now we’re talking!

But how do you tune WordPress to handle massive parallel requests? That obviously requires several servers – like the checkout desks in the supermarket. But as seen in our crowded Saturday shopping experience, it might not be enough to simply add one or two servers. And adding 10 servers from the start could turn out tremendously expensive and ruin your business case.

Scaling your website works a lot like drawing straight from the power grid: if you have low traffic, you have low costs. When your traffic increases, the infrastructure should automatically scale to handle the load. Ahhh, bliss.

This procedure leads to correlated costs, which shouldn’t cause you any headaches, as more traffic means more business. In other words, if you play your cards right and scale, increased costs for your servers shouldn’t hurt your revenues. In fact, quite the opposite. And if your traffic decreases, your server-related costs will magically disappear too.

Excited to learn how all this funky stuff works?

Great! In order to make WordPress as fast as a bullet, we need to accomplish the following steps.

  1. Set up your own database server with enough power on a separate machine.
  2. Move all static files to a file storage which is faster in delivering files.
  3. Create a CDN in front of your site to bring at least static files close to your end users.
  4. Set up multiple servers with the exact same WordPress site (including configuration).
  5. Get a load-balancer and have it distribute the load between these WP servers.
  6. Depending on how you want to make updates on your site, you can either:
    1. redeploy all instances to ship your changes (better performance), or
    2. use a shared filesystem across all instances (slower, but easier to update).

Ramp up new instances automatically

But the king’s class is to ramp up new instances automatically, driven by demand, and ramp them down again when they’re not needed.

How to accomplish that?

We need an infrastructure that can be managed via APIs and that is capable of auto-scaling based on events (e.g. a “high CPU consumption” alarm).

In our example, we use Amazon AWS, since it is the most popular cloud service provider by number of web-facing servers and has the largest ecosystem. But Microsoft Azure and Google Compute Platform also have their strengths and can easily compete with AWS. Just pick one and you’re good to go. Again, Plesk runs smoothly on all major cloud service providers and is available as an app on the AWS Marketplace.

Before going into the APIs, we should decide how we want to deploy WordPress on the servers. We do not want to deploy manually – we want to let the infrastructure auto-scale for us instead – which means the auto-scaling component decides when to add or remove servers. We could use Chef, Puppet, Ansible or simple bash scripts for this task, but our preference is to use Docker to simply package our WordPress including our website content and configuration fully separated from the infrastructure. And then just put this Docker image on each server and run it as a container. With this approach it is super simple to configure all we need once and reproduce deployments as often as we want with no effort.

[Image: App instances – auto-scaling WordPress with Docker and AWS]

How to build a Docker Image

To build a Docker image you first need to describe it in a Dockerfile. You can see the Dockerfile we’re using here. But to sum up, we build our image with the latest WordPress version by running:

 $ docker build -t janloeffler/wordpress-aws-scaler:latest . 

After building it, we need to push it to a Docker Registry – which is a file storage for Docker images. We use the official Docker Hub here:

 $ docker push janloeffler/wordpress-aws-scaler:latest 

You can easily run your image containing WordPress locally to test it out. Be aware that you need to specify parameters like the database hostname and credentials.

 $ docker run -p 80:80 -p 443:443 -it janloeffler/wordpress-aws-scaler:latest 
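For example, something along these lines – note that the exact environment variable names depend on the image’s Dockerfile, so treat the WORDPRESS_DB_* names below (borrowed from the official WordPress image’s convention) as placeholders:

 $ docker run -p 80:80 -p 443:443 \
     -e WORDPRESS_DB_HOST=db.example.com \
     -e WORDPRESS_DB_USER=wp \
     -e WORDPRESS_DB_PASSWORD=secret \
     -it janloeffler/wordpress-aws-scaler:latest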

Now we need to get more provider-specific, since AWS, Microsoft Azure and Google Compute all have different APIs, and they all name their services slightly differently. In the following example, we’re using AWS.

Scaling – It’s all about APIs

AWS offers tons of REST APIs, and each of them provides tons of API calls with lots of parameters – most of them optional, for flexible configuration. You can access these APIs either directly via REST HTTP calls or by using the AWS CLI on your shell. For now, we’ll use the CLI in this example, which is a wrapper around the REST API and thus easier to use for debugging.

For our super-fast auto-scaling WordPress we need the following APIs:

  1. EC2 (to manage virtual servers)
  2. S3 (to upload files to the file storage)
  3. S3api (to manage the file storage)
  4. RDS (to manage the database)
  5. ELB (to manage the load balancer)
  6. AutoScaling (to configure auto-scaling)
  7. CloudWatch (to monitor load on our servers; required by auto-scaling)
  8. CloudFront (to set up the Content Delivery Network)
  9. SNS (the notification channel between monitoring and auto-scaling)
  10. Route53 (to manage domains and DNS entries)
  11. IAM (to manage access permissions of the infrastructure)

To give you an idea of the complexity – the EC2 API alone provides 210 API calls to manage compute resources on AWS.
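Purely to give a flavor of what’s involved, here’s a hedged sketch of two of the calls such a setup needs – all names, sizes, and zones below are placeholders, and a real deployment also needs a launch configuration and a scaling policy:

 $ aws autoscaling create-auto-scaling-group \
     --auto-scaling-group-name wp-asg \
     --launch-configuration-name wp-launch-config \
     --min-size 2 --max-size 10 \
     --availability-zones eu-west-1a eu-west-1b

 $ aws cloudwatch put-metric-alarm \
     --alarm-name wp-high-cpu \
     --namespace AWS/EC2 --metric-name CPUUtilization \
     --statistic Average --period 300 \
     --threshold 70 --comparison-operator GreaterThanThreshold \
     --evaluation-periods 2 \
     --alarm-actions <scale-out-policy-arn>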

To list all your EC2 instances in your AWS account you can simply run:

 $ aws ec2 describe-instances 

The result of every API call is represented as a JSON response. To automate AWS, you simply have to LOVE parsing JSON 😉
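Luckily, you rarely have to parse it by hand: the CLI’s built-in --query option (JMESPath) or a tool like jq will do it for you. For example, to extract just the instance IDs from the response above:

 $ aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId' --output text
 $ aws ec2 describe-instances | jq -r '.Reservations[].Instances[].InstanceId'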

Since describing all the required API calls would fill approximately 20 pages, we’ll skip that and instead provide a solution that does the whole job of managing and auto-scaling WordPress with just 1!!! single command. Sounds awesome?

Plesk WordPress AWS Scaler is OpenSource

So, good news – you can check out the Plesk WordPress AWS Scaler on our GitHub! Here’s how you do it:

 


Just download the repo to your local machine by cloning it:

 $ git clone https://github.com/plesk/wordpress-aws-scaler.git 
 $ cd wordpress-aws-scaler 

Now execute the Plesk WordPress AWS Scaler script to see its options:

 $ sh manage-wordpress.sh 

[Image: Plesk WordPress Scaler for AWS – available options]

You can adjust the configuration to your needs:

  • WordPress Site Title
  • WordPress Admin Credentials
  • E-Mail Address
  • Domain Name
  • New Relic License Key (for application performance management)
  • EC2 & RDS configuration e.g. server sizes (here: instance types)
  • And much more

All these parameters are optional, and you can also create multiple config files for several WordPress sites in the same AWS account. To create a new auto-scaling WordPress, simply execute:

 $ sh manage-wordpress.sh create 

To update all instances with a new version of your site:

 $ sh manage-wordpress.sh update 

To delete it, including its data and all dependent resources:

 $ sh manage-wordpress.sh delete 

And if you’re interested in the technical details, just open the file manage-wordpress.sh in your preferred IDE and have a look.

Don’t have Plesk yet?

Get your free download here and try it out. You’ll get a code you can use for 14 days and an email with all the juicy details. After that, our team will be there to support you as you take your next steps for your workload or business. Happy scaling and stay Plesky!

6 essentials on Docker containers


Docker is one of the most successful open source projects in recent history, and it’s fundamentally shifting the way people think about building, shipping and running applications. If you’re in the tech industry, chances are you’re already aware of the project. We’re going to look at 6 key points about Docker.

According to Docker Captain Alex Ellis, containers are disruptive and are changing the way we build and partition our applications in the cloud. Gone are monolithic systems; in come microservices, auto-scaling and self-healing infrastructure. Forget heavy-weight SOAP interfaces – REST APIs are the new lingua franca.

Whether you are wondering how Docker fits into your stack or are already leading the way – here are 6 essential facts that you and your team need to know about containers.

1. Containers are not VMs

Containers and virtual machines have similar resource isolation and allocation benefits – the main difference lies in their architectural approach, which allows containers to be more portable and efficient.

[Image: Difference between containers and VMs]

Virtual machines

VMs include the application, the necessary binaries and libraries, and an entire guest operating system – all of which can amount to tens of GBs. VMs run on top of a physical machine using a hypervisor. The hypervisors themselves run on physical computers, referred to as “host machines”. The host machine provides the VM with resources, including RAM and CPU, which are divided among VMs. So if one VM is running a more resource-heavy application, more resources would be allocated to it than to the other VMs running on the same host machine.

The VM that is running on the host machine is also often called a “guest machine.”

This guest machine contains both the application and whatever it needs to run that application (e.g. system binaries, libraries). It also carries an entire virtualized hardware stack of its own, including virtualized network adapters, storage, and CPU — which means it in turn has its own full-fledged guest operating system. From the inside, the guest machine behaves as its own unit with its own dedicated resources. From the outside, we know that it’s a VM — sharing resources provided by the host machine.

Containers

For all intents and purposes, containers look like VMs. The *key* is that the underlying architecture is fundamentally different: containers *share* the host system’s kernel with other containers. The image above shows that containers package up just the user space, and not the kernel or virtual hardware like a VM does.

Each container gets its own isolated user space, allowing multiple containers to run on a single host machine, while all the operating-system-level architecture is shared across containers.

The only parts that are created from scratch are the bins and libs – this is what makes containers so lightweight and portable. Virtual machines are built the opposite way: they start with a full operating system and, depending on the application, developers may or may not be able to strip out unwanted components.

  • Basically, containers provide the same functionality that VMs provide, without any hypervisor overhead.
  • Containers are more lightweight than VMs, since they share the kernel with the host without hardware emulation (a hypervisor).
  • Docker is not a virtualization technology; it’s an application delivery technology.
  • A container is “just” a process – literally, a container is not “a thing”.
  • Containers use kernel features such as kernel namespaces and control groups (cgroups).
  • Kernel namespaces provide basic isolation, and cgroups are used for resource allocation.

Namespaces

  • Kernel namespaces provide basic isolation.
  • They guarantee that each container cannot see or affect other containers.
  • For example, with namespaces you can have multiple processes with the same PID in different environments (containers) – see the sketch after this list.
  • There are six types of namespaces available:
  1. pid (processes)
  2. net (network interfaces, routing…)
  3. ipc (System V IPC)
  4. mnt (mount points, filesystems)
  5. uts (hostname)
  6. user (UIDs)
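You can experiment with namespaces directly using the unshare tool from util-linux – a minimal sketch of PID isolation (requires root):

 $ sudo unshare --fork --pid --mount-proc bash   # start a shell in a new PID namespace
 $ ps aux                                        # inside, the shell appears as PID 1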

CGroups

  • cgroups (control groups) allocate resources and apply limits to the resources a process can use (memory, CPU, disk I/O) between containers.
  • They ensure that each container gets its fair share of memory, CPU, and disk I/O.
  • They also guarantee that a single container cannot over-consume resources – Docker exposes this directly, as shown below.
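Docker’s resource flags map straight onto these cgroup limits. A minimal sketch:

 $ docker run --memory=256m --cpus=0.5 nginx   # cap this container at 256 MiB of RAM and half a CPU core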

2. A Container (Process) can start up in one-twentieth of a second

Containers can be created much faster than virtual machines, because VMs must retrieve 10-20 GBs of an operating system from storage, while the workload in the container uses the host server’s operating system kernel, avoiding that step. According to Miles Ward, Google Cloud Platform’s Global Head of Solutions, a container (process) can start up in ~1/20th of a second, compared to a minute or so for a modern VM. When development teams adopt Docker, they add a new layer of agility and productivity to the software development lifecycle.

[Image: Docker catalog in Plesk Onyx]

Having that speed right in place allows a development team to get project code activated, to test code in different ways, or to launch additional e-commerce capacity on its website – all very quickly.

3. Containers have proven themselves on a massive scale

The world’s most innovative companies are adopting microservices architectures, where loosely coupled services form applications. For example, you might have your Mongo database running in one container and your Redis server in another, while your Node.js app runs in a third. With Docker, it’s become much easier to link these containers together to create your application, making it easy to scale or update components independently in the future.

According to InformationWeek, another example is Google. Google Search is the world’s biggest implementer of containers, which the company uses for internal operations. In running Google Search, it launches about 7,000 containers every second, which amounts to about 2 billion every week. The significance of containerization is that it creates a standard definition and a corresponding reference runtime – something industry players need in order to move containers between different clouds (Google, AWS, Azure, DigitalOcean, …), allowing containers to become the portability layer going forward.

Docker helped create a group called the Open Container Initiative, formed on June 22nd, 2015. The group exists to provide a standard format for container images and a specification for container runtimes. This helps avoid vendor lock-in and means your applications will be portable between many different cloud providers and hosts.

4. Containers are “lightweight”

As mentioned before, containers running on a single machine share the same operating system kernel – they start instantly and use less RAM. Docker, for example, has made it much easier for anyone – developers, sysadmins, and others – to take advantage of containers in order to quickly build and test portable applications. It allows anyone to package an application on their laptop, which in turn can run unmodified on any public cloud, private cloud, or even bare metal. The mantra is: “build once, run anywhere.”

[Image: Container architecture]

5. Docker has become synonymous with containers

Docker is rapidly changing the rules of the cloud and upending the cloud technology landscape, smoothing the way for microservices, open source collaboration, and DevOps. Docker is changing both the application development lifecycle and cloud engineering practices.

Stats:

  • 2B+ Docker Image Downloads
  • 2000+ contributors
  • 40K+ GitHub stars
  • 200K+ Dockerized apps
  • 240 Meetups in 70 countries
  • 95K Meetup members

Every day, lots of developers are happily testing or building new Docker-based apps with Plesk Onyx – understanding where the Docker fire is spreading is the key to staying competitive in an ever-changing world.

Web professionals understood that containers would be much more useful and portable if there was one way of creating them and moving them around, instead of a proliferation of container formatting engines. Docker, at the moment, is that de facto standard.

They’re just like shipping containers, as Docker’s CEO Ben Golub likes to say. Every trucking firm, railroad, and marine shipyard knows how to pick up and move the standard shipping container. Docker containers are welcomed the same way in a wide variety of computing environments.
6. Docker’s ambassadors: the Captains

Have you met the Docker Captains yet? There are over 67 of them right now, spread all over the world. Captains are Docker ambassadors (not Docker employees), and their genuine *love* of all things Docker has a huge impact on the community.

That can be blogging, writing books, speaking, running workshops, creating tutorials and classes, offering support in forums, or organizing and contributing to local events.

Here’s how you can follow all the Captains without having to navigate through over 67 web pages.

The Docker Community offers you the Docker basics, and lots of different ways to engage with other Docker enthusiasts who share a passion for virtual containers, microservices and distributed applications.

Got a cool Docker hack? Looking to organize, host or sponsor Docker meetups? Want to share your Docker story?

Get involved with the Docker Community here.

7. Alex Ellis – Docker Captain

I became a Docker Captain after being nominated by a Docker Inc. employee who had seen some of my training materials and my activity in the community, helping local developers in Peterborough to understand containers and how they fit into this shifting landscape of technology. The energy and enthusiasm of Docker’s team was what led me to start this journey on the Captains’ programme.

It’s all about raising up new leaders in the community to advocate the benefits of containers for software engineering. We also write and speak about exciting new features in the Docker ecosystem and make ourselves present at conferences, meet-up groups and in the marketplace. Start my self-paced, hands-on Docker tutorial here. If you have questions or want to talk, I’m on Twitter.

Thank you to Docker Captain Alex Ellis for co-authoring the introduction to this write-up and for providing feedback and technical insights on containers.

Be well, do good, and stay Plesky!

Cheers,
Jörg

Sources: Docker.com, Alex Ellis, Google Cloud Platform Blog, InformationWeek, Freecodecamp


Plesk on Docker


Docker has been a hot topic this year. Modern software should have the option to be installed as a Docker container. Not long ago, we created a Docker image for Plesk. This article describes how to install and use the Plesk Docker container.
