Plesk System Maintenance: How The Command Line Helps Administrators

In this article, we provide an overview of how to manage Plesk through the command line and execute scripts or binaries on certain Plesk events. In addition, you will learn how to adjust Plesk settings to fit a new network environment or server configuration, and how to restart Plesk to apply new settings.

Managing Plesk Objects Through the Command Line

The Plesk Command Line Interface (CLI) is designed for integrating Plesk with third-party applications. Plesk administrators can also use it to create, manage, and delete customer and domain accounts and other Plesk objects from the command line. CLI utilities require administrative permissions on the Plesk server.

The utilities reside in the following directories:

  • On RPM-based systems: /usr/local/psa/bin
  • On DEB-based systems: /opt/psa/bin

Upon successful execution, utilities return code 0. If an error occurs, they return code 1 and display the error details.
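For example, here is a minimal sketch that creates a customer account and checks the exit code. The login, name, and password are made up, and option names can vary between Plesk versions, so check the utility's --help output first:

/usr/local/psa/bin/customer --create jdoe -name "John Doe" -passwd "ChangeMe123!"
if [ $? -eq 0 ]; then
    echo "Customer account created"
else
    echo "Customer creation failed" >&2
fi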

Executing Custom Scripts on Plesk Events

Plesk provides a mechanism that allows administrators to track specific Plesk events and make Plesk execute custom scripts when these events occur. The events include operations that Plesk users perform on accounts, subscriptions, websites, service plans, and various Plesk settings. For example, you can save each added IP address to a log file or perform other routine operations.
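As a sketch of the IP-logging example, the handler below appends each added address to a log file. It assumes the "IP address created" event passes the new address in an environment variable named NEW_IP_ADDRESS; verify the variable name against the event parameters list for your Plesk version:

#!/bin/bash
# Hypothetical handler for the "IP address created" event.
# Plesk passes event parameters to handler scripts as environment variables.
echo "$(date '+%F %T') added IP: $NEW_IP_ADDRESS" >> /var/log/plesk-ip-additions.log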

Changing IP Addresses in Plesk

During the lifetime of a Plesk server, you may need to change the IP addresses employed by Plesk. Two typical cases when IP addresses may need to be changed are the following:

  • Reorganization of the server IP pool. For example, substitution of one IP address with another.
  • Relocation of Plesk to another server. Changing all addresses used by Plesk (including the one on which Plesk resides) to those on the new server.

Every time such a change happens, you should reconfigure all related system services. To help you do this promptly, Plesk provides the reconfigurator command line utility, located in the following directories:

  • On RPM-based systems: /usr/local/psa/bin
  • On DEB-based systems: /opt/psa/bin

The reconfigurator replaces IP addresses and modifies the Plesk and services configuration so that the system works properly after the replacement. To do this, the utility requires a mapping file that contains instructions on what changes to make. Each line of the file describes a single change. For example, the following line instructs Plesk to change the IP address 192.168.50.60 to 192.168.50.61:

eth0:192.168.50.60 255.255.255.0 -> eth0:192.168.50.61 255.255.255.0

The utility also helps you create the mapping file. If you call the utility with a new file name as an option, it creates the file and writes all available IP addresses to it, each mapped to itself. To perform a change, modify the change instruction for the IP address in question.

When editing the mapping file, consider the following:

  • A replacement IP address must not exist in the Plesk IP pool before the change; however, it may be in the server IP pool. To make sure the IP is not in the Plesk IP pool, go to the Server Administration Panel > Tools & Settings > IP Addresses and remove the IP if necessary.
  • If a replacement IP address does not exist in the server IP pool, the utility adds it to both Plesk and server IP pools.

To change IP addresses used by Plesk:

  1. Generate a mapping file with the current Plesk IP addresses by running the command:
     ./reconfigurator <ip_map_file_name>
  2. Edit the file as described above and save it.
  3. Reconfigure Plesk and its services by running the same command one more time:

./reconfigurator <ip_map_file_name>
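Putting the steps together, here is a sketch of a single-address swap using the example addresses from above. The occurrence flag 2 in the sed expression rewrites only the right-hand side of the mapping line, leaving the source address intact:

cd /usr/local/psa/bin
./reconfigurator ip_map                              # step 1: generate the mapping file
sed -i 's|192\.168\.50\.60|192.168.50.61|2' ip_map   # step 2: point the old address to the new one
./reconfigurator ip_map                              # step 3: apply the change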

Changing Paths to Services

Plesk uses various external components, for example, Apache web server, mail service, antivirus, and so on. When interacting with these components, Plesk gets the information on their locations from the configuration file /etc/psa/psa.conf.

The Plesk configuration file provides an easy way of reconfiguring Plesk if a service is installed in another directory or moved from the current partition to another. Note that you can only modify paths present in this file; other paths are hard-coded in Plesk components.

Each line of psa.conf has the following format:

<variable_name> <value>

A sample part of the psa.conf file is displayed below. To change a path to a service, utility, or package, specify the new path as a value of a corresponding variable.

# Plesk tree
PRODUCT_ROOT_D /usr/local/psa

# Directory of SysV-like Plesk initscripts
PRODUCT_RC_D /etc/init.d

# Directory for config files
PRODUCT_ETC_D /usr/local/psa/etc

# Directory for service utilities
PLESK_LIBEXEC_DIR /usr/lib/plesk-9.0

# Virtual hosts directory
HTTPD_VHOSTS_D /var/www/vhosts

# Apache configuration files directory
HTTPD_CONF_D /etc/httpd/conf

# Apache include files directory
HTTPD_INCLUDE_D /etc/httpd/conf.d

# Apache binary
HTTPD_BIN /usr/sbin/httpd

# Apache log files directory
HTTPD_LOG_D /var/log/httpd

# Apache startup script
HTTPD_SERVICE httpd

# Qmail directory
QMAIL_ROOT_D /var/qmail

Note: Be very careful when changing the contents of psa.conf. Mistakes in paths specified in this file may lead to Plesk malfunctioning.
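Because Plesk reads these locations from psa.conf, administration scripts can look the paths up instead of hard-coding them. A minimal sketch that extracts the virtual hosts directory (awk prints the second field of the matching line):

VHOSTS_DIR=$(awk '$1 == "HTTPD_VHOSTS_D" {print $2}' /etc/psa/psa.conf)
echo "$VHOSTS_DIR"    # prints /var/www/vhosts on a default installation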

Restarting Plesk

If you experience problems with Plesk, for example, malfunctioning of a service, you can try to resolve them by restarting Plesk or the administrative web server sw-cp-server. Also, a restart is necessary to apply configuration changes that cannot be made while Plesk is running.

To restart Plesk, run the following command:

/etc/init.d/psa restart

To restart sw-cp-server, run the following command:

/etc/init.d/sw-cp-server restart

Managing Services from the Command Line and Viewing Service Logs

Here we explain how to stop, start, and restart services managed by the Panel, and how to access their logs and configuration files.

Plesk web interface

To start the service through the command line:

/etc/init.d/psa start

To stop the service through the command line:

/etc/init.d/psa stop

To restart the service through the command line:

/etc/init.d/psa restart

Plesk log files are located in the following directories:

  • Error log: /var/log/sw-cp-server/error_log
  • Access log: /var/log/plesk/httpsd_access_log

Panel configuration files are the following:

  • php: $PRODUCT_ROOT_D/admin/conf/php.ini
  • www: /etc/sw-cp-server/applications.d/plesk.conf

Presence Builder

Log files are located in:

  • Error log: /var/log/httpd/sitebuilder_error.log
  • Logs: /usr/local/sitebuilder/tmp/

Configuration files are accessible at:

  • /usr/local/sitebuilder/config
  • /usr/local/sitebuilder/etc/php.ini

phpMyAdmin

The error log is located in: /var/log/sw-cp-server/error_log

The configuration file is accessible at: /usr/local/psa/admin/htdocs/domains/databases/phpMyAdmin/libraries/config.default.php

phpPGAdmin

The error log is located in: /var/log/sw-cp-server/error_log

The configuration file is accessible at: /usr/local/psa/admin/htdocs/domains/databases/phpPgAdmin/conf/config.inc.php

Courier-IMAP

To start the service through the command line:

/etc/init.d/courier-imap start

To stop the service through the command line:

/etc/init.d/courier-imap stop

To restart the service through the command line:

/etc/init.d/courier-imap restart

Log files are located in: /var/log/plesk/maillog

Configuration files are accessible at:

  • /etc/courier-imap/imapd
  • /etc/courier-imap/imapd-ssl
  • /etc/courier-imap/pop3d
  • /etc/courier-imap/pop3d-ssl

DNS / Named / BIND

To start the service through the command line:

/etc/init.d/named start

To stop the service through the command line:

/etc/init.d/named stop

To restart the service through the command line:

/etc/init.d/named restart

Log files are located in: /var/log/messages

The configuration file is accessible at: /etc/named.conf

FTP (ProFTPD)

Log files are located in: /var/log/plesk/xferlog

Configuration files are accessible at:

  • /etc/xinetd.d/ftp_psa
  • /etc/proftpd.conf
  • /etc/proftpd.include

Postfix

To start the service through the command line:

/etc/init.d/postfix start

To stop the service through the command line:

/etc/init.d/postfix stop

To restart the service through the command line:

/etc/init.d/postfix restart

Log files are located in: /var/log/plesk/maillog

Configuration files are accessible at: /etc/postfix/

QMail

To start the service through the command line:

/etc/init.d/qmail start

To stop the service through the command line:

/etc/init.d/qmail stop

To restart the service through the command line:

/etc/init.d/qmail restart

Log files are located in: /var/log/plesk/maillog

Configuration files are accessible at:

  • /etc/xinetd.d/smtp_psa
  • /etc/xinetd.d/smtps_psa
  • /etc/xinetd.d/submission_psa
  • /etc/inetd.conf (on Debian systems)
  • /var/qmail/control/

SpamAssassin

To start the service through the command line:

/etc/init.d/psa-spamassassin start

To stop the service through the command line:

/etc/init.d/psa-spamassassin stop

To restart the service through the command line:

/etc/init.d/psa-spamassassin restart

Log files are located in: /var/log/plesk/maillog

Configuration files are accessible at:

  • /etc/mail/spamassassin/
  • /etc/mail/spamassassin/local.cf
  • /var/qmail/mailnames/%d/%l/.spamassassin

Kaspersky antivirus

To start the service through the command line:

service kavehost start

To stop the service through the command line:

service kavehost stop

To restart the service through the command line:

service kavehost restart

Log files are located in:

  • /var/log/maillog
  • /var/log/mail.log

Configuration files are accessible at:

/opt/kav/sdk8l3/etc

Odin Premium Antivirus

To start the service through the command line:

/etc/init.d/drwebd start

To stop the service through the command line:

/etc/init.d/drwebd stop

To restart the service through the command line:

/etc/init.d/drwebd restart

Log files are located in: /var/log/plesk/maillog

Configuration files are accessible at: /etc/drweb/

Tomcat

To start the service through the command line:

/etc/init.d/tomcat5 start

To stop the service through the command line:

/etc/init.d/tomcat5 stop

To restart the service through the command line:

/etc/init.d/tomcat5 restart

Log files are located in: /var/log/tomcat5/

Configuration files are accessible at: /usr/share/tomcat5/conf/

MySQL

To start the service through the command line:

/etc/init.d/mysqld start

To stop the service through the command line:

/etc/init.d/mysqld stop

To restart the service through the command line:

/etc/init.d/mysqld restart

Log file is located in: /var/log/mysqld.log

The configuration file is accessible at: /etc/my.cnf

PostgreSQL

To start the service through the command line:

/etc/init.d/postgresql start

To stop the service through the command line:

/etc/init.d/postgresql stop

To restart the service through the command line:

/etc/init.d/postgresql restart

Startup log is located in: /var/lib/pgsql/pgstartup.log

The configuration file is accessible at: /var/lib/pgsql/data/postgresql.conf

xinetd

To start the service through the command line:

/etc/init.d/xinetd start

To stop the service through the command line:

/etc/init.d/xinetd stop

To restart the service through the command line:

/etc/init.d/xinetd restart

Log files are located in: /var/log/messages

The configuration file is accessible at: /etc/xinetd.conf

Watchdog (monit)

To start the service through the command line:

/usr/local/psa/admin/bin/modules/watchdog/wd --start

To stop the service through the command line:

/usr/local/psa/admin/bin/modules/watchdog/wd --stop

To restart the service through the command line:

/usr/local/psa/admin/bin/modules/watchdog/wd --restart

Log files are located in:

  • /var/log/plesk/modules/watchdog/log/wdcollect.log
  • /var/log/plesk/modules/watchdog/log/monit.log

Configuration files are accessible at:

  • /usr/local/psa/etc/modules/watchdog/monitrc
  • /usr/local/psa/etc/modules/watchdog/wdcollect.inc.php

Watchdog (rkhunter)

Log is located in: /var/log/rkhunter.log

The configuration file is accessible at: /usr/local/psa/etc/modules/watchdog/rkhunter.conf

Apache

To start the service through the command line:

/etc/init.d/httpd start

To stop the service through the command line:

/etc/init.d/httpd stop

To restart the service through the command line:

/etc/init.d/httpd restart

Log files are located in:

  • /var/log/httpd/
  • /var/www/vhosts/<domain_name>/statistics/logs/

Configuration files are accessible at:

  • /etc/httpd/conf/httpd.conf
  • /etc/httpd/conf.d/
  • /var/www/vhosts/<domain_name>/conf/httpd.include

Mailman

To start the service through the command line:

/etc/init.d/mailman start

To stop the service through the command line:

/etc/init.d/mailman stop

To restart the service through the command line:

/etc/init.d/mailman restart

Log files are located in: /var/log/mailman/

Configuration files are accessible at:

  • /etc/httpd/conf.d/mailman.conf
  • /usr/lib/mailman/Mailman/mm_cfg.py
  • /etc/mailman/sitelist.cfg

Webalizer

To start the service through the command line:

/usr/local/psa/bin/sw-engine-pleskrun /usr/local/psa/admin/plib/DailyMaintainance/script.php

Configuration files are accessible at:

/var/www/vhosts/<domain_name>/conf/webalizer.conf

AWstats

To start the service through the command line:

/usr/local/psa/bin/sw-engine-pleskrun /usr/local/psa/admin/plib/DailyMaintainance/script.php

Configuration files are accessible at:

/usr/local/psa/etc/awstats/

Backup Manager

Backup logs are located in:

  • /var/log/plesk/PMM/sessions/<session>/psadump.log
  • /var/log/plesk/PMM/sessions/<session>/migration.log
  • /var/log/plesk/PMM/logs/migration.log
  • /var/log/plesk/PMM/logs/pmmcli.log

Restore logs are located in:

  • /var/log/plesk/PMM/rsessions/<session>/conflicts.log
  • /var/log/plesk/PMM/rsessions/<session>/migration.log
  • /var/log/plesk/PMM/logs/migration.log
  • /var/log/plesk/PMM/logs/pmmcli.log

The configuration file is accessible at:

/etc/psa/psa.conf

Plesk Migration Manager

Migration logs are located in:

  • /var/log/plesk/PMM/msessions/<session>/migration.log
  • /var/log/plesk/PMM/rsessions/<session>/migration.log
  • /var/log/plesk/PMM/rsessions/<session>/conflicts.log
  • /var/log/plesk/PMM/logs/migration.log
  • /var/log/plesk/PMM/logs/pmmcli.log
  • /var/log/plesk/PMM/logs/migration_handler.log

Horde

Log is located in:

/var/log/psa-horde/psa-horde.log

Configuration files are accessible at:

  • Apache configuration:
    • /etc/httpd/conf.d/zzz_horde_vhost.conf
    • /etc/psa-webmail/horde/conf.d/
  • Horde configuration:
    • /etc/psa-webmail/horde/

Atmail

Log files are located in:

/var/log/atmail/

Configuration files are accessible at:

  • Apache configuration
    • /etc/httpd/conf.d/zzz_atmail_vhost.conf
    • /etc/psa-webmail/atmail/conf.d/
  • Atmail configuration:
    • /etc/psa-webmail/atmail/atmail.conf
    • /var/www/atmail/libs/Atmail/Config.php

psa-firewall

To start the service through the command line:

/etc/init.d/psa-firewall start

To stop the service through the command line:

/etc/init.d/psa-firewall stop

To restart the service through the command line:

/etc/init.d/psa-firewall restart

Configuration files are accessible at:

  • /usr/local/psa/var/modules/firewall/firewall-active.sh
  • /usr/local/psa/var/modules/firewall/firewall-emergency.sh
  • /usr/local/psa/var/modules/firewall/firewall-new.sh

psa-firewall (IP forwarding)

To start the service through the command line:

/etc/init.d/psa-firewall-forward start

To stop the service through the command line:

/etc/init.d/psa-firewall-forward stop

To restart the service through the command line:

/etc/init.d/psa-firewall-forward restart

Configuration files are accessible at:

  • /usr/local/psa/var/modules/firewall/ip_forward.active
  • /usr/local/psa/var/modules/firewall/ip_forward.saved

Moving the Plesk GUI to a Separate IP Address

By default, the Plesk GUI can work on all IP addresses available on the Plesk server (from the server’s IP pool). You may want to allow access to the Plesk GUI only from the local network. For that, you should move the GUI to an internal IP address.

To move the Plesk GUI to a separate IP address, open the configuration file /etc/sw-cp-server/conf.d/plesk.conf and replace the lines

listen 8443 ssl;
listen 8880;

with the lines

listen SPECIFIC_SERVER_IP:8443 ssl;
listen SPECIFIC_SERVER_IP:8880;

where SPECIFIC_SERVER_IP is the new IP address that you want to use for the Plesk GUI.

Do not change the ports.
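For the new listen directives to take effect, restart the administrative web server using the command shown in the Restarting Plesk section:

/etc/init.d/sw-cp-server restart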

Switching Off Automatic Integration of WordPress Installations

If you are using the WordPress Toolkit extension, it detects new installations performed through the Application Catalog (or Application Vault) and integrates them with WordPress Toolkit. Because of this detection, installing WordPress on a site can take up to 20 seconds. If you want to avoid this, you can switch off automatic detection of new installations by the WordPress Toolkit.

To do this, add the following lines to the panel.ini file:

[ext-wp-toolkit]
autoAttachApsInstances = off
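A note on where panel.ini lives: on Linux servers it is typically /usr/local/psa/admin/conf/panel.ini (an assumption to verify for your installation; create the file if it is absent). A sketch that appends the section; if the [ext-wp-toolkit] section already exists, add the line to it instead:

cat >> /usr/local/psa/admin/conf/panel.ini <<'EOF'
[ext-wp-toolkit]
autoAttachApsInstances = off
EOF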

Turning Off WordPress Toolkit

If you are using the WordPress Toolkit extension, you can completely switch it off on your server.

To switch off WordPress Toolkit, add the following lines to the panel.ini file:

[ext-wp-toolkit]
enabled = off

MySQL Performance Tuning

Many databases, and particularly relational databases, rely on Structured Query Language (SQL) for data storage, manipulation, and retrieval. Developers who want to create, update, or delete data have always been able to do so easily with SQL statements. That said, the sheer amount of data being shunted around keeps growing at an alarming rate, and workloads are always changing too. So while SQL statements are useful, there is an ongoing and pressing need for MySQL performance tuning. The swift and efficient movement and processing of data is crucial if we hope to deliver an excellent end-user experience while keeping costs as low as possible.

So, developers who want to seek out and eliminate hold-ups and inefficient operations must turn to MySQL performance tuning tools, which can help them with execution plans and remove the guesswork. MySQL performance tuning may be important, but it isn't necessarily easy. In fact, a few aspects of the process make it a difficult undertaking. MySQL optimization requires sufficient technical prowess, that is, enough knowledge and skill to comprehend and create a variety of execution plans, and that can be quite off-putting.

As if being tricky weren't enough, MySQL optimization also takes time and energy. When you're faced with a whole array of SQL statements to wade through, you face a problem with a built-in degree of uncertainty. Each statement needs to be considered carefully during MySQL performance tuning. First, you need to decide which ones to amend and which to leave alone; then you need to work out what approach to take with each one you select, because each requires a different approach depending on its function. That's why we are going to discuss several tips and techniques that will help you approach MySQL performance tuning without getting snowed under by the sheer weight of statements.

The Benefits of MySQL Optimization

MySQL performance tuning is essential for keeping costs down. If you can use the right-sized server for the job, then you won’t be paying for more than you need, and if you can understand whether moving data storage or adding extra server capacity will lead to MySQL performance improvements then that helps efficiency too. MySQL tuning can be challenging, but it’s worth the time that it takes because an optimized database has greater responsiveness, delivers better performance, and offers better functionality.

MySQL Query Optimization Guidelines

Here are some useful tips for MySQL tuning. They are a great addition to your collection of best practices.

Make sure that the predicates in WHERE, JOIN, ORDER BY, and GROUP BY clauses are all indexed. WebSphere Commerce points out that SQL performance can be improved significantly by predicate indexing, because failing to do so can result in table scans that culminate in locking and other difficulties. This is why we highly recommend indexing all predicate columns for better MySQL optimization, as in the sketch below.
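The sketch uses a hypothetical orders table: index the predicate column, then let EXPLAIN confirm that the query uses the index instead of a table scan:

-- Index the column referenced in the WHERE predicate:
CREATE INDEX idx_orders_customer ON orders (customer_id);

-- EXPLAIN reports whether idx_orders_customer is used:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;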

Keep functions out of predicates

The database won't use an index on a column if the predicate applies a function to that column.

For instance:

SELECT * FROM TABLEONE WHERE UPPER(COLONE) = 'ABC'

The UPPER() function means that the database won't use the index on COLONE. If the function can't be avoided in the SQL, you'll need to create a function-based index or add a custom column to the database to get improved MySQL performance, as sketched below.
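Here is a sketch of that workaround, reusing the table from the example above. Stored generated columns require MySQL 5.7 or later; MySQL 8.0.13 and later can also index the expression directly with a functional index:

-- Materialize the function result in a generated column and index it
-- (the VARCHAR(255) type is an assumption about COLONE's width):
ALTER TABLE TABLEONE
    ADD COLUMN COLONE_UPPER VARCHAR(255)
        GENERATED ALWAYS AS (UPPER(COLONE)) STORED;
CREATE INDEX IDX_COLONE_UPPER ON TABLEONE (COLONE_UPPER);

-- The rewritten query can now use the index:
SELECT * FROM TABLEONE WHERE COLONE_UPPER = 'ABC';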

Remove non-essential columns with the SELECT clause

Rather than using 'SELECT *', always specify the columns in the SELECT clause, because unneeded columns add extra load on the database, hindering its performance and causing knock-on effects for the whole system.
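For instance, if the application only needs two columns, name them (the column names here are hypothetical):

-- Instead of SELECT *, fetch only what the application uses:
SELECT COLONE, COLTWO FROM TABLEONE WHERE COLTHREE = 10;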

Try not to use a wildcard (%) at the start of a predicate

A predicate such as LIKE '%abc' forces a full table scan, for example:

SELECT * FROM TABLEONE WHERE COLONE LIKE '%ABC'

This kind of wildcard use can slow down performance significantly.
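By contrast, a trailing wildcard anchors the leading characters, so an index on the column can still be used:

-- Can use an index on COLONE because the prefix 'ABC' is fixed:
SELECT * FROM TABLEONE WHERE COLONE LIKE 'ABC%';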

Use INNER JOIN instead of OUTER JOIN where you can

Only use an outer join where you absolutely need to. Using it when you don't need to puts the brakes on database performance through slower execution of SQL statements, and it works against MySQL optimization.
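A common case, sketched with hypothetical tables: an outer join whose WHERE clause filters the right-hand table rejects the unmatched (NULL) rows anyway, so it behaves as an inner join and is better written as one:

-- The WHERE clause discards rows where customers matched nothing...
SELECT o.id, c.name
FROM orders o
LEFT JOIN customers c ON c.id = o.customer_id
WHERE c.country = 'US';

-- ...so the query is equivalent to, and clearer as, an INNER JOIN:
SELECT o.id, c.name
FROM orders o
INNER JOIN customers c ON c.id = o.customer_id
WHERE c.country = 'US';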

Use UNION and DISTINCT only where needed

If you use the UNION and DISTINCT operators when other options are available, you'll be adding needless duplicate elimination, which slows down SQL performance. Try using UNION ALL instead for better MySQL performance.
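The difference in two statements, with a hypothetical TABLETWO assumed to have a matching column:

-- UNION removes duplicates, which costs extra sorting/temporary-table work:
SELECT COLONE FROM TABLEONE UNION SELECT COLONE FROM TABLETWO;

-- UNION ALL simply concatenates the two result sets:
SELECT COLONE FROM TABLEONE UNION ALL SELECT COLONE FROM TABLETWO;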

Use ORDER BY only when you need sorted results

ORDER BY sorts the result set by the specified columns. Although this is convenient for database admins who want sorted data, it is detrimental to MySQL performance: to produce the final result set, the query must sort the data first, which is a fairly convoluted and resource-intensive SQL operation.
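When sorted output is genuinely required, sorting on an indexed column lets MySQL read rows in index order and skip the sort step; a sketch with hypothetical names:

CREATE INDEX idx_orders_created ON orders (created_at);

-- Reads the newest ten rows straight from the index, avoiding a filesort:
SELECT id, created_at FROM orders ORDER BY created_at DESC LIMIT 10;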

Don’t Use MySQL as a Queue

Queues can sneak up on you and slow down your database performance. For example, any time you set a status on a specific item so that a 'relevant process' can gain access to it, you are creating a queue without knowing it. This adds pointless extra load to every access to the resource.

Queues are a problem for two reasons: they cause your workload to be processed serially instead of in parallel, and they frequently lead to a table that mixes work in progress with data from jobs that have already been completed. Both slow down the app and hinder MySQL performance tuning.

The Four Fundamental Resources

A properly functioning database requires four fundamental resources: CPU, disk, memory, and network. Problems with any one of them will negatively affect the database, so it's important to choose the right hardware and make sure it's all functioning properly. In practical terms, this means that if you're going to invest in a powerful CPU, don't try to cut corners with less memory or slower storage. A setup is only as good as its slowest component, and if the components aren't at parity, the result will be MySQL performance bottlenecks. Investing in more memory is probably the most cost-effective way of improving performance, as memory is inherently faster than disk-based storage. If all operations can be held in memory without resorting to disk, processes speed up considerably.

Pagination Queries

Applications that paginate tend to slow the server down. By presenting a results page with a link to the next one, these apps usually approach grouping and sorting in ways that can't use indexes, relying on a LIMIT clause with an offset that forces the server to generate and then discard rows.

Adjusting the user interface itself assists with optimization. Instead of listing all pages in the results and linking to each one, you can include just a link to the next page. This also stops users wasting time on irrelevant pages.

In terms of queries, rather than using LIMIT with an offset, you can select one more row than you need, and when someone clicks the 'next page' link, you can set that last row as the start of the next set of results. For example, if the user looked at a page with rows 201 to 220, select row 221 as well; to render the next page, query the server for rows greater than or equal to 221 with a limit of 21.
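In SQL, the two approaches from the paragraph above look like this (table and column names are hypothetical):

-- Offset pagination: the server reads 220 rows and discards the first 200
SELECT id, title FROM articles ORDER BY id LIMIT 20 OFFSET 200;

-- Keyset pagination: the previous page also fetched row 221, so the next
-- page starts there; fetching 21 rows again shows whether more pages exist
SELECT id, title FROM articles WHERE id >= 221 ORDER BY id LIMIT 21;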

MySQL Optimization—Subqueries

In terms of subqueries, it’s better to use a join where you can, at least in current versions of MySQL.

The optimizer team is doing a lot of work on subqueries, so subsequent versions of MySQL may ship with additional subquery optimizations. It's worth keeping an eye on which optimizations end up in each version and what their effects are. In other words, my advice to err towards a join may not hold forever. Servers are getting more and more intelligent, and the cases where you have to tell them how to do something, rather than what results to return, are shrinking.
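A typical rewrite with hypothetical tables, assuming customers.id is unique so the two forms return the same rows:

-- Subquery form:
SELECT * FROM orders
WHERE customer_id IN (SELECT id FROM customers WHERE country = 'US');

-- Join form, generally friendlier to older MySQL optimizers:
SELECT o.*
FROM orders o
INNER JOIN customers c ON c.id = o.customer_id
WHERE c.country = 'US';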

Use Memcached for MySQL Caching

Memcached is a distributed memory caching system that improves the speed of websites backed by big dynamic databases. It does this by keeping database objects in RAM, cutting server load whenever an outside data source requests a read. A Memcached layer reduces the number of requests the database has to serve.

Memcached stores each value (v) under a key (k) and retrieves it without parsing any database query, which makes the process much more streamlined.
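You can watch this key-value model in action by speaking the memcached text protocol by hand. The sketch below assumes a memcached instance on 127.0.0.1:11211 and the nc (netcat) utility; it stores a five-byte value under a key for 300 seconds and reads it back:

# set <key> <flags> <ttl> <bytes>, then the data block, then a get:
printf 'set user:42 0 300 5\r\nhello\r\nget user:42\r\nquit\r\n' | nc 127.0.0.1 11211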

Conclusion

MySQL tuning (as well as tuning of MariaDB) may be time-consuming and demanding, but it's one of the hurdles you need to take in your stride if you want your users to receive the best possible experience. Poor database performance can certainly benefit from investing in the best hardware and keeping it balanced, but even with the best CPUs and the fastest memory and SSDs on the market, there is still an additional performance improvement to be had from taking the time to implement proper MySQL optimization. It can be a laborious task for developers, but the performance enhancements and efficiency savings are well worth it. Keep these tips close to hand and refer to them often. They're not all-encompassing, but they are a handy starting point for your journey into MySQL tuning.

5 Essential Practices to Unlock Your Staging Environment’s Full Potential

A software development lifecycle consists of many environments, from the developers writing the code to the end users getting the product. In this post, we're going to discuss the staging environment: its importance, best practices, limitations, and alternatives.

Staging is a replica of the production environment: the code runs on a server with the same architecture as production rather than on a local machine. Because the application runs under production-like conditions, we can look out for bugs and issues. Adjustments and polishing happen in this phase, before the product goes to production. This environment is also useful for showing the client a live demo.

Why Use a Staging Environment?

Skipping staging is easy; we have seen many startups and big companies do it. But are we really ready to face the losses that skipping this step brings? There are arguments that a functional testing framework can help in removing bugs and issues. But can two or three people manually scripting these tests account for every possibility and iteration?

End users have almost zero patience for poorly performing apps, so we need to provide them with the best possible product. Staging is essential for having confidence in the code we write, the product we supply, and the infrastructure we provide. With a staging environment, we can exercise real interactions, making it easier to test the countless iterations and possibilities.

A staging environment is essential for creating sophisticated, valuable software and giving clients value for both their time and money.

Tests Performed On a Staging Environment

Two main tests are performed in staging to eliminate bugs and issues to the maximum extent:

  • Smoke Test: New builds added to existing software undergo smoke testing. The main aim is to check whether the major functionalities work correctly. After smoke testing, we decide whether to send the software for further testing or revert it to the developers for more debugging.
  • User Acceptance Testing (UAT): Developers code the software based on their understanding of the requirements, but does it include everything the client wants? User acceptance testing in a staging environment helps answer that question. The end user or the client performs this test to see whether their requirements are met without compromises or drawbacks.

Staging Environment: Best Practices to Follow

Let's be honest: a staging environment costs more to set up. And without discipline, instead of an excellent staging setup, things get out of hand pretty quickly. We must bring the staging environment as close as possible to the production environment to avoid chaos.

The following are some of the fundamental practices that unlock the staging environment’s power to its full potential.

1. Staging Should Be an Exact Replica of Production:

The value of a staging environment depends on how well it matches the production environment. We must make sure that every build and release goes through it. A mismatch between the staging and production configurations will sooner or later lead to catastrophic results.

For example, consider a newly developed build that goes into the staging environment. It gets clearance, and we deploy the code to production without a second thought. Suddenly there is a complete outage in the product, and we don't know why. The answer: the configurations and environments didn't match.

Can our staging environment currently hold up under the real-time traffic that production receives? Does our staging environment have the same set of systems and services as production? These are the most critical questions to ask ourselves to gauge the value of a staging environment. If the answer is yes, we are good to go.

2. Use Data to Test Iterations and Possibilities:

How many times have we seen empty tables in the staging area? Countless. Empty tables give us no information about the user experience. We can resort to dummy data, with which we can test some, but not all, iterations and possibilities. The quantity and quality of the data available for testing in a staging environment are crucial.

When testing teams work with dummy data, their capacity is limited because they only have access to dummy test accounts. We can add a whole new dimension by bringing in a real user and having them execute tasks on the product directly. This adds a lot of clarity to the process.

We can also release code into staging on a daily or weekly basis. We can tackle data quality by making the staging product primary for some real-life tasks, channeling them through staging rather than the production version. Since builds and releases are updated daily or weekly, users can try out new features, enhancements, and bug fixes there.

This approach may not generate the load that production usually gets. Still, we benefit from the variety of use cases that are constantly triggered in staging, which mirrors production, so we can easily trace high-impact issues and bugs in the software.

3. Constant Monitoring and Updating:

To be the closest possible replica of production, staging needs to be monitored and updated at all times with extreme attentiveness. Every new build, release, and update goes through staging before entering production, which makes monitoring and updating it very important. Even the tiniest changes should pass through staging.

As the previous practice makes apparent, we may have to push code into staging regularly. Monitoring helps us observe the patterns and errors present in the product, giving a clear idea of what has to be improved and what has to be maintained.

When we provide users with regular updates, we get an enormous number of use cases triggered spontaneously and simultaneously. We must identify the significant issues that pop up and eliminate them before the product goes into production and causes an outage.

However, we must be careful not to jeopardize critical user data in the staging environment. For example, the emails and personal details of real users must not be mixed up with staging data.

If you’re looking for more resources on monitoring, updates, and performance, check out Part 1 and Part 2 of our DevOps Cycle series.

4. Don’t Hurry Through the Staging Area:

In some companies, a project developed over six months gets rushed through staging in a matter of days. This leads to insufficient testing, which lowers the value of the staging environment. The testing team should be given enough time so that products ship with fewer issues.

Problems like data corruption and data leakage often take time to show up, so rushing through staging lets them slip into the production environment, where they can lead to a complete outage of the application. We can avoid these catastrophic problems by giving the staging environment enough time.

5. Use Performance Metrics:

We must test against as many performance parameters as possible, including the chaotic ones. When we deploy the product, we may encounter surprises and chaotic situations like crashing servers, DoS attacks, and network outages. We can benefit a lot by including these elements of surprise in our testing framework.

Even though not all parameters can be replicated in practice, we must ensure that we have tested the application against the maximum possible range of scenarios.

Don’t Forget the Limitations

Like everything else, a staging environment has its limitations and drawbacks. These occur mainly due to the limited scale of the environment and mismatches with production. If the staging configuration does not match production, there is a chance of pushing buggy code into production, leading to many problems. Double-checking the settings before deployment helps us overcome this limitation.

Even when we replicate production exactly, it is impossible to load-test staging with production traffic, and this difference may produce slight turbulence when we release the product. Also, due to the limited scale of the staging environment, signs of data corruption and data leakage may show up late, which is bad if we have already deployed the code thinking there are no issues.

To sum it up, pushing code directly from development to deployment creates uncertainty, and that uncertainty puts a company's reputation and value at risk. A staging environment safely eliminates this risk; its importance easily outweighs its limitations. We must ensure the best user experience by keeping the staging environment a very close replica of production.

So, how closely does your staging environment match your production environment? Do you have any suggestions we missed for making staging more efficient? Let us know in the comments below!