NGINX vs Apache – Which Is the Best Web Server in 2020?

NGINX vs Apache – which server is superior? NGINX and Apache are two of the most widely used open source web servers in the world, together handling more than half of the internet’s total traffic. Each is designed to handle different workloads and to complement various types of software, forming part of a comprehensive web stack.

But which is best for you? They may be similar in many ways, but they’re not identical. Each has its own advantages and disadvantages, so it’s crucial that you know when one is a better solution for your goals than the other.

In this in-depth guide, we explore how these servers compare in multiple crucial ways, from connection handling architecture to modules and beyond.

First, though, let’s look at the basics of both Nginx and Apache before we take a deeper dive.


NGINX Outline

NGINX was born out of a grueling benchmark: enabling a single server to handle 10,000 client connections all at the same time. It uses an asynchronous, event-driven architecture to cope with this prodigious load. That design means it can take high loads, and loads that vary wildly, all in its stride, with predictable RAM usage, CPU usage, and latency for greater efficiency.

NGINX is the brainchild of Igor Sysoev, who began working on it in 2002 as a solution to the C10K problem – the difficulty web servers faced in handling thousands of connections at the same time. He released it publicly in 2004, and this early iteration achieved its objective through an event-driven, asynchronous architecture.

Since its public release, NGINX has remained a popular choice, thanks to its light use of resources and its ability to scale easily even on minimal hardware. As fans will testify, NGINX excels at serving static content quickly and efficiently, because it was designed to pass dynamic requests on to other software better suited to that specific purpose.

Administrators tend to choose NGINX because of this resource efficiency and responsiveness.


Apache Outline

Robert McCool is credited with creating the Apache HTTP Server back in 1995, and since 1999 it has been managed and maintained by the Apache Software Foundation. Apache HTTP Server is generally known simply as “Apache”, the HTTP web server being the foundation’s initial – and most popular – project.

Since 1996, Apache has been recognized as the internet’s most popular server, which has led to Apache receiving considerable integrated support and documentation from subsequent software projects. Administrators usually select Apache because of its power, wide-ranging support, and considerable flexibility.

It can be extended with its dynamically loadable module system, and is capable of processing various interpreted languages with no need to connect to external software.

Apache vs NGINX – Handling Connections

One of the most significant contrasts between Nginx and Apache is their respective connection- and traffic-handling capabilities.

As NGINX was released after Apache, the team behind it had greater awareness of the concurrency issues plaguing sites at scale. The NGINX team used this knowledge to build NGINX from scratch around a non-blocking, asynchronous, event-driven connection-handling algorithm. NGINX spawns worker processes, each capable of handling many connections – even thousands – courtesy of a fast event loop that continuously watches for events and processes them. Because the actual work is decoupled from the connections, each worker only attends to a connection when a new event for it fires.

Every connection handled by a worker sits in the event loop alongside numerous others. Events inside the loop are processed asynchronously, so work is handled in a non-blocking way, and whenever a connection closes, it is removed from the loop. NGINX can scale extremely far even with limited resources thanks to this form of connection processing: since the server doesn’t spawn a process for every new connection, CPU and memory utilization remains fairly consistent even during periods of heavy load.
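This worker-and-event-loop model is exposed directly in nginx.conf. A minimal, illustrative tuning (the numbers here are examples, not recommendations) looks like this:

```nginx
# Spawn one worker process per CPU core; each worker runs its own event loop.
worker_processes auto;

events {
    # Maximum simultaneous connections each worker's event loop will track.
    worker_connections 4096;
}
```

With this configuration, a four-core machine can nominally track around 16,000 concurrent connections without spawning a single extra process.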

Apache offers a number of multi-processing modules. These are also known as MPMs, and are responsible for determining how to handle client requests. This enables administrators to switch its connection handling architecture simply, quickly, and conveniently.

So, what are these modules?

mpm_prefork

This Apache module creates processes, each with a single thread, to handle requests; every child process accommodates one connection at a time. As long as the volume of requests stays below the number of processes, this module performs extremely quickly.

But performance drops sharply once the number of requests exceeds the number of processes, which means this module isn’t always the right option.

Every process under this module has a significant RAM footprint too, which makes effective scaling difficult. However, it can still be a solid choice when used alongside components that weren’t built with threads in mind. For example, as PHP is not thread-safe, this module can be the safest way to work with mod_php (Apache’s module for processing PHP files).

mpm_worker

Apache’s mpm_worker module spawns processes that each manage multiple threads, with each thread handling one connection. Threads are more efficient than processes, so this MPM scales better than mpm_prefork.

Since threads outnumber processes, new connections can immediately take up a free thread rather than having to wait for a free process to come along.

mpm_event

Apache’s third MPM behaves like mpm_worker in most situations, but it’s been optimized to handle keep-alive connections. With mpm_worker, a connection holds onto its thread for as long as it stays alive, whether or not requests are actively being made. mpm_event instead sets aside dedicated threads for managing keep-alive connections, handing requests off to other threads, so idle connections don’t tie up workers.

It’s clear that Apache’s connection handling architecture offers considerable flexibility in choosing between connection- and request-handling algorithms. The options on offer are largely a product of the server’s long evolution, as well as the growing demand for concurrency as the internet has changed so dramatically.
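On a Debian-style Apache 2.4 install, for instance, the event MPM’s concurrency is tuned through a handful of directives. The values below are illustrative defaults rather than recommendations:

```apacheconf
# Illustrative mpm_event tuning (e.g. /etc/apache2/mods-available/mpm_event.conf)
<IfModule mpm_event_module>
    StartServers             2
    MinSpareThreads         25
    MaxSpareThreads         75
    ThreadsPerChild         25
    MaxRequestWorkers      150
    MaxConnectionsPerChild   0
</IfModule>
```

MaxRequestWorkers caps the total number of simultaneous requests, while ThreadsPerChild determines how many threads each child process carries.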

Apache vs NGINX – Handling Static and Dynamic Content

When pitting Nginx vs Apache, their ability to handle static and dynamic content requests is a common point of comparison. Let’s take a closer look.

NGINX is not designed to process dynamic content natively: it has to pass PHP and other dynamic content requests to an external processor, wait for the rendered content to be returned, and then relay the results back to the client.

Communication has to be set up between NGINX and a processor across a protocol which NGINX can accommodate (e.g. FastCGI, HTTP, etc.). This can make things a little more complicated than administrators may prefer, particularly when attempting to anticipate the volume of connections to be allowed — an extra connection will be necessary for every call to the relevant processor.
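As an illustration, a minimal NGINX block handing PHP requests to a PHP-FPM processor over FastCGI might look like the sketch below. The socket path is an assumption and varies by distribution; check your php-fpm pool configuration:

```nginx
location ~ \.php$ {
    # Hand dynamic requests to an external PHP-FPM processor over FastCGI.
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;
}
```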

Still, there are benefits to this method. Because the dynamic interpreter isn’t embedded in the worker process, its overhead applies only to dynamic content, while static content can be served through a simpler path in which the interpreter is contacted only when necessary.

Apache’s traditional file-based approach means it handles static content well, and its performance there is primarily a function of the MPM methods covered earlier.

But Apache can process dynamic content too, by embedding an interpreter for the relevant language into each worker instance. This lets it serve dynamic content within the server itself, with no need to depend on external components; these interpreters are enabled through the dynamically loadable modules.

Apache’s internal handling of dynamic content allows it to be configured more easily, and there’s no need to coordinate communication with other software. Modules may be swapped out if and when requirements for content shift.

NGINX or Apache – How Does Directory-level Configuration Work?

Another prominent difference administrators raise when discussing Apache vs NGINX relates to directory-level configuration, and whether it’s allowed within content directories. Let’s explore what this means, starting with Apache.

With Apache, additional configuration is permitted on a per-directory level: the server inspects hidden files within content directories, called .htaccess files, and interprets their directives.

Since .htaccess files can live in any content directory, Apache checks each directory on the path to a requested file and applies the directives found inside. This effectively allows the web server to be configured in a decentralized manner, typically used to implement URL rewriting, access restrictions, authentication and authorization, and caching policies.
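A typical .htaccess file combining several of these uses might look like this sketch (paths, the realm name, and the front-controller pattern are placeholders):

```apacheconf
# Rewrite pretty URLs to a front controller
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ index.php?path=$1 [QSA,L]

# Require authentication for this directory
AuthType Basic
AuthName "Restricted Area"
AuthUserFile /var/www/.htpasswd
Require valid-user
```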

Though all of this can also be configured in Apache’s primary configuration file, .htaccess files have some key advantages. First and foremost, they take effect instantly, without a server reload, as they’re interpreted whenever they’re located on a request path.

Secondly, .htaccess files enable non-privileged users to take control of specific elements of their web content without granting them complete control over the full configuration file.

This creates a simple way for certain software, such as content management systems, to configure environments without giving entry to central configuration files. It’s used by shared hosting providers for maintaining control of primary configurations, even while they offer clients their own directory control.

With NGINX, interpretation of .htaccess files is out of the question. It also lacks any mechanism for per-directory configuration outside the primary configuration file. As a result, it could be said to offer less flexibility than Apache, though this brings a number of benefits too.

Specifically, improved performance is one of the main advantages over the .htaccess directory-level configuration system. In standard Apache setups that allow .htaccess in any directory, the server checks for these files in every parent directory leading to the requested file, on every request, reading and interpreting any it finds along the way.

So NGINX can serve requests in less time, performing a single directory lookup and file read per request (assuming files are laid out in a conventional directory structure).

Another benefit NGINX offers with directory-level configuration relates to security. Distributing configuration access also distributes security responsibility to individual users, who may not all be trustworthy. When administrators retain control of the whole server, there’s less risk of security problems that grant access to people who can’t be relied upon.

How does File and URI-based Interpretation Work with NGINX and Apache?

When discussing Nginx vs Apache, it’s important to remember that the way a web server interprets requests and maps them to actual system resources is another vital difference.

NGINX was built to function as both a web server and a proxy server. The architecture needed to fulfil both roles means NGINX works primarily with URIs, translating to the filesystem only as required. This is evident in a number of ways in which its configuration files function.

NGINX has no mechanism for specifying configuration for a filesystem directory; instead, it parses the URI itself. Its main configuration blocks are server and location blocks: the server block interprets the host being requested, while location blocks match the parts of the URI that come after the host and port. A request is thus interpreted as a URI, not as a location on the filesystem.

For static files, requests are eventually mapped to a filesystem location: NGINX selects the server and location blocks that will handle the request, then combines the document root with the URI, adapting as the configuration specifies.
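A minimal sketch of this URI-first configuration style (the host name and paths are illustrative):

```nginx
server {
    listen 80;
    server_name example.com;        # the server block matches the requested host
    root /var/www/example;

    location / {                    # location blocks match parts of the URI
        try_files $uri $uri/ =404;  # only now is the URI mapped to the filesystem
    }
}
```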

Because NGINX parses requests as URIs rather than filesystem positions, it functions more simply in its various server roles: web, proxy, and mail. NGINX is configured by laying out appropriate responses to varied request patterns, and it only checks the filesystem when it’s ready to serve the request. This is why it doesn’t implement .htaccess files.

Apache, by contrast, can interpret requests as physical resources on a filesystem, or as URI locations, which demands a somewhat less specific assessment. Generally, Apache uses <Directory> or <Files> blocks for the former, and <Location> blocks for resources that are more abstract.

As Apache was conceived as a server for the web, its default behaviour is to interpret requests as traditional filesystem resources: it starts at the document root and appends the part of the request that comes after the host and port, attempting to locate an actual file. In essence, the filesystem hierarchy appears on the web as the available document tree.

Apache provides several alternatives for when a request fails to match the underlying filesystem. For example, an Alias directive can map to an alternative location, while <Location> blocks allow working with the URI rather than the filesystem. Regular-expression variants of these blocks can apply configuration throughout the filesystem with greater flexibility.
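Sketched in Apache configuration terms (the paths are placeholders):

```apacheconf
# Map a URL prefix to a directory outside the document root
Alias "/downloads" "/srv/files/downloads"
<Directory "/srv/files/downloads">
    Require all granted
</Directory>

# Work with the URI itself rather than the filesystem
<Location "/server-status">
    SetHandler server-status
    Require local
</Location>
```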

Although Apache can operate on both the webspace and the underlying filesystem, it leans heavily on filesystem methods. This is evident in several of its design choices, such as the use of .htaccess files for per-directory configuration. The Apache documentation itself advises against using URI-based blocks to restrict access when a request maps to the underlying filesystem.

NGINX vs Apache: How Do Modules Work?

When considering Apache vs NGINX, bear in mind that they can be extended with module systems, though they work in significantly different ways.

NGINX modules traditionally had to be chosen and compiled into the core software, as they could not be loaded dynamically (dynamic module loading was only introduced in NGINX 1.9.11). Some NGINX users feel it’s less flexible as a result, particularly those who dislike managing compiled software outside the distribution’s conventional packaging system.

Even though packages typically include the commonly used modules, you’d need to build the server from source to get a non-standard one. Still, this makes NGINX very deliberate: users dictate what they want from their server by including only the functionality they plan to utilize.

Many people consider NGINX more secure as a result, since arbitrary components cannot be hooked into the server. However, if a server is in a scenario where that seems likely, it has probably been compromised already.

Furthermore, NGINX modules offer rate limiting, geolocation, proxying support, rewriting, encryption, mail functionality, compression, and more.

Apache’s module system lets users load or unload modules dynamically to suit their individual needs. The Apache core is always present, while modules can be switched on and off to add or remove extra functionality hooked into the main server.

With Apache, this functionality is utilized for a wide range of tasks, and as the platform is so mature, users can choose from a large assortment of modules. Each of these may adjust the server’s core functionality in various ways, e.g. mod_php embeds a PHP interpreter into each running worker.

However, modules aren’t restricted to processing dynamic content: they also handle client authentication, URL rewriting, caching, proxying, encryption, compression, and more. Dynamic modules let users expand core functionality significantly, with no need for extensive extra work.

NGINX or Apache: How do Support, Documentation, and Other Key Elements Work?

When trying to decide between Apache and Nginx, another important factor is the ease of getting set up, and the level of support when integrating with other software.

The level of support for NGINX is growing as more users adopt it, but it still has some way to go to catch up with Apache in certain areas.

Once upon a time, it was hard to find detailed NGINX documentation in English, as the majority of its early documentation was in Russian. However, documentation has expanded as interest in NGINX has grown, and there’s now a wealth of administration resources on the official NGINX website and from third parties.

On the topic of third-party applications, documentation and support are getting easier to find, and package maintainers are increasingly offering a choice between auto-configuring for NGINX or Apache. Even without support, it’s easy to configure NGINX to complement other software, as long as the specific project documents its requirements clearly (headers, permissions, and so on).

Support for Apache is fairly easy to find, as it’s been such a popular server for such a long time. An extensive library of first- and third-party documentation is on offer out there, for the core server and task-based situations that require Apache to be hooked up with additional software.

As well as documentation, numerous online projects and tools include tooling to bootstrap themselves within an Apache setting, whether in the project itself or in the packages managed by the team responsible for a distribution’s packaging.

Apache receives decent support from external projects mainly due to its market share and the sheer number of years it’s been operating. Administrators are also more likely to have experience with Apache, not just because it’s so prevalent but because many of them begin in shared-hosting scenarios, which rely on Apache for its .htaccess-based distributed management capabilities.

NGINX vs Apache: Working with Both

Now that we’ve explored the advantages and disadvantages of NGINX and Apache, you should be in a better position to decide which is best for you. But a lot of users discover they can leverage the benefits of both servers by using them together.

The traditional configuration for using NGINX and Apache in unison positions NGINX in front of Apache as a reverse proxy, so that NGINX receives every client request. Why is this important? Because it takes advantage of NGINX’s fast processing and its ability to handle large numbers of connections concurrently.

For static content, NGINX is a fantastic server: files are served to the client directly and quickly. For dynamic content, NGINX proxies the request to Apache, which processes it and returns the rendered page; NGINX then sends the content back to the client.
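A minimal sketch of that arrangement, assuming Apache listens locally on port 8080 (the host name, paths, and port are illustrative):

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/example;

    # Serve static assets directly and quickly
    location ~* \.(css|js|png|jpg|gif|ico)$ {
        expires 30d;
    }

    # Proxy everything else to Apache for dynamic processing
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```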

Plenty of people find this the ideal setup, as it lets NGINX act as a sorting machine, handling all requests and passing on only those it cannot serve natively. By reducing the volume of requests that reach Apache, you reduce the blocking that occurs when Apache’s threads or processes are all occupied.

With this configuration, users can scale out through the addition of extra backend servers as required. NGINX may be configured to pass to a number of servers with ease, boosting the configuration’s performance and its resistance to failure.

Apache vs NGINX – Final Thoughts

It’s fair to say that NGINX and Apache offer quality performance — they’re flexible, they’re capable, and they’re powerful. Choosing which server works best for your needs depends largely on assessing your individual requirements and testing with those patterns you believe you’re likely to see.

A number of differences between these projects have a tangible effect on capabilities, performance, and the time required to implement each solution effectively. But these tend to be the result of numerous trade-offs that shouldn’t be dismissed easily. When all is said and done, there’s no web server that meets everyone’s needs every single time, so it’s best to utilize the solution that suits your objectives best.

Linux Server Security – Best Practices for 2020


Linux server security is at a decent level from the moment you install the OS. And that’s great to know because… hackers never sleep! They’re like digital vandals, taking pleasure – and sometimes money too – in inflicting misery on random strangers all over the planet.

Anyone who looks after their own server appreciates the fact that Linux is highly secure right out of the box. Naturally, it isn’t completely watertight, but it does do a better job of keeping you safe than most other operating systems.

Still, there are plenty of ways you can improve it further. So here are some practical ways to keep the evil hordes from the gates. It will probably help if you’ve tinkered under the hood of a web server before, but don’t think you have to be a tech guru or anything like that.

Deactivate network ports when not in use


Leave a network port open and you might as well put out the welcome mat for hackers. To maintain web host security, use the “netstat” command to see which network ports are currently open, and which services are using them. Closing ports you don’t need shuts off another avenue of attack.

You might also want to set up “iptables” to deactivate open ports, or use the “chkconfig” command to shut down services you won’t need. Firewalls like CSF let you automate iptables rules, so you could just do that. If you use the Plesk platform as your hosting management software, pay attention to this article about Plesk ports.

The SSH port is usually 22, and that’s where hackers will expect to find it. To enhance Linux server security, change it to another port number you’re not already using for another service. This makes it harder for the bad guys to inject malware into your server. To make the change, edit /etc/ssh/sshd_config and set the Port directive to the new number.
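For instance (2222 here is just an example; pick an unused port and remember to allow it through your firewall before reloading sshd):

```text
# /etc/ssh/sshd_config
Port 2222
```

Apply the change with something like `sudo systemctl reload sshd`, and keep your current session open until you’ve confirmed you can log in on the new port.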

Update Linux Software and Kernel

Update software for better Linux server security

Half of the Linux security battle is keeping everything up to date, because updates frequently add extra security fixes. Linux offers all the tools you need to do this, and upgrading between versions is simple too. Every time a new security update becomes available, review it and install it as soon as you can, using your distribution’s package manager: yum on RPM-based systems, or apt-get/dpkg on Debian-based ones.

# yum update

OR

# apt-get update && apt-get upgrade

It’s possible to set up RedHat / CentOS / Fedora Linux so that yum package update notifications are sent to your email. This is great for Linux security, and you can also apply all security updates via a cron job. On Debian / Ubuntu Linux, Apticron can send security update notifications, or you can use the apt-get/apt command to configure unattended-upgrades for your server:

$ sudo apt-get install unattended-upgrades apt-listchanges bsd-mailx
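Once installed, unattended-upgrades is switched on with a small APT configuration file; this is the stock Debian/Ubuntu form:

```text
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

The first line refreshes package lists daily; the second applies pending security upgrades automatically.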

Reduce Redundant Software to Increase Linux Security

For greater Linux server security hardening, it’s worth doing a spring clean (at any time of the year) of your installed web services. Surplus apps accumulate easily, and you’ll probably find you don’t need half of them. In the future, try not to install software you don’t need: it’s a simple, effective way to reduce potential security holes. Use your package manager (yum on RPM-based systems, or apt-get/dpkg on Debian-based ones) to go through your installed software and remove anything you no longer need.

# yum list installed
# yum list packageName
# yum remove packageName

OR

# dpkg --list
# dpkg --info packageName
# apt-get remove packageName

Turn off IPv6 to boost Linux server security


IPv6 is better than IPv4, but you probably aren’t getting much out of it, because neither is anyone else. Hackers do get something from it, though: they use it to send malicious traffic. So shutting down IPv6 closes the door in their faces. Edit /etc/sysconfig/network and change the settings to read NETWORKING_IPV6=no and IPV6INIT=no. Simple as that.
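Note that /etc/sysconfig/network is a RedHat-style file; on most other modern distributions the equivalent is a sysctl setting, along these lines (the file name is an illustrative convention):

```text
# /etc/sysctl.d/70-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```

Apply it with `sudo sysctl --system`.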

Turn off root logins to improve Linux server security

Linux servers the world over allow the use of “root” as a username, and hackers know it: they’ll often try to subvert web host security by discovering your password before slithering inside. That’s why you should never sign in as the root user. In fact, you really ought to remove it as an option, creating one more level of difficulty for hackers and stopping them from getting past your security with just a lucky guess.

So, all it takes is for you to create a separate username, then use the “sudo” special access command to execute root-level commands. Sudo is great because you can grant it to any users who should have admin commands but not root access; you don’t want to compromise security by giving them both.

Before you deactivate the root account, check that you’ve created and authorized your new user. Next, open /etc/ssh/sshd_config in nano or vi, locate the “PermitRootLogin” parameter, change the default setting of “yes” to “no”, and save your changes.
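The relevant sshd_config lines end up looking like this (the AllowUsers line is optional, and “deploy” is a placeholder username):

```text
# /etc/ssh/sshd_config
PermitRootLogin no
AllowUsers deploy
```

Reload sshd afterwards (e.g. `sudo systemctl reload sshd`), and verify the new user can still log in before closing your session.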

GnuPG encryption for web host security


When data is on the move across your network, hackers will frequently attempt to compromise Linux server security by intercepting it. Always make sure anything going to and from your server is protected with encryption, certificates, and keys. One way to do this is with an encryption tool like GnuPG, which uses a system of keys to ensure nobody can snoop on your info in transit.

Change /boot to read-only

All files related to the kernel on a Linux server are in the “/boot” directory. The standard access level for the directory is “read-write”, but it’s a good idea to change it to “read-only”. This stops anyone from modifying your extremely important boot files.

Just edit the /etc/fstab file and add the line LABEL=/boot /boot ext2 defaults,ro 1 2 to the bottom. It’s completely reversible: to make future changes to the kernel, switch the mount back to “read-write” mode, and once you’re done, revert to “read-only”.

A better password policy enhances Web Host Security


Passwords are always a security problem, because humans are. People can’t be bothered to come up with lots of different passwords, or struggle to remember them. So what happens? They reuse the same ones in different places, or worse, pick combinations that are easy to remember, like “password” or “abcde”. Basically, a gift to hackers.

Make it a requirement for passwords to contain a mix of upper and lower case letters, numbers, and symbols. You can enable password ageing to make users discard previous passwords at fixed intervals, and consider banning old passwords, so once one has been used, it’s gone forever. The “faillog” command lets you limit the number of failed login attempts allowed and lock user accounts, which is ideal for preventing brute-force attacks.
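Password ageing for newly created accounts, for example, is controlled through /etc/login.defs; the values below are illustrative, not recommendations:

```text
# /etc/login.defs – ageing defaults applied to newly created accounts
PASS_MAX_DAYS   90
PASS_MIN_DAYS   1
PASS_WARN_AGE   7
```

For existing accounts, the same maximum can be applied with `chage -M 90 userName`.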

So just use a strong password all the time

Passwords are your first line of defense, so make sure they’re strong. Many people don’t really know what a good password looks like: it needs to be complex, but also long enough to be as strong as possible.

At admin level, you can help users by securing Plesk Obsidian and enforcing the use of strong passwords which expire after a fixed period. Users may not like it, but you need to make them understand that it saves them a lot of possible heartache.

So what are the ‘best practices’ when setting up passwords?

  1. Use passwords that are as long as you can manage
  2. Avoid words that appear in the dictionary (like “blue grapes”)
  3. Steer clear of number replacements that are easy to guess (like “h3ll0”)
  4. Don’t reference pop culture (such as “TARDIS”)
  5. Never use a password in more than one place
  6. Change your password regularly and use a different one for every website
  7. Don’t write passwords down, and don’t share them. Not with anybody. Ever!

The passwords you choose should increase web host security by being obscure and hard to work out. You’ll also help your security efforts if you give your root (Linux) or RDP (Windows) login its own unique password.

Linux server security needs a firewall


A firewall is a must-have for web host security: it’s your first line of defense against attackers, and you’re spoiled for choice. NetFilter is built into the Linux kernel; combined with iptables, you can use it to resist DDoS attacks.

TCPWrapper is a host-based access control list (ACL) system that filters network access for different programs. It has host name verification, standardized logging and protection from spoofing. Firewalls like CSF and APF are also widely used, and they also come with plugins for popular panels like cPanel and Plesk.
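As a sketch, a minimal default-deny iptables ruleset in iptables-restore format might look like this (the open ports are assumptions; adapt them to your services):

```text
# Minimal ruleset for iptables-restore: default-deny inbound,
# allow loopback, established traffic, SSH, HTTP and HTTPS.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
COMMIT
```

Load it with `iptables-restore < rules.v4`, ideally from a console session in case you lock yourself out.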

Locking User Accounts After Unsuccessful Logins

For Linux security, the faillog command shows unsuccessful login attempts and lets you assign limits to how many times a user can get their login credentials wrong before the account is locked. It formats the contents of the failure log from the /var/log/faillog database. To view unsuccessful login attempts, enter:

faillog

To open up an account locked in this way, run:

faillog -r -u userName

With Linux security in mind be aware that you can use the passwd command to lock and unlock accounts:

lock Linux account

passwd -l userName

unlock Linux account

passwd -u userName

Try disk partitions for better Web host security


If you partition your disks, you’ll separate OS files from user files, tmp files, and programs. Try disabling SUID/SGID access (nosuid) and direct binary execution (noexec) on partitions that don’t need them, such as /tmp.

Avoid Using Telnet, FTP, and Rlogin / Rsh Services

With the majority of network configurations, anyone on the same network with a packet sniffer can intercept FTP, telnet, or rsh commands, usernames, passwords, and transferred files. To avoid compromising Linux server security, use OpenSSH, SFTP, or FTPS (FTP over SSL) instead, which give file transfers the benefit of SSL or TLS encryption. To remove outdated services like NIS or rsh, enter this yum command:

# yum erase xinetd ypserv tftp-server telnet-server rsh-server

For Debian/Ubuntu Linux server security, give the apt-get command/apt command a try to get rid of non-secure services:

$ sudo apt-get --purge remove xinetd nis yp-tools tftpd atftpd tftpd-hpa telnetd rsh-server rsh-redone-server

Use an Intrusion Detection System

NIDS, or network intrusion detection systems, keep watch for malevolent activity against Linux server security, like DoS attacks, port scans, and intrusion attempts.

For greater Linux server security hardening, it’s recommended that you use integrity checking software before you take a system into a production environment online. If possible, install AIDE before connecting the system to a network; it’s a host-based intrusion detection system (HIDS) which monitors and analyses a computing system’s internals. You would be wise to use rkhunter rootkit detection software as well.

Logs and Audits

You can’t manage what you don’t measure, so if you want to stop hackers then your system needs to log every single time that intruders try to find a way in. Syslog stores data in the /var/log/ directory by default, and it can also help you identify the surreptitious entry routes that misconfigured software can present.
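As an illustration of what those logs can tell you, this sketch counts failed SSH password attempts per source IP. The here-doc stands in for the real log; on an actual system you would read /var/log/auth.log (Debian/Ubuntu) or /var/log/secure (RHEL), and the exact line format varies by distribution:

```shell
#!/bin/sh
# Count failed SSH password attempts per source IP from sample log lines.
grep 'Failed password' <<'EOF' | awk '{for (i=1; i<=NF; i++) if ($i == "from") print $(i+1)}' | sort | uniq -c | sort -rn
Apr 2 10:01:12 host sshd[311]: Failed password for root from 203.0.113.9 port 4222 ssh2
Apr 2 10:01:15 host sshd[311]: Failed password for root from 203.0.113.9 port 4223 ssh2
Apr 2 10:02:40 host sshd[340]: Accepted password for alice from 198.51.100.7 port 5100 ssh2
Apr 2 10:03:01 host sshd[355]: Failed password for invalid user admin from 198.51.100.7 port 5111 ssh2
EOF
```

Repeated high counts from a single address are a strong hint that a brute-force attempt is under way.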

Secure Apache/PHP/NGINX server

Edit httpd.conf file and add:

ServerTokens Prod
ServerSignature Off
TraceEnable Off
Options all -Indexes
Header always unset X-Powered-By

To restart the httpd/apache2 server on Linux, run:

$ sudo systemctl restart apache2.service

OR

$ sudo systemctl restart httpd.service
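Since the heading also covers NGINX, here is a rough equivalent for an NGINX server block; server_tokens and add_header are core directives, and the specific headers shown are a common baseline rather than an exhaustive list:

```
# nginx.conf (http or server context)
server_tokens off;                          # hide the NGINX version in headers and error pages
add_header X-Frame-Options SAMEORIGIN;      # basic clickjacking protection
add_header X-Content-Type-Options nosniff;  # stop MIME-type sniffing
```

Then reload with `sudo systemctl reload nginx`.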

Activate CMS auto-updates


CMSs are quite complex, so hackers are always trying to exploit security loopholes in them. Joomla!, Drupal, and WordPress are all hugely popular platforms, so developers are constantly working on new security fixes. This means updates are important and should be applied straight away. The best way to ensure this happens is to activate auto-updates, so you won’t even have to think about it. Your host isn’t responsible for the content of your website, so it’s up to you to ensure you update it regularly. And it won’t hurt to back it up once in a while either.

Backup regularly


Regular and thorough backups are probably your most important security measure, because they let you recover from a security disaster. Typical UNIX backups use the dump and restore programs, and we recommend them. For maximum Linux security, you need to back up to encrypted external storage, which means something like a NAS server or a cloud-based service.
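A minimal sketch of an encrypted backup using tar piped through openssl; the directory, destination, and passphrase here are illustrative, and in real use you would read the passphrase from a protected file rather than putting it on the command line:

```shell
#!/bin/sh
# Create sample data, archive and encrypt it, then verify the round trip.
mkdir -p /tmp/demo_backup_src
echo "important data" > /tmp/demo_backup_src/notes.txt

# Archive and encrypt in one pipeline (AES-256, key derived with PBKDF2).
tar czf - -C /tmp demo_backup_src \
  | openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:changeme \
  > /tmp/backup.tar.gz.enc

# Restore: decrypt and unpack elsewhere to confirm the backup is usable.
mkdir -p /tmp/demo_restore
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:changeme \
  < /tmp/backup.tar.gz.enc | tar xzf - -C /tmp/demo_restore
cat /tmp/demo_restore/demo_backup_src/notes.txt
```

Always test a restore like this; an untested backup is not really a backup.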

Protect Email Directories and Files

These Linux security tips wouldn’t be complete without telling you that Linux has some great ways to protect data against unauthorized access. File permissions and MAC are great at stopping intruders from getting at your data, but all the Linux permissions in the world don’t count for anything if they can be circumvented, for instance by transplanting a hard drive into another machine. In such cases you need to protect Linux files and partitions with these tools:

  • For password-protected file encryption and decryption, use the gpg command.
  • Both Linux and UNIX can add password protection to files using openssl and other tools.
  • The majority of Linux distributions support full disk encryption. You should ensure that swap is encrypted too, and only allow bootloader editing via a password.
  • Make sure root mail is forwarded to an account that you check.
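A sketch of the symmetric (passphrase-based) gpg mode mentioned above; the file name and passphrase are placeholders, and in real use you should let gpg prompt interactively instead of passing --passphrase on the command line, where it is visible in the process list:

```shell
#!/bin/sh
# Encrypt a file with a passphrase (symmetric AES-256), then decrypt it.
echo "secret plan" > /tmp/plan.txt
gpg --batch --yes --pinentry-mode loopback --passphrase 'changeme' \
    --symmetric --cipher-algo AES256 /tmp/plan.txt   # writes /tmp/plan.txt.gpg
gpg --batch --yes --pinentry-mode loopback --passphrase 'changeme' \
    -o /tmp/plan.dec -d /tmp/plan.txt.gpg            # decrypt back to a file
cat /tmp/plan.dec
```

Remember to delete the plaintext copy (for example with shred) once the encrypted version is verified.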

System Accounting with auditd

Auditd handles system auditing. Its job is to write audit records to the disk. The daemon reads its rules from /etc/audit/audit.rules at start-up. You have various options for amending that file, such as setting the location of the audit log. Auditd will help you gain insight into these common events:

  • System startup and shutdown events (reboot/halt).
  • The date and time an event happened.
  • The user who triggered the event (for example, someone attempting to access the /path/to/topsecret.dat file).
  • The type of event (edit, access, delete, write, update file, and commands).
  • Whether the event succeeded or failed.
  • Events that modify the system date and time.
  • Who modified network settings.
  • Actions that change user or group information.
  • Who changed a given file, and so on.
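Watch rules like the following, placed in the audit rules file (/etc/audit/audit.rules, or a file under /etc/audit/rules.d/ on modern distributions), capture exactly these kinds of events; the -k keys are arbitrary labels of your choosing for later searching with ausearch:

```
# Log writes and attribute changes to the account databases
-w /etc/passwd -p wa -k passwd_changes
-w /etc/group  -p wa -k group_changes

# Log use of the system calls that modify the date and time (64-bit)
-a always,exit -F arch=b64 -S settimeofday -S adjtimex -k time_change
```

After editing, restart auditd (or load the rules with auditctl) and query them with `ausearch -k passwd_changes`.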

Use Kerberos

Kerberos is a third-party service offering authentication that aids Linux security hardening. It uses shared secret cryptography and assumes that packets moving on a non-secure network are readable and writable. Kerberos is based on symmetric-key cryptography and so needs a key distribution center. Kerberos lets you make remote login, remote copy, secure inter-system file copying, and other risky actions safer and it also gives you more control over them. Kerberos authentication prevents unauthorized users from spying on network traffic and grabbing passwords.

Linux Server Security Summary

That’s a lot of tips, but you need to keep your Linux server security up to date in a world of thieves and vandals. These despicable beings are hard at work all the time, always looking to exploit any chink in a website’s armor. If you give them the slimmest opportunity to disrupt your business, they will happily take advantage of it. Since there’s such a huge army of them, you need to make sure that your castle has extremely strong defenses.

Let us know how many of these tips you have implemented, or if you have any questions in the comments below.