How To Find a File In Linux From the Command Line

Need to know how to find a file in Linux? Well, surprise, surprise, you’re going to need the find command in Linux to scour your directory or file system. The Linux find command can filter objects recursively using a simple conditional mechanism, and if you use the -exec flag, you’ll also be able to find a file in Linux straightaway and process it without needing to use another command.

Locate Linux Files by Their Name or Extension

Type find into the command line to track down a particular file by its name or extension. If you want to look for *.err files in the /home/username/ directory and all sub-directories, try this: find /home/username/ -name "*.err"
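If you want to match more than one extension in a single pass, find’s -o (OR) operator combines tests. Here’s a small sketch you can run anywhere; the directory and file names are made up for illustration:

```shell
# Scratch directory with a few files to search through
mkdir -p /tmp/find-name-demo
cd /tmp/find-name-demo
touch app.err app.log readme.txt

# \( ... \) groups the two -name tests; -o means "or"
find . \( -name "*.err" -o -name "*.log" \)
```

This prints ./app.err and ./app.log but skips readme.txt, because only the grouped tests have to match.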

Typical Linux Find Commands and Syntax

find command expressions look like this:

find options starting/path expression

The options attribute controls the behavior and optimization method of the find process. The starting/path attribute defines the top-level directory where the find command in Linux begins the filtering process. The expression attribute controls the assessments that scour the directory tree to create output.

Let’s break down a Linux find command that does more than search by name:

find -O3 -L /var/www/ -name "*.html"

This command enables the top-level optimization (-O3) and permits find to follow symbolic links (-L). It searches the whole directory hierarchy under /var/www/ for files whose names end in .html.

Basic Examples

1. find . -name thisfile.txt

If you need to know how to find a file in Linux called thisfile.txt, this looks for it in the current directory and its sub-directories.

2. find /home -name "*.jpg"

Look for all .jpg files in /home and directories below it. (Quote the pattern so the shell doesn’t expand it before find sees it.)

3. find . -type f -empty

Look for an empty file inside the current directory.

4. find /home -user randomperson -mtime -6 -iname "*.db"

Look for all .db files (ignoring text case) that have been changed in the preceding 6 days by a user called randomperson.

Options and Optimization for Find Command for Linux

find is configured to ignore symbolic links (shortcut files) by default. If you’d like the find command to follow and show symbolic links, just add the -L option to the command, as we did in this example.

find can help Linux find files by name, and it optimises its filtering for performance. You can choose between three levels of optimisation: -O1, -O2, and -O3. -O1 is the default setting, and it causes find to filter according to filename before it runs any other tests.

-O2 filters by name and type of file before carrying on with more demanding filters to find a file in Linux. Level -O3 reorders all tests according to their relative expense and how likely they are to succeed.

  • -O1 – (Default) filter based on file name first
  • -O2 – File name first, then file-type
  • -O3 – Allow find to automatically re-order the search based on efficient use of resources and likelihood of success
  • -maxdepth X – Search this directory along with all sub-directories to a level of X
  • -iname – Search while ignoring text case.
  • -not – Only produce results that don’t match the test case
  • -type f – Look for files
  • -type d – Look for directories
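These options compose. As a sketch you can run in a scratch directory (all paths below are invented for the demo), here is -maxdepth, -type, -not, and -iname working together:

```shell
# Build a small tree: two shallow logs, one deep log, one archive
mkdir -p /tmp/find-opts-demo/sub/deeper
cd /tmp/find-opts-demo
touch top.log sub/nested.log sub/archive.gz sub/deeper/deep.log

# Regular files only (-type f), at most two levels down (-maxdepth 2),
# skipping .gz archives (-not -name "*.gz"), case-insensitive name match
find . -maxdepth 2 -type f -not -name "*.gz" -iname "*.log"
```

Only ./top.log and ./sub/nested.log come back: deep.log sits at depth three and archive.gz fails the -not test.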

Find Files by When They Were Modified

The Linux find command contains the ability to filter a directory hierarchy based on when the file was last modified:

find / -name "*jpg" -mtime -5

find /home/randomuser/ -name "*jpg" -mtime -4

The first Linux find command pulls up a list of files in the whole system that end with the characters jpg and have been modified in the preceding 5 days (-mtime -5 means “less than 5 days ago”; a bare -mtime 5 would match only files modified exactly 5 days ago). The second filters randomuser’s home directory for the same kind of files modified in the preceding 4 days.
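The sign on the -mtime argument is easy to get wrong, so here is a small sketch you can run (file names are made up; touch -d is the GNU way to backdate a file):

```shell
mkdir -p /tmp/find-mtime-demo
cd /tmp/find-mtime-demo
touch fresh.jpg
touch -d "10 days ago" old.jpg   # GNU touch: backdate the timestamp

find . -name "*.jpg" -mtime -5   # modified within the last 5 days
find . -name "*.jpg" -mtime +5   # modified more than 5 days ago
```

The first find prints only ./fresh.jpg, the second only ./old.jpg.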

Use Grep to Find Files Based on Content

The find command in Linux is great but it can only filter the directory tree according to filename and meta data. To search files based on what they contain you’ll need a tool like grep. Take a look:

find . -type f -exec grep "forinstance" '{}' \; -print

This goes through every object in the current directory tree (.) that’s a file (-type f) and then runs grep "forinstance" for every file that matches, then prints them on the screen (-print). The curly braces ({}) are a placeholder for those results matched by the Linux find command. The {} go inside single quotes (') so that grep isn’t given a misshapen file name. The -exec command is ended with a semicolon (;), which also needs an escape (\;) so that it doesn’t end up being interpreted by the shell.

Before -exec was implemented, xargs would have been used to create the same kind of output:

find . -type f -print | xargs grep "forinstance"
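One caveat with the plain pipe: filenames containing spaces get split apart by xargs. The usual fix is find’s -print0 with xargs -0, which delimits names with NUL bytes. A runnable sketch, using invented filenames:

```shell
mkdir -p /tmp/xargs-demo
cd /tmp/xargs-demo
echo "forinstance" > "file with spaces.txt"
echo "forinstance" > plain.txt

# -print0 / -0 pass NUL-separated names, so spaces survive intact;
# grep -l lists the matching file names rather than the matching lines
find . -type f -print0 | xargs -0 grep -l "forinstance"
```

Both files are listed; with the unquoted pipe, “file with spaces.txt” would have been broken into three bogus arguments.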

How to Locate and Process Files Using the Find Command in Linux

The -exec option runs commands against every object that matches the find expression. Let’s see how that looks:

find . -name "rc.conf" -exec chmod o+r '{}' \;

This filters all objects in the current directory tree (.) for files named rc.conf and runs the chmod o+r command to alter file permissions of the results that find returns.
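You can rehearse this safely in a scratch directory; the path below is made up, and stat -c is the GNU flavour of stat:

```shell
mkdir -p /tmp/exec-demo
cd /tmp/exec-demo
touch rc.conf
chmod 600 rc.conf                  # owner read/write only to start

find . -name "rc.conf" -exec chmod o+r '{}' \;
stat -c '%a' rc.conf               # GNU stat: prints 604 (others gained read)
```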

The commands that -exec runs are executed from find’s starting directory, not from the directory containing each match. Use -execdir to execute the command in the directory where the match is sitting, because this can be more secure and improve performance under certain circumstances.

The -exec and -execdir options run without prompting. If you’d like to see prompts before they do anything, swap -exec for -ok, or -execdir for -okdir.

How To Manage Files Using Plesk?

Let’s say you have a website that’s all ready to go on your laptop/desktop and you’d like to use File Manager to upload it to the Plesk on Linux server:

  1. On your machine, you’ll need to take the folder with all of your website’s files on it and add it to a compressed archive in one of the usual formats (ZIP, RAR, TAR, TGZ, or TAR.GZ).
  2. In Plesk, go to Files, click the httpdocs folder to open it, click Upload, choose the archive file, and then click Open.
  3. As soon as you’ve uploaded it, click the checkbox alongside it, and then click Extract Files.

How to Edit Files in File Manager

File Manager lets you edit your website pages by default. To do this you can use:

  • An HTML editor or a “what-you-see-is-what-you-get” style of editor, which is a nice option because it adds the HTML tags for you. If you aren’t all that confident with HTML then this can be a helpful option.
  • Code editor. When you open HTML files with this one you’ll be presented with text where the HTML syntax is highlighted. If you’re comfortable with adding HTML tags yourself then code editor is for you.
  • Text editor. HTML files are opened as ordinary text with this one.

Your Plesk administrator may have already set up the Rich Editor extension, in which case you can use it for HTML file editing. Rich Editor works in a what-you-see-is-what-you-get fashion, like the HTML editor, although it’s better specced, with features like a spellchecker for instance.

Here’s how to use File Manager to edit a file:

  1. Put the cursor over the file and the line that corresponds with it will show a highlight.
  2. Open the context menu for the file by clicking on it.
  3. Click Edit in … Editor (this will vary depending on your chosen editor).

How to Change Permissions with File Manager

There are some web pages and files that you don’t necessarily want to share with the world, and that’s where altering their permissions settings can come in handy.

To achieve this, find the item you want to restrict Internet access for like this:

  1. Place your cursor over it and wait for the highlight to appear as in the previous example.
  2. Click on the file to open its context menu and do the same again on Change Permissions.
  3. Make your change and then hit OK. You can find out more about how to view and alter permissions in Setting File and Directory Access Permissions.

File Manager’s default approach is to change permissions in a non-recursive manner, so sub-files and directories aren’t affected by the changed permissions of the higher-level directories they belong to. With Plesk for Linux, you can make File Manager modify permissions in a recursive manner, assuming that your Plesk administrator set up the Permissions Recursive extension and that you understand the octal notation of file permissions.

To enable recursive editing of access permissions:

  1. Place the cursor over the directory and wait for the highlight.
  2. Click to open its context menu and then again on Set Permissions Recursive.
  3. Now you can edit them. “Folder Permissions” is talking about the higher-level directory and any of its associated sub-directories. “File Permissions” applies to sub-files in this instance.
  4. When you’ve completed your permission amendments, click OK.

File Search in File Manager

You’ve got a little bit of latitude with file searches. You can have File Manager hunt for a specific bit of text either in the file name, in the content, or in both. You can choose how you want it to search for files by clicking on the icon that appears adjacent to your chosen search field, and then clicking on whichever type you prefer.

CentOS Project Announces Early End-of-Life Date for CentOS 8

We recently found out that the CentOS Project accelerated the End-of-Life date for CentOS 8, meaning that no further operating system updates will be available after December 31, 2021. In the meantime, though, Plesk will continue supporting both CentOS 7 and 8 and CloudLinux 7 and 8 until their planned end of life dates.

CentOS also announced other critical changes to its roadmap that have an impact on the Plesk products and our users and partners:

  • CentOS 8 will be transformed into an upstream (development) branch of Red Hat Enterprise Linux called CentOS Stream, where previous CentOS versions are part of the stable branch.
  • The CentOS 7 life cycle remains unchanged: updates and security patches will be available until June 30, 2024, although this timing is subject to change.

For additional information on the CentOS Project changes, you can also read their detailed blog post or refer to the CentOS FAQ page.

Plesk Support for CentOS 8

If you’re wondering how the CentOS 8 End-of-Life policy could affect your Plesk, here are some workarounds you may want to know about. The good news is that Plesk has long invested in product support for Ubuntu, and will continue to support CentOS 8.

Plesk Obsidian supports Ubuntu 20.04 LTS starting from version 18.0.29, and Plesk Onyx 17.8 supports Ubuntu 18.04 LTS. Nonetheless, if you’re a Plesk Onyx user, note that from April 22, 2021, it will no longer be available for new purchases and will stop receiving further development and technical support. Please read this article to learn how to upgrade to the latest Plesk Obsidian and how to automate renewals to keep your Plesk updated at all times.

When to Transition and Other Alternatives

CentOS 7 is the most popular choice among Plesk users. It will be officially supported by RHEL until June 30, 2024, and Plesk will support it until that date. CentOS 7 remains a good choice for a new server.

We will consider supporting CentOS Stream as an alternative to CentOS 8 based on industry adoption. People who decide to follow the official RHEL distro will then have CentOS Stream as an option. RHEL states that switching from CentOS 8 to CentOS Stream will be an in-place, smooth process.

Additionally, we also plan to deliver AlmaLinux OS support for Plesk in summer 2021. AlmaLinux OS is a free new RHEL fork from the CloudLinux team, and it’s been developed in close co-operation with the community. 

Another good thing is that Plesk will also keep supporting CloudLinux OS 8. This additional supported operating system provides an upgrade path for customers with CloudLinux 6 or 7 deployments. CloudLinux is another commercially supported operating system that many of our partners benefit from. CloudLinux includes many advanced features such as improved user resource limitations, increased user visibility, and advanced customer isolation.

If you need additional information about this topic, please reach out to our support team. They will be happy to support you. And if you want to share your thoughts with us, drop us a line in the comment section below. 

Linux System Administration – Getting Started

If you’re new to Linux system administration this guide offers you some useful tips and an overview of some of the common issues that may cross your path. Whether you’re a relative newcomer or a Linux administration stalwart, we hope that this collection of Linux commands will prove useful.

Basic Configuration

One of your first tasks in the administration of Linux is configuring the system, but it’s a process that often throws up a few hurdles. That’s why we’ve collected some tips to help you ‘jump’ over them. Let’s go through it:

Set the Hostname

Use these commands to check that the hostname is set correctly:


hostname -f

The first one needs to show your short hostname, while the one that follows it should show your FQDN (fully qualified domain name).

Setting the Time Zone

In Linux administration, setting your service time zone to the one that most of your users share is something that they’ll no doubt appreciate. But if they’re scattered across continents then it’ll be better to play it safe and go for UTC (Coordinated Universal Time), also known as GMT (Greenwich Mean Time).

Operating systems all have their own ways of letting you switch time zones:

Setting the Time Zone in Ubuntu or Debian

Type this next command and answer the questions that pop up when prompted:

dpkg-reconfigure tzdata

Setting the Time Zone in Arch Linux or CentOS 7

  1. See the list of time zones that are available:

timedatectl list-timezones

Use the Up, Down, Page Up and Page Down keys to find the one you’re after, then either copy it or write it down. Hit q to exit.

  2. Set the time zone (change Europe/London to the correct zone):

timedatectl set-timezone 'Europe/London'

Manually set the Time Zone – Linux System Administration

Locate the correct zone file in /usr/share/zoneinfo/ and link it to /etc/localtime. Here are some examples:

Universal Coordinated Time:

ln -sf /usr/share/zoneinfo/UTC /etc/localtime

Eastern Standard Time:

ln -sf /usr/share/zoneinfo/EST /etc/localtime

American Central Time (including Daylight Savings Time):

ln -sf /usr/share/zoneinfo/US/Central /etc/localtime

American Eastern Time (including Daylight Savings Time):

ln -sf /usr/share/zoneinfo/US/Eastern /etc/localtime
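Before linking a zone file system-wide, you can sanity-check a zone name with the TZ environment variable, which points date at the same files under /usr/share/zoneinfo and needs no root access:

```shell
# Print the zone abbreviation the named zone file produces
TZ=UTC date +%Z          # prints UTC
TZ=US/Eastern date +%Z   # prints EST or EDT, depending on the date
```

If date complains or falls back to a bare offset, the zone name is probably misspelled.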

Configure the /etc/hosts File

In Linux System Administration the /etc/hosts file offers a list of IP addresses and their matching hostnames. This lets you set hostnames for an IP address in one location on the local machine, and then have many applications link to outside resources using their hostnames. The system of host files goes before DNS, so hosts files will always be referenced before a DNS query. This means that /etc/hosts can help you maintain small “internal” networks which as someone involved with Linux administration you might want to use in development or for managing clusters.

It’s a requirement of some applications that the machine identifies itself properly in the /etc/hosts file. Because of this, we strongly suggest you configure the /etc/hosts file not long after deployment. A minimal file looks something like this (the second address and the names are placeholders):   localhost.localdomain   localhost   username

You can specify several hostnames separated by spaces on each line, but each line needs to start with exactly one IP address. In the example above, swap out for the IP address of your machine. Consider some extra /etc/hosts entries (again with placeholder addresses and names):  backend

Here, every request for the domain or hostname resolves to the IP address, which circumvents the DNS records for and returns an alternative website.

The second line tells the system to look to for the host backend. These types of host entries make administration of Linux easier – they are helpful for using “back channel” or “private” networks to get into other servers belonging to a cluster without the need to route traffic over the public network.
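After editing /etc/hosts you can confirm how a name will actually resolve. getent consults the same lookup order (hosts file first, then DNS) that most applications use:

```shell
# Should print the address /etc/hosts assigns to localhost
getent hosts localhost
```

If a hostname resolves to an address you didn’t expect, check /etc/hosts before blaming DNS.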

Network Diagnostics

Now let’s take a look at some simple Linux commands that are useful for assessing and diagnosing network problems. If you think you might be having connection problems, you can add the output from the appropriate commands to your support ticket, which will assist staff in resolving your issues. This is especially helpful if your network problems are intermittent.

The ping Command

The ping command lets you test the quality of the connection between the local machine and an external machine or address. These commands “ping” two hosts (the domains here are placeholders):



They send an ICMP packet, which is a small amount of data, to the remote host, then they await a response. If the system can make a connection, it will let you know the “round trip time” for each packet. Here’s what that looks like when pinging a host (the hostname and addresses shown are placeholders):

PING ( 56 data bytes

64 bytes from icmp_seq=0 ttl=54 time=17.721 ms

64 bytes from icmp_seq=1 ttl=54 time=15.374 ms

64 bytes from icmp_seq=2 ttl=54 time=15.538 ms

The time field tells you how long each individual packet took to complete the round trip in milliseconds. In Linux Administration, when you’ve got all the information you want, you can interrupt the process using Control+C. It will then show you some statistics that look like this:

--- ping statistics ---

4 packets transmitted, 4 received, 0% packet loss, time 3007ms

rtt min/avg/max/mdev = 34.880/41.243/52.180/7.479 ms

These are the ones you should take note of:

  • Packet Loss: the difference between how many packets were sent and how many came back to you, expressed as a percentage.
  • Round Trip Time (rtt) tells you all the ping responses. “min” is the fastest packet round trip, and in this case, it took 34.88 milliseconds. “avg” is the average round trip, and that took 41.243 milliseconds. “max” is the longest a packet took, which was 52.18 milliseconds. “mdev” shows a single standard deviation unit, and for these four packets, it was 7.479 milliseconds.

In your administration of Linux, the ping command is useful for giving you a rough measure of point-to-point network latency, and if you want to establish that you definitely are connected to a remote server then this is the tool that can tell you.

The traceroute Command

The traceroute command tells you a bit more than the ping command. It can trace the packet’s journey from the local machine to the remote machine and report the number of hops (meaning each step using an intermediate server) it took on the way. This can be useful when you’re investigating a network issue because packet loss in one of the first few hops tells you that the problem may be with the user’s Internet service provider (ISP) or local area network (LAN), rather than your administration of Linux. But if packets are being lost near the end of the route, this could indicate a problem with the service connection.

This is what output from a traceroute command typically looks like (the hostnames and addresses here are placeholders):

traceroute to (, 30 hops max, 40 byte packets

1 ( 0.414 ms 0.428 ms 0.509 ms

2 ( 0.287 ms 0.324 ms 0.397 ms

3 ( 1.331 ms 1.402 ms 1.477 ms

4 ( 1.514 ms 1.497 ms 1.519 ms

5 ( 1.702 ms 1.731 ms 21.031 ms

6 ( 26.111 ms 23.582 ms 23.468 ms

7 ( 123.668 ms 47.228 ms 47.250 ms

8 ( 76.733 ms 73.582 ms 73.570 ms

9 ( 86.025 ms 86.151 ms 86.136 ms

10 ( 80.877 ms 76.212 ms 80.884 ms

The hostnames and IP addresses sitting before and after a failed hop can help you determine whose machine is involved in the routing error. Lines with three asterisks (* * *) indicate failed hops.

If you’re trying to fix network issues or someone like your ISP is looking into it for you then traceroute output can help track down the problem, and recording traceroute information can really help when the issue only happens infrequently.

The mtr Command

As with the traceroute tool, the mtr command is important in Linux System Administration. It can tell you about the route that internet traffic takes between the local system and a remote host. However, mtr also gives you extra information about the round-trip time for the packet, too. Think of mtr as a bit like a mixture of traceroute and ping.

An output from an mtr command might look like this (the hostnames are placeholders):

HOST:             Loss%   Snt     Last    Avg     Best    Wrst    StDev

  1.              0.0%    10      0.4     0.4     0.3     0.6     0.1
  2.            0.0%    10      0.3     0.4     0.3     0.7     0.1
  3.          0.0%    10      4.3     4.4     1.3     11.4    4.1
  4.         0.0%    10      64.9    11.7    1.5     64.9    21.2
  5.          0.0%    10      1.7     4.5     1.7     29.3    8.7
  6.         0.0%    10      23.1    35.9    22.6    95.2    27.6
  7.         0.0%    10      24.2    24.8    23.7    26.1    1.0
  8.         0.0%    10      27.0    27.3    23.9    37.9    4.2
  9.          0.0%    10      24.1    24.4    24.0    26.5    0.7

As with the ping command, mtr is great for Linux administration; it shows you connection quality in real time. Use CONTROL+C to stop it manually, or use the --report flag to make it stop automatically after 10 packets and produce a report, like this:

mtr --report

Don’t be surprised when it pauses while it’s producing the output. This is perfectly normal.

Linux System Diagnostics

If you’re having trouble with your system and it’s not related to networking or some other application problem, it might be useful to rule out hardware and issues at the operating system level. These tools can help you diagnose and fix such problems.

If you discover a problem with memory usage, you can use these tools and methods to find out exactly what’s causing it.

Check Level of Current Memory Use

Use this command:

free -m

Possible output should look like this:

            total       used       free     shared    buffers     cached

Mem:          1997        898       1104        105         34        699

-/+ buffers/cache:        216       1782

Swap:          255          0        255

Output like this will require some close reading to understand. It’s saying that the system is using 898 megabytes of memory (RAM) out of a total of 1997 megabytes, and 1104 megabytes are free. However, there’s also 699 megabytes of stale data buffered and held in the cache. The operating system will empty its caches if more space is required, but it will hold onto a cache if no other process wants the memory. A Linux system will usually leave old data sitting in RAM until it’s needed for something else, so don’t worry if it looks like there is very little free memory.

In the example above, the -/+ buffers/cache line shows 1782MB of memory actually available, which is what any additional process or application will have left to work with.

Use vmstat to Monitor I/O Usage

The vmstat tool tells you about memory, swap utilization, I/O wait, and system activity. It’s especially good for the diagnosis of I/O-type difficulties. Here’s an example:

vmstat 1 20

This runs a vmstat every second for twenty seconds, so it will pick up a sample of the current system state. Here’s how the output will typically look:

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----

 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa

 0  0      4  32652  47888 110824    0    0 0     2   15   15  0  0 100  0

 0  0      4  32644  47888 110896    0    0 0     4  106  123  0  0 100  0

 0  0      4  32644  47888 110912    0    0 0     0   70  112  0  0 100  0

 0  0      4  32644  47888 110912    0    0 0     0   92  121  0  0 100  0

 0  0      4  32644  47888 110912    0    0 0    36   97  136  0  0 100  0

 0  0      4  32644  47888 110912    0    0 0     0   96  119  0  0 100  0

 0  0      4  32892  47888 110912    0    0 0     4   96  125  0  0 100  0

 0  0      4  32892  47888 110912    0    0 0     0   70  105  0  0 100  0

 0  0      4  32892  47888 110912    0    0 0     0   97  119  0  0 100  0

 0  0      4  32892  47888 110912    0    0 0    32   95  135  0  0 100  0


The memory and swap columns give you the same kind of information as the “free -m” command, although in a format that’s a little more difficult to comprehend. The last column in most installations provides the most relevant information—the wa column. It shows how long the CPU spends idling while it waits for I/O operations to be completed.

If the number there is frequently a lot greater than 0, it points to an I/O usage issue; if your output looks like the example above, there’s no I/O contention to worry about.

Administration of Linux is sometimes hit with an intermittent issue, so run vmstat when it happens to let you diagnose it correctly, or at least discount the possibility of an I/O issue. Any support staff helping you will welcome vmstat output to help them diagnose problems.

Monitor Processes, Memory, and CPU Usage with htop

You can get a more ordered view of your system’s state in real time by using htop. You’ll have to add it to most systems yourself, and, depending on your distribution, you’ll use one of these commands to do so:

apt-get install htop

yum install htop

pacman -S htop

emerge sys-process/htop

To start it, type:

htop

Press the F10 or Q keys at any time when you want to quit. Some htop behaviors may seem hard to fathom to start with, so be aware of the following:

  • The memory utilization graph shows cached memory, used memory and buffered memory, while the numbers displayed at the end of it indicate the total amount that’s available and the total amount installed as reported by the kernel.
  • The htop default configuration shows all application threads as separate processes, which might not be obvious if you weren’t aware of it. If you prefer to disable this then select the “setup” option with F2, then “Display Options,” and then toggle “Hide userland threads”.

The F5 key lets you toggle a “Tree” view that arranges the processes in a hierarchy. This is handy because it lets you see which processes were spawned by other processes and it shows it in an organized way. This can help you diagnose an issue when it’s hard to tell one process from another.

File System Management

The FTP protocol has often been used by web developers and editors to manage and transfer files on a remote system. But the problem with FTP is that it’s very insecure and doesn’t offer a very efficient way of managing the files on your system when you have SSH access.

If you’re new to Linux systems administration you might want to use WinSCP instead, with rsync used to synchronize files using SSH and the terminal.

Uploading Files to a Remote Server

If you have used an FTP client before, OpenSSH’s file transfer mode will feel similar; it runs over the SSH protocol and is dubbed “SFTP”. Numerous clients support it, such as WinSCP for Windows, Cyberduck for Mac OS X, and Filezilla for Linux, OS X, and Windows desktops.

If you’re familiar with FTP, then you’ll be comfortable with SFTP. If you’ve got access to a file system at the command line then you’ll automatically have the same access over SFTP, so bear this in mind when you set up user access.

You can also use Unix utilities such as scp and rsync to securely transfer your files. A command to copy team-info.tar.gz from the local machine to a remote server would look like this (the hostname is a placeholder):

scp team-info.tar.gz

After the scp command comes the path of the file on the local file system that you want to transfer, followed by the username and hostname of the remote machine separated by an “@” symbol. Use a colon (:) after the hostname and then put the path on the remote server where the file will be uploaded to. Here’s a less specific example:

scp [/path/to/local/file] [remote-username]@[remote-hostname]:[/path/to/remote/file]

OS X and Linux machines make this command available by default. It’s useful for copying files between remote servers in Linux Administration. If you use SSH keys, you can use the scp command without needing a password for each transfer.

The syntax of scp follows the form scp [source] [destination]. If you want to do the reverse operation and copy files from a remote host to your local machine, simply swap destination and source.

Protecting Files on a Remote Server

As someone involved with Linux Administration, it’s important to maintain file security when you let a number of users have network access to your network-accessible servers.

Best practices for security include:

  • Only giving users the minimum permissions required for whatever tasks they need to complete.
  • Only running services on public interfaces that are in active use. A frequent source of security vulnerabilities comes from unused daemons that have been left running, and this holds equally true for database servers, HTTP development servers, and FTP servers, too.
  • When you can, use SSH connections to encrypt any sensitive information that you want to transfer.

Symbolic Links

Symbolic linking, often referred to as “symlinking”, lets you create objects in your file system that can point to other objects. This is useful in the Administration of Linux if you want to let users and applications access particular files and directories without having to reorganize all your folders. This approach lets users have restricted access to your web-accessible directories without moving your DocumentRoot to their home directories.

Type a command in the following format to set up a symbolic link:

ln -s /home/username/config-git/etc-hosts /etc/hosts

This creates a link of the file etc-hosts at the location of the system’s /etc/hosts file. More generically:

ln -s [/path/to/target/file] [/path/to/location/of/sym/link]

Here are some features of the link command to be aware of:

  • The location of the link, which is the last term, can be left out, and if you do that, then one with the same name as the file you’re linking to will be created in the current directory.
  • When specifying the link location, make sure the path doesn’t end with a slash. A symlink can target a directory, but the link location itself must not end with a slash.
  • If you take out a symbolic link this won’t affect the target file.
  • When you create a link, you can use relative or absolute paths.
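The bullet points above can be tried out in a scratch directory; every file name below is invented for the demo, and ln’s -f flag just replaces the link if you re-run it:

```shell
mkdir -p /tmp/link-demo
cd /tmp/link-demo
echo "original contents" > target.txt

ln -sf "$PWD/target.txt" pointer.txt  # absolute path to the target
readlink pointer.txt                  # shows where the link points
cat pointer.txt                       # reads through the link

rm pointer.txt                        # removes only the link...
cat target.txt                        # ...the target is untouched
```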

Managing Files on a Linux System

If you’re new to handling files via the terminal interface as part of your Linux system administration role, here’s a list of basic commands to help you.

To copy files:

cp /home/username/todo.txt /home/username/archive/todo.01.txt

This will copy todo.txt to an archive folder and append a number to the file name. If you want to recursively copy every file and subdirectory in one directory into another, use -R in the command like this:

cp -R /home/username/archive/ /srv/backup/username.01/

To move a file or directory:

mv /home/username/archive/ /srv/backup/username.02/

You can also rename a file using the mv command.

To delete a file:

rm scratch.txt

This deletes the scratch.txt file from the current directory.
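All three commands can be tried safely in a scratch directory before you touch real files; the directory and file names below are made up:

```shell
mkdir -p /tmp/files-demo/archive
cd /tmp/files-demo
echo "buy milk" > todo.txt

cp todo.txt archive/todo.01.txt   # copy with a new name
mv archive/todo.01.txt done.txt   # mv doubles as rename
rm todo.txt                       # delete the original
ls                                # archive  done.txt
```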

Package Management

Administration of Linux is made much easier by the package management tools that come with the majority of Linux systems. These make it simple to centrally install and maintain your system’s software. Installing your software manually makes it harder to manage dependencies and keep your system up to date. Package management tools help keep you on top of the majority of such tasks, so here are some basic package management tasks for use in Linux administration.

Track Down Packages Installed on Your System

Packages are easy to install and they often produce multiple dependencies that can be easy to lose sight of. These commands list all the packages installed on your system:

On Debian and Ubuntu systems:

dpkg -l

This example shows the first few lines of the output of this command on a production Debian Lenny system.

||/ Name                         Version                      Description


ii  adduser                      3.110                        add and remove users and groups

ii  apache2-mpm-itk              2.2.6-02-1+lenny2            multiuser MPM for Apache 2.2

ii  apache2-utils                2.2.9-10+lenny4              utility programs for webservers

ii  apache2.2-common             2.2.9-10+lenny4              Apache HTTP Server common files

ii  apt                              Advanced front-end for dpkg

ii  apt-utils                        APT utility programs

ii  bash                         3.2-4                        The GNU Bourne Again SHell

On CentOS and Fedora systems:

yum list installed

This example shows a few lines of the output from this command:

MAKEDEV.i386                 3.23-1.2                  installed

SysVinit.i386                2.86-15.el5               installed

CentOS and Fedora systems show the name of the package (SysVinit), the architecture it was compiled for (i386), and the build version installed on the system (2.86-15.el5).

For Arch Linux systems:

pacman -Q

This command pulls up a complete list of the packages installed on the system. Arch also lets you filter the results to show only the packages that were explicitly installed (with the -Qe option) or only those installed automatically as dependencies (with the -Qd option). The full list above is effectively the combined output of these two commands:

pacman -Qe

pacman -Qd

Here’s an example of the output:

perl-www-mechanize 1.60-

perl-yaml 0.70-1

pkgconfig 0.23-1

procmail 3.22-2

python 2.6.4-1

rsync 3.0.6-1

On Gentoo Linux systems:

emerge -evp --deep world

Here’s an example of this output:

These are the packages that would be merged, in order:

Calculating dependencies... done!

   [ebuild   R   ] sys-libs/ncurses-5.6-r2  USE="unicode -debug -doc -gpm -minimal -nocxx -profile -trace" 0 kB

   [ebuild   R   ] virtual/libintl-0  0 kB

   [ebuild   R   ] sys-libs/zlib-1.2.3-r1  0 kB

Because it’s usual for so many packages to be installed on most systems, these commands can produce quite a large output, so you can use tools like grep and less to narrow your results. For example:

dpkg -l | grep "python"

This will pull up a list of all packages where the name or description features the word “python.” You can also use less in a similar way:

dpkg -l | less

This gives you the same list as the basic dpkg -l, but the results will appear in the less pager, which will let you search and scroll more easily.

Adding | grep "[string]" to these commands will let you filter package list results, or with all distributions you can add | less to show the results in a pager.

Finding Package Names and Information

The name of the package isn’t always intuitive, because it doesn’t always look like the name of the software. That’s why many package management tools exist to help you search the package database. Such tools are great for finding a particular piece of software when you don’t know its name and they make Linux Administration a lot easier.

For Debian and Ubuntu systems:

apt-cache search [package-name]

This searches the local package database for a particular term and then produces a list with descriptions. Here’s some of the output of apt-cache search python:

txt2regex - A Regular Expression "wizard", all written with bash2 builtins

vim-nox - Vi IMproved - enhanced vi editor

vim-python - Vi IMproved - enhanced vi editor (transitional package)

vtk-examples - C++, Tcl and Python example programs/scripts for VTK

zope-plone3 - content management system based on zope and cmf

zorp - An advanced protocol analyzing firewall

groovy - Agile dynamic language for the Java Virtual Machine

python-django - A high-level Python Web framework

python-pygresql-dbg - PostgreSQL module for Python (debug extension)

python-samba - Python bindings that allow access to various aspects of Samba

Be aware that apt-cache search queries all the records relating to every package and not just the titles and the descriptions shown here, which is why vim-nox and groovy are included, as both mention python in their descriptions. To view the complete record on a package use:

apt-cache show [package-name]

This will tell you about the maintainer, the dependencies, the size, the upstream project’s homepage, and the software’s description.

On CentOS and Fedora systems:

yum search [package-name]

This creates a list of all the packages in the database matching the given term. Here’s what the output of yum search wget typically looks like:

Loaded plugins: fastestmirror

Loading mirror speeds from cached hostfile

 * addons:

 * base:

 * extras:

 * updates:

================================ Matched: wget =================================

wget.i386 : A utility for retrieving files using the HTTP or FTP protocols.

The package management tools can tell you more about any individual package. To pull its complete record from the package database, use this command:

yum info [package-name]

This output will give you more detailed information about the package, its purpose, origins and dependencies.

On Arch Linux systems:

pacman -Ss [package-name]

This will search the local package database. Here’s a snippet from the results that a search for “python” would bring up:

extra/twisted 8.2.0-1

    Asynchronous networking framework written in Python.

community/emacs-python-mode 5.1.0-1

    Python mode for Emacs

The terms “extra” and “community” tell you where the software is sitting. To ask for additional information regarding a particular package, your command should be set out like this:

pacman -Si [package-name]

If you run pacman with the -Si option, it will get the record for the package from the database that includes a brief description, package size and dependencies.

For Gentoo Linux systems:

emerge --search [package-name]

emerge --searchdoc [package-name]

The first command will just look for package names in the database. The second one will search for both names and descriptions. These commands will let you search your local package tree (i.e., portage) for a particular package name or term. The output of either command will look similar to the example below.


 [ Results for search key : wget ]

 [ Applications found : 4 ]

 *  app-emacs/emacs-wget

       Latest version available: 0.5.0

       Latest version installed: [ Not Installed ]

       Size of files: 36 kB


       Description:   Wget interface for Emacs

       License:       GPL-2

Since the output you’ll get from the emerge --search command is so long-winded, there isn’t a tool to show you more information, unlike in some of the other distributions. If you want to narrow your search results down even more, you can use regular expressions with the emerge --search command.

Package searches in Linux administration produce a lot of text, so tools like grep and less can be very useful for making the results easier to scroll through. For example:

apt-cache search python | grep "xml"

This will bring up all those packages that matched for the search term “python” and that also have “xml” somewhere in their name or description. In the same way:

apt-cache search python | less

This will give you the same list as the simple apt-cache search python but the results will be displayed in the less pager. This makes it easier to search and scroll.

If you add | grep "[string]" to these commands it will filter package search results, or you can use | less to show the results in the less pager. This works across all distributions.

Text Manipulation

On Linux and UNIX-like systems, the vast majority of system configuration information is held in plain text format, so next up are some basic Linux commands and tools for working with text files.

Search for a String in Files with grep

In Linux system administration the grep tool lets you search for a term or regex pattern within a stream of text, like a file or the output from a command.

Let’s look at how to use the grep tool:

grep "^Subject:.*HELP.*" /home/username/mbox

This searches for subject headers that begin with "Subject:", followed by any number of characters, then the word "HELP" in capital letters, then any number of further characters. It then shows the matching lines in the terminal.

The grep tool gives you some extra options. With -C 2, grep returns two lines of context around each match. With -n, grep prints the line number of each match. With -H, grep gives you the file name of each match, which is handy when you "grep" a group of files or when you "grep" recursively through a file system (using -r). Type grep --help for extra options.

To grep a collection of files, you can specify the file using a wildcard:

grep -i "jones" ~/org/*.txt

This will return every line where the word "jones" shows up. Case gets ignored because of the -i option. The grep tool will search all files in the ~/org/ directory that have a .txt extension.
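Here’s a self-contained version of that kind of search; the file names and contents below are invented for the demo:

```shell
# Create some sample .txt files to search through.
mkdir -p /tmp/grep-demo/org
printf 'Meeting with Jones\n' > /tmp/grep-demo/org/notes.txt
printf 'lunch with JONES\nno match here\n' > /tmp/grep-demo/org/diary.txt

# -i ignores case; the shell expands *.txt to every matching file,
# and grep prints each matching line with its file name.
grep -i "jones" /tmp/grep-demo/org/*.txt
```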

You can use it to filter the results from a different command that sends output to standard out (stdout). It manages this by “piping” the output of one command into grep. For example:

ls /home/username/data | grep "7521"

In this example, we assume that there are a lot of files with a UNIX timestamp in their file names in the /home/username/data directory. The command will filter the output so it only shows files with the digits “7521” in their file names. In these cases, grep only filters the output of ls and doesn’t check the contents of the file itself.
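A runnable sketch of that pipeline, with fabricated timestamp-style file names:

```shell
# Create files whose names contain timestamp-like digits.
mkdir -p /tmp/pipe-demo/data
touch /tmp/pipe-demo/data/log-1617521000.txt
touch /tmp/pipe-demo/data/log-1617522000.txt
touch /tmp/pipe-demo/data/log-1609999000.txt

# grep filters the names that ls prints; file contents are never read.
ls /tmp/pipe-demo/data | grep "7521"
```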

Search and Replace In a Group of Files

The sed tool, or Stream EDitor, can search for a regex pattern and replace it with another string. Use it alongside the grep tool, which is strong on filtering text with regular expressions but not suited to editing a file or otherwise manipulating text.

Do be warned that sed is powerful enough to do a lot of damage if you don’t know how to wield it safely, so we suggest that you make backups so you can test your sed commands in safety before you run them. Here’s a simple sed one-liner, to demonstrate its syntax:

sed -i 's/^good/BAD/' singularity.txt

This replaces any appearance of the word "good" at the beginning of a line (anchored by the ^) with the string "BAD" in the file singularity.txt. The -i option tells sed to make the replacements "in place." The sed command can produce backups of the files that it edits if you include a suffix after the -i option, as in -i.BAK. In the above example, it would back up the original file as singularity.txt.BAK before making changes.
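Here’s a runnable sketch of an in-place edit with a backup, against a throwaway file. It uses -i.BAK (with a dot) so the backup gets a .BAK extension; GNU sed appends the suffix directly to the file name:

```shell
# Create a sample file to edit.
printf 'good morning\nall good here\n' > /tmp/sed-demo.txt

# Replace "good" only at the start of a line, keeping a backup
# of the original file as /tmp/sed-demo.txt.BAK.
sed -i.BAK 's/^good/BAD/' /tmp/sed-demo.txt

cat /tmp/sed-demo.txt      # edited copy
cat /tmp/sed-demo.txt.BAK  # untouched original
```

Note that "good" on the second line is untouched, because it doesn’t sit at the start of the line.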

A sed statement is generally formatted to look like:

sed 's/[regex-pattern]/[replacement]/' [filename]
To match literal slashes (/), you must escape them by using a backslash (\), which is to say that if you want to match a / character you would need to use \/ in the sed expression. When searching for a string with a number of slashes, you can swap the delimiter for a different character instead. For example:

sed 's,r/e/g/e/x,regex,' [filename]

This would remove the slashes from the string r/e/g/e/x so that it would become regex after the sed command was run on the file that contains the string; the commas here act as delimiters in place of slashes.

This example searches for one IP address and replaces it with another. In this case, 97.22.58.33 is replaced with 87.65.33.31:

sed -i 's/97\.22\.58\.33/87\.65\.33\.31/'

Here, the period characters are escaped as \. because, in regular expressions, an unescaped period matches any single character.
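Run against a scratch file (the file name and its contents are invented for the demo), the escaped pattern replaces the literal address:

```shell
# A config-style file containing the old address.
printf 'server=97.22.58.33\n' > /tmp/ip-demo.conf

# Escaped dots match only literal periods, so only the exact
# address 97.22.58.33 is rewritten.
sed -i 's/97\.22\.58\.33/87\.65\.33\.31/' /tmp/ip-demo.conf
cat /tmp/ip-demo.conf
```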

Edit Text

You’ll often need to use a text editor to edit the contents of a file, and most distributions include the vi/vim and nano text editors. Both are small yet powerful tools that are at home manipulating text in the terminal environment.

Other options are available too, including emacs and zile. Use your operating system’s package manager to install these programs if you want. Be sure to search your package database in order to install a version that has been compiled without GUI components (i.e., X11).

To open a file, type a command that begins with the name of the editor you would like to run then the name of the file you want to edit. Here are some examples of commands that open the /etc/hosts file:

nano /etc/hosts

vi /etc/hosts

emacs /etc/hosts

zile /etc/hosts

Once you’ve edited a file, save and exit the editor to get back to the prompt. The exact procedure is a bit different in each editor. In emacs and zile it’s the same key sequence: press Ctrl+X then Ctrl+S to save, usually written as "C-x C-s", and then "C-x C-c" to close the editor. In nano, press Ctrl+O (written as ^O) and confirm the file name to write the file, then press Ctrl+X to exit.

For administration of Linux it helps to know that vi and vim are modal editors, and the way they work is a little more complicated. After you open a file in vi, press the "i" key to switch to insert mode, which will allow you to edit text in the usual way. To save the file, you need to go back into "normal" mode, so press the Escape key (Ctrl+[ also works), then type :wq to write the file and exit the program.

This is just a brief introduction to using these text editors in Linux system administration, but there are many resources available online that will help you go from beginner to expert.

Webservers and HTTP Issues

It’s best to install and configure your webserver in a way that best suits your application or website. Let’s go over a number of basic webserver tasks and functions and offer some advice for beginners.

Serve Websites

Webservers work by listening on a TCP port, usually port 80 for HTTP and port 443 for HTTPS. When a visitor requests content, the server responds by delivering it. Resources are usually specified with a URL that contains the protocol, http or https; a colon and two slashes, ://; a hostname or domain, such as www.example.com; and a file path, such as /images/avatar.jpg or index.html. A complete URL would look something like https://www.example.com/images/avatar.jpg.

To offer these resources to visitors, your system must be running a webserver. There are lots of different HTTP servers and endless configurations to support various web development frameworks. The three recommended webservers for general use are Apache HTTP server, Lighttpd, and Nginx. There are pluses and minuses for all of them, and the one you choose will largely depend on a combination of your needs and your experience.

Once you’ve decided which webserver to go for, you need to decide what (if any) scripting support you need to install. Scripting support lets your webserver serve dynamic content and run server-side scripts in languages like Python, PHP, Ruby, and Perl.

How to Choose a Webserver

Most visitors don’t know which webserver you use so the one you choose really comes down to your own requirements and preferences. This can make Linux system administration a challenge for anyone new to it, so let’s consider some of your choices.

The Apache HTTP Server is thought by many to be the ideal webserver. It’s the open-source option that’s used more than any other, its configuration interface has enjoyed many years of stability and its modular architecture suits all kinds of deployments. Apache is the basis of the LAMP stack, and it helps to integrate dynamic server-side apps into the webserver.

Webservers like Lighttpd and nginx are weighted more towards serving static content efficiently. If you’re dealing with high demand and limited server resources, one of these servers might be the better option. Lighttpd and nginx offer stability and functionality without straining system resources, but on the downside, they can be harder to configure when you want to integrate dynamic content interpreters.

So, choose your Webserver according to your needs, taking into account factors like the type of content you’ll be serving, how in-demand it will be, and how comfortable you are managing Linux system administration with that software.

Apache Logs

With Apache, webserver problems can be difficult to troubleshoot, but there are known common issues which will give you clues about where to start. When things get a little trickier in Linux administration, you might need to look through the Apache error logs.

These are located in the /var/log/apache2/error.log file by default (on Debian-based distributions). You can track or “tail” this log with this command:

tail -F /var/log/apache2/error.log

We suggest you add a custom log setting:

Configuring Apache Virtual Host

ErrorLog /var/www/html/
CustomLog /var/www/html/ combined

Here, the path is a stand-in for the name of your virtual host and the place where its resources are kept. Apache creates two log files with logged information relating to that virtual host, making administration of Linux easier as you troubleshoot errors on specific virtual hosts. To track or tail the error log:

tail -F /var/www/html/

This displays new error messages when they appear. You can take specific parts of an error message from an Apache log and do a web search to diagnose problems. Common ones include:

  • Missing files, or mistakes in file names
  • Permissions errors
  • Configuration errors
  • Dynamic code execution or interpretation errors

DNS Servers and Domain Names

DNS stands for Domain Name System, and it’s the service used by the Internet to link the difficult-to-remember chain of numbers in IP addresses with more memorable domain names. This section will look at some DNS-type tasks.

Redirect DNS Queries using CNAMEs

Using CNAME DNS records makes it possible to redirect requests for one hostname or domain to a different hostname or domain. This helps when you need to reroute requests for one domain to a different one, thus avoiding the need to set up a webserver to handle such requests.

CNAMEs only work in relation to redirecting from one domain to another. If you need to point a full URL somewhere else, you’ll have to set up a webserver and do some server-level redirection configuration and/or web hosting. CNAMEs let you redirect one subdomain, such as blog.example.com, to another, such as www.example.com. CNAMEs have to point to a valid domain with a valid A Record, or to another CNAME.
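In a BIND-style zone file, a CNAME sitting alongside the A Record it ultimately points to might look like this; the names and address below are placeholders, not values from this article:

```
; www has an A record; blog is an alias that resolves via www
www.example.com.    IN  A      203.0.113.10
blog.example.com.   IN  CNAME  www.example.com.
```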

Despite some limitations, CNAMEs can be occasionally quite helpful in the administration of Linux, particularly if you need to switch a machine’s hostname.

Setting Up Subdomains

A name that comes before a first-level domain indicates that it’s a subdomain. In team.example.com, team is a subdomain of the root domain example.com.

Follow these steps to create and host a sub-domain:

  1. First, create an A Record for the domain in the DNS zone. You can do this using the DNS Manager. You can host the DNS for your domain with the provider of your choice.
  2. Set up a server to respond to requests sent to this domain. For webservers like Apache, you’ll need to configure a new virtual host. For XMPP servers configure another host to accept the requests for this host. For more information, consult the Linux system administration documentation for the particular server you want to deploy.
  3. Configured subdomains work almost like root domains on your server. You can set up HTTP redirection for the new subdomain if you need to.
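For step 2 with Apache, the new virtual host for the subdomain might look something like the sketch below; the domain and paths are placeholders you’d replace with your own:

```apache
<VirtualHost *:80>
    ServerName team.example.com
    DocumentRoot /var/www/team.example.com/public_html
    ErrorLog /var/www/team.example.com/logs/error.log
    CustomLog /var/www/team.example.com/logs/access.log combined
</VirtualHost>
```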

SMTP Servers and Email Issues

In this section, we’ll be looking at setting up email to suit your requirements and configuring your system to send email.

Which Email Solution?

Email functionality with Linux administration hinges on two major components. The SMTP server or "Mail Transfer Agent" is the most significant one. The MTA, as it’s known, sends mail between servers. The second component handles getting mail from the server to the user’s own machine. These servers often use a protocol like POP3 or IMAP to provide remote access to the mailbox.

The email server tool chain can also feature other components, which you might have access to depending on your deployment. They include filtering and delivery tools such as procmail, anti-virus filters such as ClamAV, mailing list managers like MailMan, and spam filters like SpamAssassin. These components work independently of the MTA and remote mailbox server.

The most widely used SMTP servers or MTAs in the UNIX-like arena are Postfix, Exim, and Sendmail. Sendmail is the oldest, and lots of Linux administration professionals know it well. Postfix is modern and robust, and it slots into many different configurations. Exim is the standard MTA in Debian systems, and many feel that it’s easier to use for basic tasks. Servers like Courier and Dovecot are also popular for remote mailbox access.

If you’re looking for an email solution that is easy to install, you could take a look at Citadel groupware server. Citadel offers an integrated “turnkey” solution that comes with an SMTP server, remote mailbox access, real time collaboration tools including XMPP, and a shared calendar interface.

If you’re looking for a simpler and more modular email stack, it’s worth taking a look at the Postfix SMTP server.

Sending Email From Your Server

For simple configurations, you might not need a full email stack, but applications running on the server will still need to be able to send mail for notifications and to meet other day-to-day needs.

We can’t go into configuring applications to send notifications and alerts in this guide, but the majority of applications come with a simple "sendmail" interface, which several common tools provide, including Postfix and msmtp.

To install Postfix on Debian and Ubuntu systems:

apt-get install postfix

On CentOS and Fedora systems:

yum install postfix

When you’ve installed Postfix, your applications should be able to access the sendmail interface, which can be found at /usr/sbin/sendmail. The majority of applications running on your system should be capable of sending mail with this setup.

If you need to use your server to send email through an external SMTP server, you might want to think about a simpler tool like msmtp because it’s included in the majority of distributions, and it can be installed using the appropriate command:

apt-get install msmtp

yum install msmtp

pacman -S msmtp

Use type msmtp or which msmtp to find where msmtp is on your system (usually at /usr/bin/msmtp). You can set authentication credentials with command-line arguments or by declaring SMTP credentials in a configuration file.
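A minimal ~/.msmtprc sketch for relaying through an external SMTP server; every host name and credential below is a placeholder you’d replace with your provider’s details:

```
# Settings applied to all accounts
defaults
auth           on
tls            on

# A hypothetical external SMTP account
account        example
host           smtp.example.com
port           587
from           user@example.com
user           user@example.com
password       secret

# Make it the default account for the sendmail interface
account default : example
```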

Linux vs Unix – What’s the Difference?

Linux vs Unix

Most software developers under the age of 40 have never had much reason to think about Unix vs Linux. In recent years they will have mostly known Linux as the dominant operating system, particularly in the data center, where it might now be the OS of choice as much as 70% of the time (although this is an estimate; it’s hard to get a definitive figure), with Windows variants accounting for most of the other 30%. Developers who use any major public cloud can assume the target system will run Linux, and the explosion of Android and Linux-based systems in smartphones, TVs, automobiles, and other devices also points to its prevalence, so the question of Linux vs Unix appears decided.

Despite this, even those software developers who have only known the rise of Linux will have at least heard that Unix exists, even if it’s only on those occasions when they’ve heard Linux being described as a “Unix-like” operating system. Some may still wonder which is best; who would win in a Linux vs Unix contest? That’s a good question, but we really need to explore the history of these two contenders in order to answer it conclusively.

So, what’s Unix? The caricatures that swirl around the creation of this OS feature bearded men with elbow patches and sandals writing C code and shell scripts on green screens back in the 1970s. But Unix actually has a much richer history than such easy stereotypes would suggest, and we’ll attempt to lay out some of it, along with the differences between Linux and Unix, in this article.

The beginnings of Unix

A small team of programmers at AT&T Bell Labs in the late 1960s wanted to create a multi-user, multi-tasking operating system for a machine called the PDP-7, and two of the team’s most notable members were Ken Thompson and Dennis Ritchie. A lot of Unix’s concepts are recognizable from its predecessor, Multics, but Unix itself was rewritten in the C language by the team in the early 1970s. That’s what set Unix apart from all the rest.

Back then, it wasn’t common for operating systems to be portable. Written in low-level assembly language, an operating system was stuck with the hardware platform it had been written for. But because Unix was written in C, it could be ported to other hardware architectures.

As well as its portable nature—which assisted Unix’s quick adoption in other research, academic, and commercial settings—some of the operating system’s core design concepts made it attractive to programmers and users. Ken Thompson’s Unix philosophy was geared towards modular software design, the idea of which was that small, purpose-built programs could be combined to tackle large and complicated tasks. Because Unix had been designed around pipes and files, this approach to “piping” the inputs and outputs of programs together into a direct set of operations on the input is still popular today. In fact, the present cloud functions-as-a-service (FaaS)/serverless paradigm has its origins in the Unix way of thinking.

Quick growth and competition

Unix grew in popularity through the 1970s and 1980s, expanding into research, academia, and commercial business, but Unix wasn’t open source software. This meant that anyone wanting to use it needed to buy a licence from AT&T, which owned it. The University of Illinois bought the first known software licence in 1975.

Thanks to Ken Thompson’s sabbatical at the University of California, Berkeley, in the 1970s, a lot of Unix activity got underway there, resulting in the creation of the Berkeley Software Distribution, or BSD. At first, BSD wasn’t offering a competitor to AT&T’s Unix, just an add-on with some extra software and capabilities. When 2BSD (the Second Berkeley Software Distribution) came along in 1979, Bill Joy, a Berkeley graduate student, had made more programs available, like vi and the C shell (/bin/csh).

Along with BSD, which enjoyed enduring popularity as a member of the Unix family, commercial Unix offerings became prevalent throughout the 1980s and early 1990s thanks to names such as HP-UX, IBM’s AIX, Sun’s Solaris, Sequent, and Xenix. As different branches of the Unix family tree took shape, the “Unix wars” ensued, and the community was now focused on standardization. The results came in 1988 with the POSIX standard, and The Open Group added more follow-on standards in the 1990s.

This period saw Sun and AT&T release System V Release 4 (SVR4), which many commercial vendors were quick to pick up. The BSD group of operating systems had been busy growing too, resulting in various open source versions released under the BSD license, including FreeBSD, OpenBSD, and NetBSD. Many of these variants of Unix still get used today, although a lot have seen their share of the server market decline to single digits. BSD might currently have more installations than any modern Unix system; BSD code can also be found in every Apple Mac sold in recent years, as the OS X (now macOS) operating system is derived from BSD.

We could say a lot more about Unix’s history but that’s beyond the scope (and the length) of this piece. We are more interested in talking about Unix vs Linux, so let’s look at how Linux got started.

Linux Appears

The Linux operating system is the descendant of two projects that began in the 1980s and early 1990s. Richard Stallman wanted to build a free and open source alternative to Unix. He called the project GNU, a recursive acronym that meant "GNU’s Not Unix!" A GNU kernel project got going, but progress was difficult, and with no kernel, any hopes of a free and open source operating system would be in vain. But then came Linus Torvalds with a feasible kernel named Linux that completed the project. Linus used a number of GNU tools (like the GNU Compiler Collection, or GCC), and they proved to be a perfect match for the Linux kernel.

Linux distributions appeared, combining GNU components, the Linux kernel, MIT’s X Window System GUI, and other components permitted under the open source BSD license. Slackware and then Red Hat distributions were popular because they enabled the typical 1990s PC user to run the Linux operating system. For many this complemented the proprietary Unix systems they used in their working or academic lives, so out of Linux and Unix, Linux offered clear appeal.

Because of the free and open source nature of Linux components, anyone is allowed to create a Linux distribution, and soon there were hundreds of distros; one distribution-tracking site currently lists 312 unique Linux distributions. Naturally, numerous developers make use of Linux either via popular free distributions like Fedora, Canonical’s Ubuntu, Debian, Arch Linux, Gentoo, and many other variants, or through cloud providers. Commercial Linux offerings, which provide support in addition to the free and open source components, achieved viability when numerous enterprises, IBM among them, moved away from Unix and its proprietary model to supplying middleware and software solutions for Linux. Red Hat Enterprise Linux was built on a model of commercial support, and the German provider SUSE followed suit with SUSE Linux Enterprise Server (SLES).

Unix vs Linux

Up to now, we’ve had a brief overview of the history of Linux and Unix and the GNU/Free Software Foundation underpinnings of a free and open source alternative to Unix. Let’s take a look at what’s different about Linux vs Unix, two operating systems that share similar histories and aspirations.

There aren’t many obvious differences between Linux and Unix from the user’s point of view. A lot of Linux’s appeal came from the fact that it worked on many architecture types (the modern PC included) and that its tools were familiar to Unix users and system administrators.

Compliance with POSIX standards made it possible to compile software written on Unix for a Linux operating system without too much difficulty. In a lot of cases, shell scripts could be used directly on Linux. Some tools in Unix and Linux may have had slight differences in flag/command-line options, but many worked in the same way on either system.

It’s worth noting here that macOS hardware and its operating system became popular as a Linux development platform because a lot of Linux tools and scripts also work in the macOS terminal, and tools like Homebrew make a lot of open source software components available there.

The other differences between Linux and Unix mostly relate to licensing: Linux vs Unix is largely a contest of free vs licensed software. Alongside this, the fact that Unix distributions lack a common kernel affects software and hardware vendors too. With Linux, a vendor can create a device driver for a particular hardware device with the reasonable expectation that it will work fine across the majority of distributions. But with Unix having commercial and academic branches to cater to, it might be necessary to release a different driver for each Unix variant. There were also licensing problems, and other worries related to access to an SDK or a distribution model for the software as a binary device driver across multiple versions of Unix. Linux and Unix are clearly different.

Many of the advances seen in Linux have been mirrored by Unix, which shows that Linux and Unix developers keep a close eye on each other. A lot of GNU utilities became available as add-ons for Unix systems on occasions when developers wanted features from GNU programs that were not part of Unix. For instance, IBM’s AIX had AIX Toolbox for Linux Applications which contained hundreds of GNU software packages (like Bash, GCC, OpenLDAP, among numerous others) that could be added to an AIX installation to make transitioning between Linux and Unix-based AIX systems go more smoothly.

Proprietary Unix still exists, and there are many major vendors who promise to support their current releases for several more years yet, so Unix won’t be disappearing from sight any time soon. Also, the BSD branch of Unix is open source, and NetBSD, OpenBSD, and FreeBSD all boast strong user bases and open source communities. They might not be as vocal as their Linux equivalents, but their numbers continue to outstrip those of proprietary Unix in areas like the web server arena.

Linux has become ubiquitous across a multitude of hardware platforms and devices. Linux drives the Raspberry Pi, which has become hugely popular among enthusiasts. It’s a platform that has ushered in a whole array of IoT devices running Linux. We’ve already mentioned Linux’s prevalence in Android devices, cars, and smart TVs. Every cloud service features virtual servers running Linux, and a lot of the most popular cloud-native stacks are based on Linux, whether that means container runtimes or Kubernetes or the sundry serverless platforms that are coming to the fore.

Finally, it’s interesting to note that Microsoft’s creation of the Windows Subsystem for Linux (WSL), along with the Windows port of Docker, including LCOW (Linux containers on Windows) support, would have been unthinkable back in 2016. They point clearly to the fact that Linux vs Unix is a contest that’s been pretty much decided.

Easy Steps to List All Open Linux Ports

Open Linux Ports

If you want to know how to list all of the open ports on a Linux instance, you’ve come to the right place. But what is a port, and why would you want a list of all the open ones?

In short, a port is an access point that an operating system makes available so that it can facilitate network traffic with other devices or servers, while also differentiating the traffic in order to understand what service or app the traffic is being sent to.

There are two common protocols when it comes to ports: TCP, or the transmission control protocol; and of course, UDP – the user datagram protocol. Each of these protocols has a range of port numbers, which is commonly classified into three groups:

Linux System Ports

Also known as “well-known” ports. These are port numbers from 0 to 1023, reserved for typical system use; they are considered quite critical for keeping standard communications services running.

Linux User Ports

Also known as “registered ports”, these range from 1024 to 49151. You can send a request to the Internet Assigned Numbers Authority (IANA) asking it to reserve one of these ports for your application.

Linux Private Ports

Also known as “dynamic ports”, these range from 49152 to 65535. These ports are open for whatever private use case you deem necessary, and so are dynamic in nature – they are not fixed to specific applications.
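
Since the three groups are just numeric ranges, classifying any port number is a simple comparison. Here’s a minimal shell sketch (the port value is an arbitrary example):

```shell
#!/bin/sh
# Classify a port number into the three groups described above.
port=8080   # arbitrary example value

if [ "$port" -le 1023 ]; then
    echo "system (well-known) port"
elif [ "$port" -le 49151 ]; then
    echo "user (registered) port"
else
    echo "private (dynamic) port"
fi
# → prints "user (registered) port" for 8080
```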

Now, even though many ports have specific uses, it is important to keep an eye on ports that are open without needing to be. Ports that are unnecessarily left open can be a security risk – and also a sign that an intrusion is actively occurring.

Understanding which ports are open and “listening” for communications is therefore absolutely crucial to ensuring that you block efforts to break into your systems. Of course, some common ports need to be left open in order to facilitate ordinary internet communications. For example:

  • FTP (the file transfer protocol) uses port 20 for data transfers
  • Likewise, FTP uses port 21 to issue commands and to control the FTP session
  • Port 22 is dedicated to SSH, or secure shell login
  • Telnet uses port 23 to facilitate remote logins but this port entails unencrypted messaging which is not secure so it’s not really recommended for use
  • E-mail routing via SMTP (the simple mail transfer protocol) is achieved on port 25
  • Port 43 is dedicated to the WHOIS system which can check who owns a domain
  • The domain name service (DNS) makes use of port 53
  • DHCP uses port 67 as the server port, and port 68 as the client port
  • HTTP, the hypertext transfer protocol, uses port 80 to deliver web pages
  • POP3, the e-mail centric “post office protocol” uses port 110
  • Port 119 is used by the news transfer protocol, NNTP
  • The network time protocol, NTP, uses port 123
  • IMAP, another email protocol, makes use of port 143 to retrieve email messages
  • SNMP, the simple network management protocol, uses port 161
  • Port 194 is dedicated to IRC, the internet relay chat app
  • Port 443 is dedicated to HTTPS, the secure version of HTTP delivered over TLS/SSL
  • SMTP, the simple mail transfer protocol, uses port 587 to submit emails

It is often possible to configure a specific service to use a port which is not the standard port, but this configuration needs to be made on both the sender and recipient side – in other words, on both client and server. If only one side uses the non-standard port, communication won’t be possible.
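
As an illustration of what “both sides” means in practice (SSH and port 2222 are hypothetical choices here, not a recommendation), consider a server config fragment plus the matching client invocation:

```shell
# Server side - /etc/ssh/sshd_config fragment (hypothetical non-standard port):
#     Port 2222
#
# Client side - the client must name the same port explicitly; otherwise it
# tries the default port 22 and the connection fails:
ssh -p 2222 admin@example.com
```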

How do you get a simple list of common ports that are open? Use this command:

$ cat /etc/services

Alternatively, you can page through the list by piping the output to less:

$ cat /etc/services | less
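
If you’re after one specific entry rather than the whole list, you can filter the file instead – a small sketch using grep and awk (ssh and port 22 are just example search terms):

```shell
# Show the /etc/services lines that mention a service by name:
grep -w "ssh" /etc/services

# Or map a port number back to its registered service name:
awk '$2 == "22/tcp" {print $1}' /etc/services
```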

However, you can use a range of other commands on a Linux machine which will give you all the TCP and UDP ports which are open and ready to receive communication from other machines. We will cover three in the following sections – netstat, nmap and lsof.

The netstat or network statistics command

Most Linux distributions include netstat by default. It’s a really capable tool which can display all the TCP/UDP network connections that are active – both incoming and outgoing. It also displays routing tables and per-interface statistics, alongside comprehensive statistics for network protocols.

So, you can use netstat to troubleshoot and to measure the performance of your network. While basic, it is a useful and essential tool for finding faults in network services. It clearly tells you which ports are open, and where a program or service is listening on a specific port. We will now give you some examples of how to make use of netstat.

Retrieving a list of all TCP and UDP ports which are currently listening

It’s simple really: just use the -a flag and pipe the output to less. This will give you the TCP and UDP ports which are currently listening

$ netstat -a | less

To list all the connections that are listening

Make use of the -l flag in the netstat command to get a list of every port that is actively listening

$ netstat -l

Display ports that are open, alongside current TCP connections

Here, we combine a couple of flags in order to show a list of ports which are open and the established (TCP) connections.

$ netstat -vatn

A list of open UDP ports

You might only want to see the UDP ports which are open, excluding the open TCP ports. The command you need is this:

$ netstat -vaun

Get a list of your Linux services which are listening on TCP and UDP, a list of the open ports on your machine which are free, alongside the name and the PID of the service or program

This command gives you all the services and apps which listen on either TCP or UDP. It also gives you the open ports on your Linux instance which are free, plus the program name and process ID that is associated with every open socket.

$ netstat -tnlup

So you can see how the different commands you can use with netstat make it very versatile, allowing you to see what the status quo is on your Linux machine. But what exactly do these individual flags mean? It’s simple really:

  • -a will show all sockets that are listening and all non-listening sockets too
  • -l only shows ports which are actively listening
  • -v means “verbose” and tells netstat to include additional information about any address families that are not currently configured
  • -t restricts the listing to TCP connections only
  • -u restricts the listing to UDP connections only
  • -n tells netstat to display the numerical addresses too
  • -p adds the process ID (PID) as well as the name of the program

Keep in mind that the seven flags we’ve shown above are just a few of the many flags you can specify for netstat. Check out the manual page by running

$ man netstat

You’ll get a full listing of all the options and features you can make use of with netstat.

nmap – the Network Mapper command

An open source tool, nmap is great for exploring your network, scanning it for security vulnerabilities, and auditing it. That said, new users might find nmap challenging to use because it is so feature-rich: it comes with so many options that it can be difficult to figure out at first, even though that breadth is what makes it such a robust tool.

It’s worth remembering that nmap will deliver very extensive information about the network that it is scanning. So do not use nmap on a network unless you have permission to examine it. In other words, you need a reason to scan, and the permission of the network owner.

We will now give you a basic overview of nmap, including typical usage of the nmap command. To start off with, here are the instructions you need to install nmap if you have an Ubuntu or Debian server:

$ sudo apt-get install nmap

The command is slightly different if you’re using RHEL or CentOS:

$ sudo yum install nmap

There’s a file you can view for a wider picture of ports and services. Use this command:

$ less /usr/share/nmap/nmap-services

It’s an example of exactly how extensive the details are when you use nmap as a tool. If you want to experiment with nmap you could try to check out your own virtual private server, but you could also give nmap a go on the official nmap test server – located at

In order to try out some basic nmap commands we will make use of sudo privileges to ensure that the queries give complete results – not partial results. Remember, some nmap commands will take a little bit longer to execute.

Throughout these examples we will make use of as the example domain; substitute your own domain when you run these commands.

Scanning for open ports on a domain

$ sudo nmap -vv

Here you can see we have used the -vv flag, which has a specific function. When you use -vv it means “verbose”, in other words it will show you extensive output, including the process as nmap scans for open ports. Leave out the -vv flag and you will quickly see the difference.

List of ports that are listening for connections via TCP

$ sudo nmap -sT

You’ll note the -sT flag; this is usually what you’d specify to scan for TCP connections when a SYN scan cannot be performed.

List of ports that are listening for connections via UDP

$ sudo nmap -sU

So, -sU is what you use to get a UDP scan. Note that -sS on its own is a TCP SYN scan, not a combined one; to cover both UDP and TCP in a single run, combine the flags (for example -sS -sU). You’ll then get a list covering both UDP and TCP.

Look at a specific port (instead of all ports)

$ sudo nmap -p port_number

In this case, -p means that you only look at the port number specified in place of “port_number”.

Scan every open port on both TCP and UDP

$ sudo nmap -n -Pn -sT -sU -p-

Several flags work together here: -n tells nmap not to perform reverse DNS resolution on any active IP address it finds, and -Pn disables host discovery (pinging), treating all of the hosts as if they are online. The -sT and -sU flags request TCP and UDP scans respectively, and -p- widens the scan to the full port range (1 to 65535).

These are just a few examples, but nmap is a fantastic tool that can help you a lot. Remember, typing man nmap will give you a full list of the options at your disposal; many of these are very useful for exploring the security of your network and finding potentially vulnerable points.

The lsof (List Open Files) command

It’s easy to remember what lsof means: read ls as “list” and of as “open files”, and you get “list open files”. And because Linux treats sockets as files, listing open files also reveals network connections.

Listing all active network connections

Use the -i flag with lsof in order to get a full list of every network connection which is both listening and established.

$ sudo lsof -i

Find a process that is using a specified port

As an example, for all processes which are currently operating on port 22, you’ll run this command:

$ sudo lsof -i TCP:22

Get a list of all the UDP and TCP connections

To list every single UDP and TCP connection just use this command:

$ sudo lsof -i tcp; sudo lsof -i udp;

Just like with nmap, you can check the manual for lsof in order to get a full view of all the options you have when you are using lsof.

So, to wrap up, Linux fans must understand at least a little bit about ports – particularly if they plan on managing Linux servers. We’ve given three examples of great tools – nmap, lsof and netstat – which will help you on the way to understanding which ports are open on your machine, and which services are active on your server.

We suggest that you take a look at the man pages for each of these commands so that you can get a better idea of what they do. While these tools are great for checking the exposure on your own network, never abuse any of these tools by scanning networks that do not belong to you.

Basics of Linux Vi Editor

Vi Editor For Linux

Linux Vi editor is a powerful and versatile text editor. It does have something of a learning curve that you might find initially disconcerting, but in time you will find that it gets under your skin, so to speak, and you find yourself using the working methods it gives you elsewhere. Vi will definitely repay your time and attention. It’s a very powerful tool with many features, and we can’t hope to cover everything that it can do here, so here’s an overview.

Vi, Command Line Editor

Vi is a command line text editor. The command line behaves differently to your GUI. It’s a single window that only has text input and output. Vi was created to work within these constraints, and many would say that that’s actually been instrumental in making it so powerful. It’s a plain text editor that resembles Notepad on Windows, or Textedit on Mac. That means that it doesn’t have quite the same amount of word processing horsepower as programs like Word or Pages. It is more capable than Notepad or Textedit though.

You can send your mouse on holiday because everything in Vi is done using the keyboard.

Insert (or Input) mode and Edit mode are Vi’s two modes. Input mode lets you enter content into a file. Edit lets you delete, copy, search and replace. Users often mistakenly start typing commands without first going back into edit mode, or they start typing in text without going into insert mode. It’s fairly easy to put these mistakes right though.

Here’s what it looks like when you want to open a file:

vi <file>

If you forget to name the file you want to open then the easiest thing to do is just close vi down and try again. When you specify the file, remember it can be with either an absolute or relative path. Okay, time to start typing! Let’s go into the directory where your files are kept and edit our first file.

vi myfile

This command opens up the file. If the file doesn’t exist yet, then it will create it for you and open it up. Once you enter vi, the display will vary a little according to what system you’re running it on.

Edit mode is always the default mode to start with, so change to insert mode by pressing i. Any time you want to know what mode you’re in currently, just look in the bottom left corner. Now add a few lines of text and press Esc which will return you to edit mode.

Vi Editor – Save and Exit

There are a few ways to do this which all achieve the same thing, so choose the one that suits you best (and don’t forget to ensure that you’re in edit mode first).

If you are unsure if you are in edit mode or not you can look at the bottom left corner. It will always tell you. If it doesn’t say INSERT then you are good to go. Or you can just press Esc to be certain. If you are already in edit mode, pressing Esc doesn’t do anything so you won’t be doing any harm.

  • ZZ (Note: capitals) – Save and exit
  • :q! – Get rid of all changes since the last save, and then exit
  • :w – save file but don’t exit
  • :wq – once more, save and exit

Most commands in vi are performed as soon as you hit a sequence of keys. If a command starts with a colon ( : ) then you need to tap <enter> to complete it. Save and exit the file you have open right now.

Other ways to look at files

Vi lets us edit files, but we can view them as well. The first of two convenient commands that help us do that is cat (short for concatenate), which joins files together but can also be used simply to view a file.

cat <file>

If you use cat, giving it a single command line argument which is the file we just created, you will see the contents of that file shown on screen, and then the prompt.

If you mistakenly run cat without giving it a command line argument, then you’ll see that the cursor advances to the next line and nothing happens. As we didn’t specify a file, cat reads from something called STDIN instead which defaults to the keyboard. If you type something, followed by <enter> you will see cat mirror your input to the screen. To get out of this you can press <Ctrl> + c which is the universal combination for Cancel in Linux.
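
Both behaviours are easy to try side by side with a throwaway file (the /tmp path is just an example):

```shell
# With a file argument, cat prints the file's contents:
printf 'first line\nsecond line\n' > /tmp/demo.txt
cat /tmp/demo.txt

# With no file argument, cat reads STDIN and mirrors it back; piping text
# in shows the same echo effect as typing at the keyboard:
echo 'hello from STDIN' | cat
# → prints "hello from STDIN"
```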

In fact, whenever you get stuck, you can generally press <Ctrl> + c to get yourself out of trouble.

This works fine for viewing small files, but with big ones it will be hard to see all the content as it zips across the screen – the only viewable part will be the last bit. For those files we use a command called less.

less <file>

It’s great because it lets you use the arrow keys to move up and down within a file, the SpaceBar to jump forward a page, and b to jump back a page. Press q to quit when you’re finished.

Have a look at the file you just created now using both these commands.

Navigating a file in Vi Editor

Now we can go back into the file that we just created and add some additional content. Enter insert mode and use the arrow keys to move the cursor around. Add a couple of paragraphs or more of content, then tap Esc to return to edit mode.

Here’s a list of some of the many commands you can enter to move around the file. It’s worth spending some time trying them out to see how they work.

  • Arrow keys – move the cursor around
  • j, k, h, l – move the cursor down, up, left and right (like the arrow keys)
  • ^ (caret) – move cursor to start of current line
  • $ – move cursor to end of the current line
  • nG – move to the nth line (eg 5G moves to 5th line)
  • G – move to the last line
  • w – move to the beginning of the next word
  • nw – move forward n words (eg 3w moves three words forward)
  • b – move to the start of the previous word
  • nb – move back n words
  • { – move back one paragraph
  • } – move forward one paragraph

If you type :set nu in edit mode in vi it will enable line numbers. You might find that doing this makes it much easier to work with files.

Deleting content in Vi Editor

Deleting is not too different from movement, and there are a few delete commands which let us use a movement command to define what is going to be deleted.

Here are a few of the ways that you can delete content in vi. You can test a few out now. (If you want to undo anything you’ve done then check out the undo section further down.)

  • x – delete one character
  • nx – delete n characters (eg 4x deletes four characters)
  • dd – delete the present line
  • d<movement> – d followed by a movement command deletes to where the movement command would have taken you (eg d6w deletes 6 words)

Undoing the changes

Undoing the things you change in vi is relatively simple.

  • u – Undo the last action (keep hitting u to carry on undoing)
  • U (Note: capital) – Undo all changes to the current line


We can now insert content into a file, move around the file, remove content, undo changes, then save and exit. You can now do basic editing in vi. This is just the start of what vi can do, though. You can find a lot more by searching for vi online. There are lots of vi cheat sheets available, so you can take your time learning all of the various commands and concepts. Enjoy your explorations within the interesting world of the Linux vi editor. There’s no doubt that it will be a difficult journey to start with, but as you get used to it, you’ll wonder what you ever did without it.

The Admin Benefits You’re Getting with Plesk Server Control Panel

Admin Benefits of the Plesk Server Control Panel - Plesk

Web experts design hosting server control panels to help any kind of user, no matter how technically skilled. Why? Because their goal is to help properly set up and manage websites. Instead of having to type complicated commands, users get a user-friendly GUI that performs those actions for them. At the moment, the Plesk server panel is one of the most popular web hosting control panels in the world.

It has an intuitive and clear interface that everyone can find their way around. That’s especially true if the user has WordPress experience, since the Plesk interface takes a WordPress-like approach to usability. Keep scrolling for details of Plesk’s core features.

Don’t have Plesk yet?

Get a Plesk Quote Try Plesk for Free

1. Easily Customizable Plesk Server Panel

Easily Customizable Plesk Server Panel - Admin Benefits of Plesk - Plesk

So why is Plesk so intuitive and simple? Because it provides all the essential tools you need to get started the right way, and to manage your website’s whole lifecycle. With Plesk Onyx, the latest release, you can use category pages for easy navigation while looking for the right tools.

Apart from Onyx basics, you can even customize your Server Panel with various extensions, split into categories within the menu. You’ll find the most popular ones on the main page, and more extensions you may need in any of the categories. You can narrow your search if you like and quickly add extensions in just a few clicks.

Also, you don’t have to pay for extensions you don’t need, because Plesk designed its interface so that you only add what you use. This is how the overview is kept clean and simple within the server panel. Plesk Onyx and later versions also provide better support with new extensions such as Git, Node.js, Ruby, and Docker.

2. High Level Of Compatibility

High Level Of Compatibility - Admin Benefits of Plesk - Plesk

The Plesk server panel supports many different operating systems, platforms and technologies. This multiplies its strength and contributes to the fact that most Windows Server hosting installations use the Plesk control panel, since cPanel and others don’t support Windows.

Still, the Plesk server panel isn’t limited to Windows servers only – it supports many different Linux versions too. Plesk also works with lots of different tools and platforms, like the out-of-the-box WordPress Toolkit extension, which comes ready to use with most Plesk Onyx editions.

3. Variety of Admin Tools

Variety of Admin Tools Available - Admin Benefits of Plesk - Plesk

Administrators’ tools and extensions also include Magento, Patchman, CloudFlare CDN, and Let’s Encrypt. Compatibility with various OSs, tools, apps, and platforms allows admins to run their sites the way they see fit, rather than limiting themselves to options they find useless or unappealing.

Give users the ability to find what they need and add it to their control panel. You’ll ensure a clutter-free environment that’s easy to use and navigate.

4. Automation and Easy Management

Automation and simplified setup procedures are among the core benefits that Plesk brings to the table, because server admins get to reduce the time and effort spent on routine tasks.

You can install Plesk on Windows easily as it has a very intuitive GUI. It’s also easy to set up on Linux because it only requires one command to install with default settings. To set up a website using Plesk Onyx, you’ll have to go to the Domains page. First, simply click the domain name. Then choose Files > Databases > Install Apps > Install WordPress to make a brand-new website.

If you use a CMS like Drupal, Joomla or WordPress, you can create, secure and launch a site in minutes. Just drag and drop to add new content or features without having to insert a single line of code. Most extensions have one-click installation so you can set them up instantly.

You can automate server tasks by choosing Scheduled Tasks on the Tools and Settings page. Here you can schedule commands or PHP scripts too.

Moreover, you get extensions like Perfect Dashboard that give you more task automation power. For example, one-click updates for all websites on one account and automated backup integrity verifications. Or engine tests to show if any layout changes have cost you broken SEO tags, social tags, or display errors.

Who can start on Plesk Server Panel?

Various end-user groups with any level of experience can easily use the Plesk server panel, because it has a clean and user-friendly GUI, huge compatibility potential and a large extension ecosystem. The latest Onyx release takes a similar approach, giving more capabilities, including new tools and multi-server abilities.

Although there are administrators who still prefer working on the CLI, Plesk may still save them time. And beginners get to learn quickly, without needing third-party support services.


Linux Logs Explained

Linux Logs Explained - Plesk

Linux logs give you a visual history of everything that’s been happening in the heart of a Linux operating system. So, if anything goes wrong, they give a useful overview of events in order to help you, the administrator, seek out the culprits.

For problems relating to particular apps, the developer decides where best to put the log of events. So with Google Chrome for instance, any time it hangs, you want to look in ‘~/.chrome/Crash Reports’ to discover the gory details of what tripped the system up.

Linux log files should be easy to decipher since they’re stored in text form under the /var/log directory and its subdirectories. They cover all kinds of things, like the system, the kernel, package managers, MySQL and more. But here, we’ll focus on system logs.

To access the system directory of a Linux or UNIX-style operating system you will need to tap in the cd command.

How can I check Linux logs?

You can look at Linux logs using the cd /var/log command. Type ls to bring up the logs in this directory. Syslog is one of the main ones that you want to be looking at because it keeps track of virtually everything, except auth-related messages.

You can also view /var/log/syslog directly to scrutinise anything logged there. But picking out one particular entry will take some time, because it’s usually a pretty big file to wade through. When viewing it with less, pressing Shift+G will take you all the way to the end, and you’ll know you’re there because you will see the word “END.”
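
Rather than paging through the whole file, you can also filter syslog for the one thing you’re looking for – a sketch assuming a standard /var/log/syslog, with cron as an arbitrary search term:

```shell
# Keep only the syslog lines that mention cron, trimmed to the last five:
grep "cron" /var/log/syslog | tail -n 5

# Case-insensitive variant of the same filter:
grep -i "cron" /var/log/syslog | tail -n 5
```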

You can also check logs using dmesg, which prints the kernel ring buffer. You can then use the dmesg | less command to scroll through everything it has produced. If you’d like to see log entries relating to the user facility, use dmesg --facility=user.

Finally, there’s a super-handy command called tail, which lets you look over log files. It’s so useful because it displays just the last part of a log – which is often where you’ll find the source of the problem. Use tail /var/log/syslog or tail -f /var/log/syslog. With -f, tail keeps a close eye on the log file and displays every line written to it, which lets you check what’s being added to syslog in real time.

For a particular group of lines (say, the last five) type in tail -f -n 5 /var/log/syslog, and you’ll be able to see them. Use Ctrl+C to turn off the tail command.
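
The -n trimming works on any file, not just syslog, so you can see the effect with a tiny self-contained sample:

```shell
# Build a four-line sample "log", then show only its last two lines:
printf 'line1\nline2\nline3\nline4\n' > /tmp/sample.log
tail -n 2 /tmp/sample.log
# → prints:
# line3
# line4
```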

Most Valuable Linux Logs Players

Most Linux log files can be grouped under four headings:

  • Application Logs
  • Event Logs
  • Service Logs
  • System Logs

Checking each log manually is an enormous task, which is why developers rely on log-monitoring tools like Retrace, which put APM and log management right at your fingertips. You have plenty of choice over what you want to monitor, but there’s little doubt that scrutinising the following should be considered essential.

What’s in these Linux Logs?

  • /var/log/syslog or /var/log/messages:
    Shows general messages and info regarding the system; basically a data log of all activity throughout the global system. Note that on Red Hat-based systems, like CentOS or RHEL, these messages go in messages, whereas on Ubuntu and other Debian-based systems they go in syslog.
  • /var/log/auth.log or /var/log/secure:
    Keeps authentication logs for both successful and failed logins, and authentication processes. Storage depends on the system type: for Debian/Ubuntu, look in /var/log/auth.log; for Red Hat/CentOS, go to /var/log/secure.
  • /var/log/boot.log: start-up messages and boot info.
  • /var/log/maillog or var/log/mail.log: is for mail server logs, handy for postfix, smtpd, or email-related services info running on your server.
  • /var/log/kern: keeps kernel logs and warning info. Also useful for fixing problems with custom kernels.
  • /var/log/dmesg: a repository for device driver messages. Use dmesg to see messages in this file.
  • /var/log/faillog: records info on failed logins. Hence, handy for examining potential security breaches like login credential hacks and brute-force attacks.
  • /var/log/cron: keeps a record of Crond-related messages (cron jobs). Like when the cron daemon started a job.
  • /var/log/daemon.log: keeps track of running background services but doesn’t represent them graphically.
  • /var/log/btmp: keeps a note of all failed login attempts.
  • /var/log/utmp: current login state by user.
  • /var/log/wtmp: record of each login/logout.
  • /var/log/lastlog: holds every user’s last login. A binary file you can read via lastlog command.
  • /var/log/yum.log: holds data on any package installations that used the yum command. So you can check if all went well.
  • /var/log/httpd/: a directory containing error_log and access_log files of the Apache httpd daemon. Every error that httpd comes across is kept in the error_log file. Think of memory problems and other system-related errors. access_log logs all requests which come in via HTTP.
  • /var/log/mysqld.log or /var/log/mysql.log: MySQL log file that records every debug, failure and success message, including the starting, stopping and restarting of the MySQL daemon mysqld. The location depends on the system: RedHat, CentOS, Fedora, and other RedHat-based systems use /var/log/mariadb/mariadb.log, while Debian/Ubuntu use /var/log/mysql/error.log.
  • /var/log/pureftp.log: monitors for FTP connections using the pureftp process. Find data on every connection, FTP login, and authentication failure here.
  • /var/log/spooler: Usually contains nothing, except rare messages from USENET.
  • /var/log/xferlog: keeps FTP file transfer sessions. Includes info like file names and user-initiated FTP transfers.

Does Plesk for Linux keep logs too?

Plesk Onyx on Linux

As a Linux-friendly hosting panel, Plesk uses log files for a wide range of software packages that run under Linux, in addition to its own logs. The following list shows the locations of Plesk logs – we hope it helps you fix issues.

Plesk System

  • Error log: /var/log/sw-cp-server/error_log and /var/log/sw-cp-server/sw-engine.log
  • Access log: /usr/local/psa/admin/logs/httpsd_access_log
  • Panel log: /usr/local/psa/admin/logs/panel.log

Plesk Installer

  • /var/log/plesk/installer/autoinstaller3.log
  • /tmp/autoinstaller3.log

Web Presence Builder

  • Error log: /usr/local/psa/admin/logs/sitebuilder.log
  • Install/upgrade logs: /usr/local/sb/tmp/

Backup Manager

  • Backup logs: /usr/local/psa/PMM/logs/backup-<datetime>
  • Restore log: /usr/local/psa/PMM/logs/restore-<datetime>

Plesk Migrator

  • /usr/local/psa/var/modules/panel-migrator/logs/

Migration Manager

  • /usr/local/psa/PMM/logs/migration-<datetime>

Website Import

  • /usr/local/psa/var/modules/site-import/sessions/

Health Monitor Manager

  • /usr/local/psa/admin/logs/health-alarm.log

Health Monitor Notification Daemon

  • /usr/local/psa/admin/logs/health-alarm.log


FTP (ProFTPd)

  • /usr/local/psa/var/log/xferlog
  • /var/log/plesk/xferlog
  • /var/log/secure


Mail Services

  • /usr/local/psa/var/log/maillog


Horde Webmail

  • Error log: /var/log/psa-horde/psa-horde.log


Roundcube Webmail

  • Error log: /var/log/plesk-roundcube/errors


SpamAssassin

  • /usr/local/psa/var/log/maillog

Parallels Premium Antivirus

  • /usr/local/psa/var/log/maillog
  • /var/drweb/log/*

Watchdog (monit)

  • /usr/local/psa/var/modules/watchdog/log/wdcollect.log
  • /var/log/wdcollect.log
  • /usr/local/psa/var/modules/watchdog/log/monit.log
  • /var/log/plesk/modules/wdcollect.log

Let’s Encrypt

  • /usr/local/psa/admin/logs/panel.log


PHP-FPM

  • /var/log/plesk-php7x-fpm/

Acronis Backup

  • /var/log/plesk/panel.log
  • /var/log/trueimage-setup.log
  • /opt/psa/var/modules/acronis-backup/srv/log/
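Which of these files actually exist depends on the distribution, the Plesk version, and the installed components. A small shell loop makes it easy to check a given server; the three paths below are examples taken from the list above, and you can extend the list as needed:

```shell
# Report which of the listed Plesk log locations are present on this server.
# Presence varies with the Plesk version and installed components.
for f in /var/log/plesk/panel.log \
         /usr/local/psa/admin/logs/panel.log \
         /var/log/sw-cp-server/error_log; do
  if [ -e "$f" ]; then
    echo "found:   $f"
  else
    echo "missing: $f"
  fi
done
```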

It’s important to understand the advantages and limitations of logging. But which Linux logs do you think demand the most attention? We’d love to hear your thoughts in the comments below.

Plesk on DigitalOcean is now a one-click app

According to Alex Konrad, the Forbes editor behind the Cloud 100 list, cloud companies like DigitalOcean are revolutionizing how businesses reach their customers today, from digitizing painful old processes to freeing up time for what those businesses really care about. This is what makes their products unique.

As a Web Professional (Developer, Agency owner, or IT Admin), your goal is to provide valuable services to your customers. You want to focus on the things you’re good at and leave the nitty-gritty of technical server management, cost streamlining, running instances, backups, and account management to a VPS, because a virtual private server fits this purpose exactly. Tired of managing infrastructure and security when what you want is to focus on coding and improving your product or service? Then Plesk Onyx is the ideal solution.

What is DigitalOcean?

DigitalOcean, founded in 2011, is a cloud infrastructure provider with a “developer first” mentality. Its mission is to smooth out the complexities of web infrastructure by offering one simple, robust platform on which developers can easily launch and scale their applications. According to Netcraft, DigitalOcean is now the second largest and fastest-growing cloud computing platform serving public apps and websites.

Over 750,000 registered customers have launched more than 20 million Droplets combined on DigitalOcean. The company is now investing heavily in advancing its platform to further support growing teams and larger applications in production.

DigitalOcean cloud hosting
Image: DigitalOcean

Plesk on DigitalOcean

Plesk manages and secures over 380,000 servers, automates 11 million websites and at least 19 million mailboxes. It’s the leading WebOps, Hosting and Web Server Control Panel to build, secure and run your applications, websites and hosting business. You’ll find it in 32 languages and 140 countries, with 50% of the top 100 worldwide service providers partnering with Plesk today.

Key Plesk Onyx Features

The versatile Plesk Onyx control panel
  • The WebOps platform

Manage all your domains, DNS, applications, websites, and mailboxes from a single platform.

  • DigitalOcean DNS – integrated into Plesk

The free Plesk DigitalOcean extension integrates Plesk with the DigitalOcean DNS service. This web service is highly available and scalable and you can use it as an external DNS service for your domains. The extension will automatically sync DNS zones between Plesk and DigitalOcean DNS. Here’s how:

  1. After installing Plesk, add your first domain/website.
  2. Then navigate to the domain and click “DigitalOcean DNS” for that domain.
  3. Enter your DigitalOcean API credentials into the extension. Or use OAuth to authorize your DigitalOcean account.
  4. Your domains will now stay in sync with DigitalOcean DNS.
  • Automated Server Administration

Easily manage your server, including automated updates, application deployment, monitoring, backups and maintenance.

  • User-Friendly Interface

One dashboard to manage multiple sites. Build websites, run updates, monitor performance, and onboard new customers from one place.

  • Security

Plesk on DigitalOcean secures your applications and websites automatically. You get a firewall, fail2ban, and a web application firewall installed and activated by default, plus various additional options available on demand as Plesk Extensions, or by simply upgrading to a premium Plesk edition.

  • Ready-to-Code Environment

Enable and manage multiple PHP versions and configurations, JavaScript, Perl, Ruby, or Node.js, all in one click. Every stack is deployed automatically and allows custom configurations as you need.

  • Self-Repair Tools

We built automated healing and recovery functionality into Plesk, so many technical issues can self-repair without any need for support. This ranges from fully automated (safe) updates, including all OS components, to various manual self-repair tools, up to a complete repair panel in the unlikely event of something going wrong. Additionally, Plesk continuously monitors the health of all relevant system components and notifies the administrator before something goes wrong.

  • Multi-Language support

Plesk is available in 32 languages.

  • Plesk Extensions

Plesk is a super-light application, automating all your server components and management needs on DigitalOcean in a single environment. As your business needs grow, you can use the in-app Plesk Extensions catalog to enable additional features on demand. Many are free, and the premium ones provide extra value. Get access by clicking “Extensions” inside Plesk itself.

Plesk WordPress Toolkit – secure and simple

Staging environment best practices - Plesk WordPress Toolkit

You can find full details on Plesk WordPress Toolkit here; some of its key features are below.

  • WordPress Simplified:

One-click installer to initialize and configure WordPress from start to finish. One dashboard to mass-manage multiple WordPress instances.

  • Secure Against Attacks

Hardens your site by default, further enhanced with the Toolkit’s security scanner. No security expertise necessary.

  • Run and Automate your WordPress

Execute updates to the WP core, themes, or plugins, individually or en masse. Monitor and run all your WordPress sites from one dashboard.

  • Simple, but not Amateur

Get full control with WP-CLI, maintenance mode, debug management, search engine index management and more.

  • Stage and Test*

Test new features and ideas in a sandbox before pushing them to production – No plugins required, no separate server needed.

  • Cut Out Complexity*

Stage, Clone, Sync, Update, Migrate and more. Execute all these complex tasks with one click. No more high-risk activities or stressed-out dev teams.

  • Smart Updates powered by AI*

The Smart Updates feature for WordPress Toolkit analyzes your WordPress updates and performs them without breaking your site. If an update looks risky, it warns you instead of applying it.

  • One-Click Performance Optimized*

You can reach maximum performance on your WordPress sites in no time and with great simplicity. Just enable NGINX caching in one click and combine it with Speed Kit, powered by a distributed Fastly® CDN and Varnish cache.

*Some of these features are not available within the free Plesk Web Admin SE but require an upgrade to a higher value premium edition of Plesk or Plesk Extension.

Plesk on DigitalOcean (free) includes Plesk Web Admin Edition SE, a free version of Plesk that supports up to 3 domains and suits small websites, albeit with certain limitations. To gift yourself a higher-value Plesk edition, check out our Plesk Upgrades.

How to deploy Plesk on DigitalOcean

1. First, log in to your DigitalOcean account.

2. Then, from the main dashboard, click “Droplets” and “Create” -> “Droplets”.

Plesk on DigitalOcean now a one-click app - How to deploy - Create Droplet

3. Under “Choose an image”, click “One-click apps”.

4. Select “Plesk”.

PLesk on DigitalOcean - Now a one-click app - Choose a size - Droplets

5. Choose your size and then a data center region. If you plan to host small business websites, we recommend choosing the zone closest to their geographic location to reduce page load times for local visitors.

Note: Plesk runs smoothly with 1GB RAM and 1 vCPU for smaller websites and environments. Running many websites or higher traffic requires a larger droplet size.
Please also refer to the Plesk infrastructure requirements for details.

Plesk on DigitalOcean now a one-click app - Finalize and create Droplet

6. Additional options such as Private networking, Backups, User data, and Monitoring are not necessary for most Plesk users. Then click “Create”.

7. You can log in to your droplet using:

  • A root password, which you will receive by email. If you go with this option, skip the “Add your SSH keys” step and go to the next one. Then type https://<your-droplet-IP>:8443 into your browser. For about 30 seconds you may see the automatic deployment finishing up; afterwards, you will automatically land in Plesk’s initial onboarding.
  • An SSH key. If you go with this option, click New SSH key to add a new SSH key or select a previously added key (if you have any).

Note: Using SSH keys is a more secure way of logging in. If you use a root password, we strongly recommend that you log in to the droplet command line and change the root password received by email. The command line will automatically prompt you to do so.

Enjoy and let us know if there are any questions!

Three TED talks on Technology that will blow your mind

We’re living in an era that reveals new innovations daily. From automation to complex and brilliant security systems, the future of technology is being shaped by minds like these three, whose ideas elevate our thinking and spark our imagination.

Watch our top TED Talks on technology

These three speakers have tested the boundaries of how we can integrate the physical world and the digital one. Here are three must-watch TED talks on technology that have mesmerized us and left us wondering what the future holds.

Will automation take away all our jobs? | David Autor

As a company focusing heavily on automation and simplicity as time-saving solutions, we found David Autor’s paradox intriguing. He points out that although the last century has given us machines that do our work for us, the proportion of adults in the US with a job has consistently gone up for the past 125 years.

So why hasn’t human work become redundant yet and how are our skills still not obsolete? In this talk about the future of work, economist David Autor addresses the question of why there are still so many jobs and comes up with a surprising answer. Do you agree with his theory?

Hackers: the internet’s immune system | Keren Elazari

Keren Elazari is a cybersecurity expert who claims that we actually need hackers in this day and age. This shocking claim comes from her belief that hackers force us to evolve and improve. “They just might be the immune system for the information age”, she says.

Some hackers are fighting corruption and defending our rights. They also expose the loopholes and vulnerabilities in our systems and make us fix them.

But not all hackers use their superpowers for good. Would you take any chances with security loopholes? Let us know what you think about this video and learn more about Plesk security here.


Are you safe? Take the Plesk Security Quiz.

The mind behind Linux | Linus Torvalds

This is the guy who has transformed technology not once, but twice. Linus Torvalds first gave us the Linux kernel, which helps power the Internet, and then Git, the source code management system that developers use all over the world. This is less a talk than an interview, in which Torvalds discusses the personality traits that shaped his work philosophy and engineering. Plus, there are some useful open source tips for the developers watching.

“I am not a visionary, I’m an engineer,” Torvalds says. “I’m perfectly happy with all the people who are walking around and just staring at the clouds … but I’m looking at the ground, and I want to fix the pothole that’s right in front of me before I fall in.” Are you like Linus and do you agree with his philosophies?

Empowering you with TED talks on technology

As we got a glimpse of what these three speakers presented on stage, the common theme across all the talks was building a better digital world together. Technology can empower people by educating them and giving them a voice, and future designs succeed when they bridge the physical and digital worlds. That’s a concept we at Plesk are definitely on board with.