It’s easy to consider Linux one of the wonders of the internet. Everyone knows that open-source technology powers the internet, and Linux has set the gold standard for reliable, comprehensively supported open-source software. Linux is well established too, with over twenty years of history: it was designed along UNIX lines and released under the now well-known GNU license.

Linux has grown explosively over the past two decades, in part because of its huge pool of developers who form a true community – and, of course, because Linux was a ground-breaking concept at the time. It is now one of the most popular UNIX-like operating systems in the world. Furthermore, Linux is unusual in that it is popular both as a desktop operating system for personal use and as a server operating system that delivers everything from web pages to complex applications. As a result, Linux is used by countless millions of people.

Understanding how GNU, UNIX and Linux relate

There are plenty of commercial server and desktop operating systems around – most are either UNIX-based or built on Microsoft’s Windows technology. Because Linux consistently combines a range of important operating system qualities, it has become a preferred alternative to Windows and to commercial UNIX distributions. Why? Simply because Linux has proven itself adaptable, secure, stable and very fast.

This mix of characteristics means that many users prefer Linux over commercial operating systems. In turn, it means that Linux now powers much of the internet, including many of the websites we use every day.

Linux is, as we said, based on UNIX, which itself dates from more than half a century ago – the first UNIX release appeared in the late 1960s. Like UNIX, Linux is highly modular, which makes it incredibly customizable and incredibly stable. Interestingly, when the GNU project was launched in 1983 the plan was to provide a fully UNIX-compatible software system. That said, the project took a long time to come to fruition – well into the 1990s, work on the GNU kernel and its drivers was still at an early stage.

Linus Torvalds, widely seen as the father of Linux, found this slow pace of progress frustrating, so in 1991 he created his own kernel, pairing it with utilities and libraries from the GNU project. This laid the cornerstone of the GNU/Linux system, and it marks the point at which the core of one of the world’s most important operating systems was established.

Why the Linux kernel is so important

The kernel of an operating system is the part that manages communication between the computer’s hardware and the software running on top of it. Arguably, the Linux kernel is what makes Linux so unique – and so capable.

This is in large part because the Linux kernel was designed from the ground up to be fast and small. As a result, Linux systems are known to be very efficient at managing essential computer resources: the central processing unit (CPU), memory (RAM) and of course disk space.
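
As a rough illustration of the resources the kernel manages on an application’s behalf, most languages let you query them directly. This Python sketch uses only the standard library (nothing Linux-specific) to read the CPU count and disk usage the kernel reports:

```python
import os
import shutil

# Ask the OS how many CPUs the kernel can schedule processes across.
cpus = os.cpu_count()

# Ask the kernel's filesystem layer for disk capacity and usage.
disk = shutil.disk_usage("/")

print(f"CPUs visible to the OS: {cpus}")
print(f"Disk: {disk.used // 2**30} GiB used of {disk.total // 2**30} GiB")
```

Every one of these numbers ultimately comes from the kernel: user programs never touch the hardware directly, they only ask the kernel on their behalf.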

So, the Linux kernel is really what drives every Linux OS, handling all processes and managing the behaviour of the applications that run on it. In contrast to some other operating systems, Linux does not run its graphical user interface (GUI) as part of the kernel. So, if the GUI crashes it can simply be restarted – there is no need to restart the entire operating system.

The advantages of using Linux

Operating system security is a big issue – hacking, and the resulting theft and losses, can be expensive. One reason Linux is so popular is that it is known to be highly secure compared to other operating systems. Linux users are also lucky in that they work in a largely virus-free environment, so the time users of other operating systems spend guarding against viruses can instead go to more productive tasks.

Linux is open source and constantly under development, with a very large community dedicated to improving it and expanding its feature set. Yet despite this rapid pace of change, Linux remains remarkably complete: whether it is functionality or a friendly user interface you need, you can rely on Linux to meet your expectations.

The ongoing development behind Linux is designed to make sure the platform retains the flexibility it needs. This matters because the internet and the world wide web continue to change at breakneck speed, and Linux must remain adaptable to keep up.

The importance of the Linux operating system

A very large portion of the internet is powered by Linux. The reason is simple: Linux is known as one of the most stable operating systems, and every website owner is rightly concerned about uptime. For websites, Linux is often used alongside the popular Apache web server, and the resulting Linux–Apache configuration is reliable, highly stable and incredibly popular.

Finally, you may have heard of LAMP, an open-source platform for running websites. LAMP stands for Linux, Apache, MySQL and Perl/PHP/Python – a combination of the most popular technologies for building a website: the Linux operating system, Apache as the web server, MySQL as the database and Perl, PHP or Python as the scripting language.
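
To make the division of labour concrete, here is a minimal sketch of the LAMP request flow with Python standing in as the “P”. Purely for illustration, the bundled sqlite3 module stands in for the MySQL database, and the web server (Apache in a real LAMP stack) is represented by whatever calls the WSGI-style `app` function:

```python
# Sketch of the LAMP request flow. Assumptions for illustration only:
# sqlite3 stands in for MySQL, and the caller of `app` plays Apache's role.
import sqlite3

# The "M": a database holding the site's content.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE posts (title TEXT)")
db.execute("INSERT INTO posts VALUES ('Hello, LAMP')")

# The "P": the script the web server invokes for each request;
# it queries the database and renders HTML for the response.
def app(environ, start_response):
    titles = [row[0] for row in db.execute("SELECT title FROM posts")]
    body = "<ul>" + "".join(f"<li>{t}</li>" for t in titles) + "</ul>"
    start_response("200 OK", [("Content-Type", "text/html")])
    return [body.encode()]
```

In a real deployment, Apache (the “A”) receives the HTTP request on a Linux server (the “L”), hands it to the scripting language, and the script queries MySQL before returning the rendered page.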

Windows Server

Everyone knows the Windows desktop operating system, but Windows is also a very popular choice of server operating system (OS). In fact, there is an entire series of Windows Server OSes powering enterprise-grade applications, with support for shared services, multiple users and broad administrative tools that cover everything from applications through to enterprise networks and data storage.

Windows Server has a very long history. Microsoft laid its foundations in the late 1980s, when the company was developing two different lines of operating system – MS-DOS for personal, desktop use, alongside a new server-class operating system called Windows NT. The Microsoft engineer behind Windows NT (NT stands for New Technology) was David Cutler, who developed the Windows NT kernel.

The plan for the Windows NT kernel was to combine reliability, speed and security so that big companies could rely on Windows NT as a server OS. Before NT was released, most large companies trusted UNIX operating systems for their servers – systems that required hardware based on expensive RISC architectures. In contrast, Windows NT could run on cheaper x86 (CISC) computers.

Windows NT had one standout feature: the ability to run applications on a symmetric multiprocessing basis, meaning applications could use multiple processors in a server to run much faster. Today’s NT-based operating systems are incredibly flexible, with NT-based applications running either in an on-premises data centre or inside Microsoft’s cloud offering, Azure.
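
The symmetric multiprocessing idea can be sketched in a few lines: the OS schedules independent worker processes across all available CPUs, so a job split into chunks runs in parallel. This is an illustration using Python’s multiprocessing module, not anything specific to Windows NT:

```python
# Illustrative only: the OS spreads the worker processes across CPUs,
# so the four partial sums below can be computed simultaneously.
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(chunk)

if __name__ == "__main__":
    # Split one big job into four independent chunks.
    chunks = [range(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with Pool() as pool:  # one worker per CPU by default
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as a single-CPU sum, computed in parallel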

There are several unique features key to Windows Server. One of these is Active Directory, an easy way to automate the management of user data, credentials, security and other distributed resources; Active Directory is also interoperable with other directory services. Windows Server also includes the popular Server Manager, which can be used to administer many of the Windows Server roles and which allows sysadmins to change the configuration of both remote and local machines.

Understanding the history of Windows Server

Windows Server, like most software products, was released with updates at regular intervals. Let’s take a look at how Windows Server progressed over the years.

1993 – the release of Windows NT 3.1 Advanced Server

The first major release of Windows NT came in 1993, with a version number matching another major Microsoft operating system release of the time – Windows 3.1. Microsoft shipped the new OS in two different editions: one designed for workstations (advanced desktops), and the Advanced Server edition developed for server roles.

Windows NT was also a 32-bit operating system, designed with a HAL – a hardware abstraction layer – which made the OS more stable by blocking applications from direct access to the hardware on which it runs. Furthermore, companies using Windows NT 3.1 in its Advanced Server guise could use it as a domain controller, storing group access rights and access rights for users.

1994 – version 3.5 of Windows NT

In a major update, Microsoft released Windows NT 3.5 to improve important networking features, including support for technologies such as Winsock and the ever-important TCP/IP. Networking advances included the ability for users on non-Microsoft platforms to access applications and files stored on a Windows NT domain.

1995 – another update, this time to version 3.51

Sometimes fine-tuning an operating system has a big pay-off. In 1995, Microsoft released version 3.51 of Windows NT, mainly to improve performance and reduce the amount of RAM required to run the OS. Further improvements included faster delivery of services thanks to an updated networking stack.

Microsoft also helped companies working in mixed environments by adding more support for different types of connectivity: users could now access services on both NetWare and Windows NT servers using just one set of credentials.

1996 – a big step to Windows NT Server 4.0

Windows 95 was a big UI step for Microsoft, so the first thing to note about Windows NT 4.0 is its inclusion of the Windows 95 user interface – alongside some of the desktop applications, such as Windows Explorer. In Windows NT 4.0 Microsoft also added various networking protocols so that machines not running Microsoft platforms could easily make use of Windows NT resources.

Microsoft also added web capabilities, bundling Internet Information Server (IIS) with Windows NT 4.0 along with a Domain Name System (DNS) server. The new Administrative Wizards made it easy for administrators to step through important tasks, covering a range of actions including sharing a hard disk.

2000 – introducing Windows 2000 Server

The year 2000 was a milestone in many ways, and for Windows Server too. In the 2000 edition Microsoft added Active Directory – an improved directory service that makes it easy to manage and store important information about the objects on a network: services, systems and individual user data. Active Directory makes a sysadmin’s job much easier, including the ability to set up data encryption, file sharing and virtual private network configurations.

Other important features in Windows 2000 Server included the Microsoft Management Console (MMC), version 3.0 of the NTFS file system and support for dynamic disk volumes.

Note that there were three editions of Windows 2000 for servers: Server, Advanced Server and Datacenter Server. All were designed to work alongside the desktop version of Windows 2000 – Windows 2000 Professional.

2003 – more improvements lead to Windows Server 2003

The 2003 edition of Windows Server brought a renewed focus on security, with big improvements over Windows 2000. IIS had become popular in Windows 2000, and in the 2003 edition Microsoft focused on hardening it – among other things by disabling features that were previously activated by default and by reducing the opportunities to exploit IIS.

The 2003 edition also introduced the concept of server roles: a sysadmin could implement a server specifically as a domain controller, say, or as a DNS server. It included a built-in firewall alongside better encryption features, while Microsoft also added the Volume Shadow Copy Service and better support for network address translation (NAT). The Windows Server 2003 product line comprised Standard, Enterprise, Datacenter and Web editions.

2005 – a second release of Windows Server 2003

Microsoft decided not to release a new version number in 2005, instead opting to designate the updated Windows Server 2003 as R2 – release 2. Windows Server 2003 R2 was an update with a clear benefit: companies that had already paid for Windows Server 2003 did not need to pay again to use the second release of the OS.

R2 of Windows Server 2003 included a number of new security characteristics alongside other new features. First, R2 included Active Directory Federation Services, a way for sysadmins to allow single sign-on for applications outside the corporate network firewall. It also included Active Directory Application Mode, a way to store data for applications that the sysadmin considers too risky to use the main Active Directory.

Other enhancements included beefed-up data compression and file replication for servers at branch offices. R2 also brought a security improvement in the shape of the Security Configuration Wizard, which allows sysadmins to copy security policies across multiple servers, ensuring those policies are applied consistently.

2008 – a new release, Windows Server 2008

Windows Server 2008 was a major new release for the venerable commercial server OS from Microsoft. Some of its most important features included failover clustering, while Windows Server 2008 also shipped the Hyper-V virtualization software for the first time.

The 2008 edition of Windows Server also added Server Core, a stripped-down version of Windows Server 2008 that can be managed from the command line – ideal for minimal deployments. It also included an Event Viewer plus the Server Manager console, which can be used to add and manage server roles and features on both local and remote machines.

In other improvements, Microsoft overhauled both Active Directory and the entire network stack, bringing important enhancements to Group Policy alongside improved identity management.

Windows Server 2008 was again released in four editions: Standard, Enterprise, Web and Datacenter.

2009 – an update to Windows Server 2008

Just as with Windows Server 2003, Microsoft opted for a smaller update to Windows Server 2008, calling it Windows Server 2008 R2. This time, R2 used the Windows 7 kernel, which Microsoft said brought important improvements in availability and scalability.

R2 in this case also included enhancements to Active Directory, this time giving better handling of user accounts and more granular control over policies. Terminal Services was updated with better functionality too, and renamed Remote Desktop Services (RDS).

Furthermore, R2 added DirectAccess and BranchCache, new features aimed at users who work remotely, with the goal of improving how remote workers connect to the head office.

Like the edition that came before it (which shared security and admin functions with Windows Vista), R2 is closely aligned with its desktop sibling – here Windows 7 – but it marks a big departure in that it is exclusively a 64-bit operating system, not 32-bit.

2012 – bringing Windows Server 2012

Going cloud-first, Microsoft included many cloud-relevant features in the 2012 version of Windows Server – in fact, the company called Windows Server 2012 the “Cloud OS”. In essence, Microsoft’s message was that with this edition companies could more easily run applications in both public and private clouds.

Furthermore, updates were made to Hyper-V – including its virtual switch and Hyper-V Replica – alongside Storage Spaces and the new ReFS file system. There was also a new default installation option, Server Core, which requires administration via the command line. It’s worth noting that Microsoft’s PowerShell command line included some 2,300 cmdlets in the Windows Server 2012 edition, making PowerShell very capable.

Again, Windows Server 2012 came in four editions, though with a slight change: this time they were Essentials, Foundation, Standard and Datacenter. Standard and Datacenter have the same features, but Standard licenses an organisation to run two virtual machines whereas the Datacenter edition allows unlimited virtual machines under the license.

2013 – a quick update to R2 for Windows Server 2012

Microsoft didn’t wait long to update Windows Server 2012 to its second release – and the changes were very extensive, covering everything from storage and networking to virtualization and web services. Many security updates were also made.

PowerShell had a major update in Desired State Configuration (DSC), which allows for consistency across the machines deployed in an organisation and so combats configuration drift. Storage Spaces gained storage tiering, which boosted performance by shifting frequently accessed blocks of data onto solid-state storage.
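
The “desired state” idea behind DSC can be sketched simply: declare the state you want, compare it with the state a machine actually has, and apply only the differences. This is a conceptual illustration in Python, not DSC syntax:

```python
# Conceptual sketch of desired-state reconciliation (not DSC syntax):
# anything that has drifted from the declared state gets corrected.
def reconcile(desired, actual):
    """Return the changes needed to bring `actual` in line with `desired`."""
    changes = {}
    for key, value in desired.items():
        if actual.get(key) != value:
            changes[key] = value  # setting drifted or missing: fix it
    return changes

# Hypothetical settings, purely for illustration.
desired = {"web_server": "installed", "firewall": "enabled", "port": 8080}
actual = {"web_server": "installed", "firewall": "disabled"}

print(reconcile(desired, actual))  # only the drifted settings are returned
```

Run the same reconciliation on every machine on a schedule and configuration drift is corrected automatically, which is exactly the consistency guarantee DSC is after.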

Finally, R2 of Windows Server 2012 included Work Folders, which lets users save and retrieve company files on both personal and work devices thanks to replication to servers in their organization’s data centre.

2016 – major release in the shape of Windows Server 2016

After a three-year gap, Microsoft used Windows Server 2016 to push companies towards making more use of the cloud. In particular, the 2016 edition included features that made migrating workloads to the cloud easier, including support for Docker containers alongside enhancements to networking.

In further progress towards minimal server deployment options, Microsoft launched Nano Server, partly with the intent of boosting security through a much smaller attack surface – Microsoft said Nano Server is 93% smaller than the full Windows Server release. Security was also boosted by shielded VMs for Hyper-V, which use encryption to make sure the data in a VM is not compromised.

Networking received an update too: the Network Controller arrived as a key new feature that lets administrators manage switches, subnets and other devices on both physical and virtual networks.

The 2016 release comes in Standard and Datacenter editions. In contrast to past releases, the Datacenter edition includes more than just additional license rights and usage advantages: it also has exclusive features around storage, networking and virtualization.

2017 – watch out for a semi-annual change and service channel releases

2017 brought a different approach from Microsoft: in June 2017 the company announced that Windows Server would follow two different release channels. First, there is the Semi-Annual Channel, or SAC. Alternatively, companies can choose the Long-Term Servicing Channel, or LTSC – previously known as the Long-Term Servicing Branch.

Depending on a company’s preferences, the SAC might be the way to go. It is more attuned to enterprises operating a DevOps framework, where shorter gaps between feature updates are useful and getting the latest updates is valuable for rapid application development. SAC releases appear every six months, typically one in spring and one in autumn, and each release enjoys mainstream support for just 18 months.

LTSC, on the other hand, suits companies that want a release cycle delivering major updates at two-to-three-year intervals. Support is also extended: five years of mainstream support, plus extended support that runs for another five years.
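
The two servicing timelines can be sketched as a small calculation, using the support windows described above: 18 months of mainstream support for a SAC release versus ten years in total (five mainstream plus five extended) for LTSC. The function names here are illustrative, not part of any Microsoft tooling:

```python
# Sketch of the two servicing timelines, using the support windows from
# the text (18 months for SAC; 5y mainstream + 5y extended for LTSC).
from datetime import date

def add_months(d, months):
    """Return `d` shifted forward by a number of months (day clamped to 1)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, 1)

def support_ends(release, channel):
    if channel == "SAC":
        return add_months(release, 18)       # mainstream support only
    if channel == "LTSC":
        return add_months(release, 10 * 12)  # 5y mainstream + 5y extended
    raise ValueError(channel)

# The first SAC release (version 1709) shipped in October 2017.
print(support_ends(date(2017, 10, 1), "SAC"))
```

The contrast is stark: a SAC release is out of support in a year and a half, while an LTSC release can be supported for a decade.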

In terms of naming convention, LTSC releases will carry on using the Windows Server 20XX format, while SAC releases use a YYMM format (year and month). Microsoft says it will fold the enhancements made in SAC releases into the next LTSC version.

The first SAC release of Windows Server was version 1709, rolled out in October 2017. Its enhancements included support for Linux containers, kernel isolation via Hyper-V, and a Nano Server refactored for use as a base OS in a container image.

Microsoft Software Assurance is a big thing in the enterprise environment, and companies using it on the Windows Server Standard or Datacenter editions (or anyone with an MSDN license) can download the SAC releases that Microsoft issues from its Volume Licensing Service Center.

Companies that do not have Software Assurance can use the latest SAC releases via Microsoft’s Azure or another cloud hosting environment.

2019 – new container services and security features

Windows Server 2019 was announced on March 20, 2018 and brought a long list of new features: container services, including support for Kubernetes and for Linux containers on Windows; storage improvements such as Storage Spaces Direct, System Insights and Storage Replica; security features including shielded VMs and Windows Defender Advanced Threat Protection; and, on the administration side, Windows Admin Center and SetupDiag.