Infographic: A Brief History of Containerization

Software containerization platform history - infographics

Containerization and Isolation

Containerization and isolation are not new concepts. Some Unix-like operating systems have leveraged mature containerization technologies for over a decade.

In Linux, LXC, the building block that formed the foundation for later containerization technologies, was released in 2008. LXC combined kernel cgroups (which allow isolating and tracking resource utilization) and namespaces (which separate groups of processes so they cannot “see” each other) to implement lightweight process isolation.

In 2013, Docker was introduced as a way of simplifying the tooling required to create and manage containers. It initially used LXC as its default execution driver (it has since developed a library called libcontainer for this purpose). Docker made containers accessible to the average developer and system administrator by simplifying the process and standardizing on an interface, and it spurred a renewed interest in containerization among developers in the Linux world.

Here is a brief timeline of key moments in the history of software containerization platforms:

1979: Unix V7

The concept of containers started way back in 1979 with UNIX chroot, a UNIX operating-system system call for changing the root directory of a process and its children to a new location in the filesystem that is visible only to that process.

The idea of this feature was to provide isolated disk space for each process. In 1982, it was added to BSD.

2000: FreeBSD Jails

The need for FreeBSD jails came from a small shared-environment hosting provider (R&D Associates, Inc., owned by Derrick T. Woolworth) and its desire to establish a clean, clear-cut separation between its own services and those of its customers, mainly for security and ease of administration.

Instead of adding a new layer of fine-grained configuration options, the solution adopted by Poul-Henning Kamp was to compartmentalize the system, both its files and its resources, in such a way that only the right people are given access to the right compartments, called “jails”, each of which can be assigned its own IP address and configuration.

2001: Virtuozzo

Virtuozzo developed the first commercially available container technology in 2001. Today it is used by over 700 service providers, ISVs, and enterprises to run over 5 million virtual environments with mission-critical cloud workloads. Virtuozzo continues to innovate in areas ranging from industry-leading virtualized object storage to cloud-optimized Linux distributions to groundbreaking container migration technologies.

A significant force in the open source community, Virtuozzo sponsors and/or is a contributor to numerous open source projects including OpenVZ, CRIU, KVM, Docker, OpenStack, and the Linux kernel.

2001: Linux VServer

Introduced in 2001, Linux VServer is another jail mechanism that can be used to securely partition resources on a computer system (file system, CPU time, network addresses, and memory). Each partition is called a security context, and the virtualized system within it is called a virtual private server.

Experimental patches continue to be available, but the last stable patch was released in 2006.

2004: Oracle Solaris Containers

In 2004, Oracle released Solaris Containers for x86 and SPARC systems, which were able to leverage features like snapshots and cloning from ZFS.

A Solaris Container is a combination of system resource controls and the boundary separation provided by zones. Zones act as completely isolated virtual servers within a single operating system instance.

2005: OpenVZ (Open Virtuozzo)

Released by Parallels (formerly SWsoft), OpenVZ offered an operating system-level virtualization technology for Linux that uses a patched Linux kernel for virtualization, isolation, resource management, and checkpointing. Each OpenVZ container has an isolated file system, users and user groups, a process tree, network, devices, and IPC objects.

A live migration and checkpointing feature was released for OpenVZ in the middle of April 2006.

2006: Process Containers

Process Containers were developed at Google (primarily by Paul B. Menage and Rohit Seth) in 2006 for limiting, accounting for, and isolating the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. They were later renamed Control Groups to avoid confusion with the multiple meanings of the term “container” in the Linux kernel context, and were merged into Linux kernel 2.6.24.

It shows how early Google was involved in container technology.

2007: Control Groups merged into Linux kernel

Process Containers were renamed Control Groups (cgroups) and added to the Linux kernel in 2007. A redesign of cgroups started in 2013, with additional changes brought by versions 3.15 and 3.16 of the Linux kernel.

2008: LXC

LXC stands for LinuX Containers, and it is the first and most complete implementation of a Linux container manager. It was implemented using cgroups and Linux namespaces. LXC was delivered in the liblxc library and provided language bindings for its API in Python 3, Python 2, Lua, Go, Ruby, and Haskell.

The LXC project is sponsored by Canonical Ltd.

2011: Cloud Foundry Warden

Cloud Foundry started Warden in 2011, using LXC in the early stages and later replacing it with its own implementation. Warden can isolate environments on any operating system, running as a daemon and providing an API for container management.

Cloud Foundry developed a client-server model to manage a collection of containers across multiple hosts, and Warden includes a service to manage cgroups, namespaces and the process life cycle.

2013: LMCTFY

LMCTFY stands for “Let Me Contain That For You”. It is the open source version of Google’s container stack, which provides Linux application containers. Applications can be made “container aware,” creating and managing their own subcontainers. Active development stopped in 2015 after Google started contributing core LMCTFY concepts to libcontainer.

The libcontainer project was initially started by Docker, and it has since been moved to the Open Container Initiative.

2013: Docker

Docker is one of the most successful open source projects in recent history. It is fundamentally shifting the way people think about building, shipping, and running applications, smoothing the way for microservices, open source collaboration, and DevOps. Docker is changing both the application development lifecycle and cloud engineering practices.

Docker uses the resource isolation features of the Linux kernel, such as cgroups and kernel namespaces, and a union-capable file system such as aufs to allow independent “containers” to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines.

Every day, lots of developers are happily testing or building new Docker-based apps with Plesk Onyx. Understanding where the Docker fire is spreading is the key to staying competitive in an ever-changing world. Want an overview of how Docker can fit into your stack? Check out 6 essential facts here.

2014: Rocket

Rocket, started by CoreOS, released a reference implementation of an open specification, standardizing the packaging of images and runtime environments for Linux containers.

2016: Windows Containers

Microsoft also took the initiative to add container support to the Microsoft Windows Server operating system in 2015 for Windows-based applications, called Windows Containers. More recently, Microsoft announced the general availability of Windows Server 2016, and with it, the Docker engine running containers natively on Windows.

With this implementation, Docker is able to run containers on Windows natively, without having to run a virtual machine (earlier, Docker ran on Windows using a Linux VM). This blog post by Michael Friis describes how to get set up to run Docker Windows Containers on Windows 10 or using a Windows Server 2016 VM.


Enjoy our free Infographic “Moments in Container History”. Download it, print it out and hang it at your desk.

As always, we’re looking forward to hearing your feedback and invite you to join the conversation with us on Twitter and Facebook.

[Get the high resolution version here]

Software containerization platform history in infographics

Be well, do good, and stay Plesky!

Deploying Plesk Onyx on Microsoft Windows Azure

Cloud computing provides businesses the ability to quickly scale computing resources without the costly and laborious task of building data centres, and without the costs of running servers with idle capacity due to variable workloads. To simplify dynamic provisioning in the Cloud for infrastructure providers (including service providers who offer dedicated servers, VPS or IaaS), Plesk now provides ready-to-go images for deploying on Microsoft Windows Azure.

What is Windows Azure?

Quite simply, anything you want it to be.

This cloud platform from Microsoft provides a wide range of different services, to help you build, deploy, and manage solutions for almost any purpose you can imagine. In other words, Windows Azure is a world of unlimited possibilities. Whether you’re a large enterprise spanning several continents that needs to run server workloads, or a small business that wants a website with a global reach, Windows Azure provides a platform for building applications that can leverage the cloud to meet the needs of your business.

In addition to traditional cloud offerings, Azure offers services that leverage proprietary Microsoft technologies. For example, RemoteApp allows Windows programs running on an Azure virtual machine to be delivered to devices running Windows, OS X, Android, or iOS through a remote desktop connection. Azure also offers cloud-hosted versions of common Microsoft enterprise solutions, such as Active Directory and SQL Server.

Questions about Windows Azure?

There are two great places you can go online to ask questions about Windows Azure and get answers from the community:

  • The Windows Azure forums on MSDN.
  • Get involved with the Azure Community on Stack Overflow here.

The best way to keep up with new features and enhancements in Windows Azure is by following the official Windows Azure Blog. If you use a newsreader, you can subscribe to the RSS feed for this blog and get the news as it happens.

Microsoft Windows Azure Dashboard
Image: Microsoft

What is Plesk Onyx?

It’s what web professionals like developers, designers, agencies, and IT admins use to simplify their work lives.

Plesk is the leading WebOps platform to build, secure, and automate applications, websites, and hosting businesses. It is available in more than 32 languages across 140 countries, and 50% of the top 100 worldwide service providers are Plesk partners. Our WebOps platform is designed to help infrastructure providers create targeted solutions for Web Professionals, Web Hosts, and Hosting Service Providers.

Key solution areas include:

  • Unlimited domains
  • WordPress Toolkit
  • Developer Pack
  • Subscription Management
  • Account Management
  • Reseller Management

The new Plesk Onyx for Windows and Linux (WebHost) also includes a tightly integrated set of mass-management and security tools that can be used to protect and automate WordPress. All Plesk-powered systems come with built-in server-to-site security, promising more reliable infrastructure and reduced support costs.

Plesk Onyx at Microsoft Windows Azure Virtual Machines Marketplace
Image: Microsoft

Here’s the good news: Plesk Onyx now runs on Microsoft’s cloud infrastructure to provide the scalability, security, and performance that customers depend on.

Better news yet, Plesk provides a variety of virtual machine images with the most popular configurations. So no actual installation is required. You’ll just need to create a virtual machine from the appropriate image. All available images for virtual machines can be found in the Microsoft Azure Marketplace.


Which virtual machine images are provided?

The new Plesk Onyx images are shipped in three editions and are available for both Windows and Linux.

The ‘Bring Your Own License’ (BYOL) instances of Plesk Onyx allow you to purchase your own license directly from the Plesk Online Store or from a Plesk reseller. For Plesk Onyx WebHost images, the cost of your license is included in the hourly charge for the instance. Plesk Onyx licenses are available for two platform types: Dedicated Servers and VPS.

License and OS version:
Plesk Onyx images at Microsoft Windows Azure

Now that you’re familiar with the Windows Azure platform and Plesk, you’re ready to take the next steps. And there’s no better way to experience the powerful capabilities of Windows Azure than trying out the platform for yourself.


Getting started with Plesk and Windows Azure

Microsoft is currently offering a free one-month trial of Windows Azure that provides you with $200 of Windows Azure credits you can use for whatever you want. You get full access to the platform with no strings attached. Just sign in with your Microsoft account and fill out the form.


These tutorials by Cynthia Nottingham, Technical Writer at Microsoft, show you how easy it is to create a Windows virtual machine (VM) from a Plesk-published image by using the Azure portal.


Quick Start Guide: Create a virtual machine

Log into the Azure Portal and on your Dashboard, select New > Compute. Search for the Plesk virtual machine images and select the appropriate Plesk configuration.

Microsoft Windows Azure - Plesk Onyx images
Image: Microsoft

When configuring a virtual machine, you will be asked to specify the following settings:

1. Basic settings: virtual machine name, disk type (SSD or HD), username and password, your Azure subscription and resource group.

Note: The root username cannot be used during the VM creation. You may grant the root user access to the VM later from the console.

For a Linux VM, you can choose one of the following authentication types:

  • SSH public key. In this case, you should specify your SSH public key. You can find information about creating public and private SSH keys here.
  • Password. In this case, you should specify and confirm the password that will be used for connection to the virtual machine.

2. VM size. You can choose one of the available standard sizes provided by Azure.

3. Storage and network settings, including virtual network, subnet, public IP address, network security group (firewall). It is OK to leave the default values for most options.

Note: By default, your machine will have a dynamic IP address, meaning the IP address changes each time the virtual machine is restarted. If you want to avoid this, click Public IP address and then select the Static option. The virtual machine will be created with a static IP address.

4. Deployment. When you’ve dialed in all the settings, you’ll be presented with a summary. Confirm these settings for your new VM and click OK. Finally, your offer details will be generated and you can now purchase your virtual machine by clicking the Purchase button. The deployment process will start, and you will see its progress on your Dashboard.

5. You’ve created a VM. Your new VM will deploy in a couple of minutes. Once your virtual machine is deployed, it will automatically start, and the settings page will be displayed. You can also view and manage your virtual machine settings by going to Virtual Machines and selecting your virtual machine name.

Of course, you will be able to see the Public IP address of the machine.

Microsoft Azure - virtual machine configuration
Image: Microsoft

Access Plesk Onyx on your virtual machine

Connect to the virtual machine.

  • If you’ve created a Windows Virtual Machine, you can connect to it via Remote Desktop. Go to the Azure Portal Dashboard > Virtual Machines, choose your VM, and click Connect. This will create and download a Remote Desktop Protocol file (.rdp file) that acts like a shortcut to connect to your machine. Open this file and connect to your virtual machine using your login and password.

  • If you have a Linux VM, you can SSH into its public IP address that is displayed in the virtual machine’s settings. Depending on your selected authentication type, you may either use a login and password, or your SSH public key.

  • From a Mac or Linux workstation, you can SSH directly from the Terminal. For example:

     ssh -i ~/.ssh/azure_id_rsa [email protected]
  • If you are on a Windows workstation, then you will need to use PuTTY, MobaXTerm or Cygwin to SSH to Linux. For details, see How to Use SSH keys with Windows on Azure.


Get a one-time login for logging in to Plesk

  • On the virtual machine, run

     $ sudo plesk login

    to get a one-time login for logging in to Plesk. You will receive two links: based on the virtual machine name and based on the IP address. Use the link based on the IP address to log in to Plesk.

login via ssh to Plesk Onyx

Note: You cannot use the link based on the virtual machine name the first time you log in, because Plesk has not yet passed the initial configuration and the full hostname has not been created. You should use the link corresponding to your public IP address.


Running the Installation/Configuration wizard

1. When you log into Plesk, you will see the View Selector page. On this page, you can choose the appearance of the panel as per your requirements.

Plesk Onyx configuration wizard
Image: Plesk Onyx

Once your purpose has been identified, a second drop-down menu will emerge, asking you to select your preferred layout. This can be changed later.

2. Then comes the Settings page. Here you need to fill in your hostname, IP configuration, and admin password.

  • New hostname: Fill in your primary domain.
  • Default IP Address: Leave the IP as default.
  • New password: Change the default administrator password.

Plesk Onyx Settings
Image: Plesk Onyx

3. Next is the Administrator information page. Just fill in the information asked and proceed to the next page.

4. Then comes the License key install page. Your Microsoft Azure instance is billed on an hourly basis, starting when it boots up and ending with the instance termination.

  • If you have a Bring Your Own License (BYOL) Plesk Onyx image, your hourly charge for the Microsoft Azure instance will be lower, but you need to purchase and install the Plesk product license yourself. You can order, retrieve, and install a 14-day full-featured trial license from this page. If you have already purchased a license key, proceed with installing it.
  • If you have a non-BYOL Plesk Onyx image, for example, Plesk Onyx on Windows 2012 R2 (WebHost), the cost of the license will be included in the hourly charge for the instance.

5. On the Create your Webspace page, you can specify the domain name of your first subscription, along with the system user account’s username and password that you will use to manage it. This will create a subscription for hosting multiple sites.

6. Woohoo! Plesk is now configured!

Plesk Onyx - Administration dashboard
Image: Plesk Onyx

Thanks to the Microsoft Windows Azure team for co-authoring the introduction to this article and for providing feedback and technical insights on Windows Azure.

Be well, do good, and stay Plesky!

6 essentials on Docker containers

Docker containers

Docker is one of the most successful open source projects in recent history; it’s fundamentally shifting the way people think about building, shipping, and running applications. If you’re in the tech industry, chances are you’re already aware of the project. We’re going to look at 6 key points about Docker.

According to Docker Captain Alex Ellis, containers are disruptive and are changing the way we build and partition our applications in the cloud. Gone are monolithic systems; in come microservices, auto-scaling, and self-healing infrastructure. Forget heavyweight SOAP interfaces: REST APIs are the new lingua franca.

Whether you are wondering how Docker fits into your stack or are already leading the way – here are 6 essential facts that you and your team need to know about containers.

1. Containers are not VMs

Containers and virtual machines have similar resource isolation and allocation benefits, but a different architectural approach allows containers to be more portable and efficient.

Difference between containers and VMs

Virtual machines

VMs include the application, the necessary binaries and libraries, and an entire guest operating system, all of which can amount to tens of GBs. VMs run on top of a physical machine using a hypervisor. The hypervisors themselves run on physical computers, referred to as “host machines”. The host machine is what provides the VM with resources, including RAM and CPU. These resources are divided among VMs, so if one VM is running a more resource-heavy application, more resources will be allocated to it than to the other VMs running on the same host machine.

The VM that is running on the host machine is also often called a “guest machine.”

This guest machine contains both the application and whatever it needs to run that application (e.g. system binaries, libraries). It also carries an entire virtualized hardware stack of its own, including virtualized network adapters, storage, and CPU — which means it in turn has its own full-fledged guest operating system. From the inside, the guest machine behaves as its own unit with its own dedicated resources. From the outside, we know that it’s a VM — sharing resources provided by the host machine.


For all intents and purposes, containers look like VMs. The *key* is that the underlying architecture is fundamentally different between containers and virtual machines. The big difference is that containers *share* the host system’s kernel with other containers. The image above shows that containers package up just the user space, and not the kernel or virtual hardware like a VM does.

Each container gets its own isolated user space to allow multiple containers to run on a single host machine. All the operating system level architecture is being shared across containers.

The only parts that are created from scratch are the bins and libs – this is what makes containers so lightweight and portable. Virtual machines are built in the opposite direction. They start with a full operating system and, depending on the application, developers may or may not be able to strip out unwanted components.

  • Basically, containers provide the same functionality as VMs, without any hypervisor overhead
  • Containers are more lightweight than VMs, since they share the kernel with the host without hardware emulation (a hypervisor)
  • Docker is not a virtualization technology; it’s an application delivery technology
  • A container is “just” a process – literally, a container is not “a thing”
  • Containers use kernel features such as kernel namespaces and control groups (cgroups)
  • Kernel namespaces provide basic isolation, and cgroups handle resource allocation


  • Kernel namespaces provide basic isolation
  • They guarantee that each container cannot see or affect other containers
  • For example, with namespaces you can have multiple processes with the same PID in different environments (containers)
  • There are six types of namespaces available:
  1. pid (processes)
  2. net (network interfaces, routing…)
  3. ipc (System V IPC)
  4. mnt (mount points, filesystems)
  5. uts (hostname)
  6. user (UIDs)
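On a Linux machine you can see these namespaces directly: the kernel exposes one entry per namespace type for every process under /proc/<pid>/ns. A quick, unprivileged way to inspect the namespaces your own shell lives in:

```shell
# Each symlink below identifies one namespace the current process belongs to.
# Two processes in the same container share the same namespace IDs;
# processes in different containers do not.
ls -l /proc/self/ns
```

Newer kernels also list a cgroup namespace alongside the six types above.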


  • cgroups (Control Groups) allocate resources between containers and apply limits to the resources a process can take (memory, CPU, disk I/O)
  • They ensure that each container gets its fair share of memory, CPU, and disk I/O
  • They also guarantee that a single container cannot over-consume resources
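You can check which cgroups a process has been placed in without any special privileges. Actually creating a group and applying a limit requires root; the commented lines below sketch what that would look like on a cgroup v2 (unified hierarchy) system, so treat the exact paths as an assumption about your distribution:

```shell
# Which cgroups does the current shell belong to?
cat /proc/self/cgroup

# With root on a cgroup v2 system, capping memory for a group of
# processes would look roughly like this (illustrative only; needs privileges):
#   mkdir /sys/fs/cgroup/demo
#   echo 104857600 > /sys/fs/cgroup/demo/memory.max   # ~100 MB cap
#   echo $$ > /sys/fs/cgroup/demo/cgroup.procs        # move this shell in
```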

2. A Container (Process) can start up in one-twentieth of a second

Containers can be created much faster than virtual machines because VMs must retrieve 10-20 GBs of an operating system from storage. The workload in the container uses the host server’s operating system kernel, avoiding that step. According to Miles Ward, Google Cloud Platform’s Global Head of Solutions, a container (process) can start up in ~1/20th of a second, compared to a minute or so for a modern VM. When development teams adopt Docker, they add a new layer of agility and productivity to the software development lifecycle.

Docker catalog

Image: Plesk Onyx

Having that speed right in place allows a development team to get project code activated, to test code in different ways, or to launch additional e-commerce capacity on its website, all very quickly.

3. Containers have proven themselves on a massive scale

The world’s most innovative companies are adopting microservices architectures, in which loosely coupled services form applications. For example, you might have your Mongo database running in one container and your Redis server in another, while your Node.js app is in a third. With Docker, it has become much easier to link these containers together to create your application, making it easy to scale or update components independently in the future.

According to InformationWeek, another example is Google. Google Search is the world’s biggest implementer of containers, which the company uses for internal operations, launching about 7,000 containers every second, which amounts to about 2 billion every week. The significance of containerization is that it creates a standard definition and corresponding reference runtime that industry players need in order to move containers between different clouds (Google, AWS, Azure, DigitalOcean, …), which will allow applications and containers to become the portability layer going forward.

Docker helped create a group called the Open Container Initiative, formed June 22nd, 2015. The group exists to provide a standard format for container images and a specification for container runtimes. This helps avoid vendor lock-in and means your applications will be portable between many different cloud providers and hosts.

4. Containers are “lightweight”

As mentioned before, containers running on a single machine share the same operating system kernel – they start instantly and use less RAM. Docker for example has made it much easier for anyone — developers, sysadmins, and others — to take advantage of containers in order to quickly build and test portable applications. It allows anyone to package an application on their laptop, which in turn can run unmodified on any public cloud, private cloud, or even bare metal – the mantra is: “build once, run anywhere.”

Container architecture
5. Docker has become synonymous with containers

Docker is rapidly changing the rules of the cloud and upending the cloud technology landscape, smoothing the way for microservices, open source collaboration, and DevOps. Docker is changing both the application development lifecycle and cloud engineering practices.


  • 2B+ Docker Image Downloads
  • 2000+ contributors
  • 40K+ GitHub stars
  • 200K+ Dockerized apps
  • 240 Meetups in 70 countries
  • 95K Meetup members

Every day, lots of developers are happily testing or building new Docker-based apps with Plesk Onyx. Understanding where the Docker fire is spreading is the key to staying competitive in an ever-changing world.

Web Professionals understood that containers would be much more useful and portable if there was one way of creating them and moving them around, instead of having a proliferation of container formatting engines. Docker, at the moment, is that de facto standard.

They’re just like shipping containers, as Docker’s CEO Ben Golub likes to say. Every trucking firm, railroad, and marine shipyard knows how to pick up and move the standard shipping container. Docker containers are welcomed the same way in a wide variety of computing environments.

6. Docker’s ambassadors: the Captains

Have you met the Docker Captains yet? There are over 67 of them right now, and they are spread all over the world. Captains are Docker ambassadors (not Docker employees), and their genuine *love* of all things Docker has a huge impact on the community.

That impact can take the form of blogging, writing books, speaking, running workshops, creating tutorials and classes, offering support in forums, or organizing and contributing to local events.

Here, you can find out how to follow all the Captains without having to navigate through over 67 web pages.

The Docker Community offers you the Docker basics, and lots of different ways to engage with other Docker enthusiasts who share a passion for virtual containers, microservices and distributed applications.

Got a cool Docker hack? Looking to organize, host or sponsor Docker meetups? Want to share your Docker story?

Get involved with the Docker Community here.
Docker basics

7. Alex Ellis – Docker Captain

I became a Docker Captain after being nominated by a Docker Inc. employee who had seen some of my training materials and my activity in the community helping local developers in Peterborough understand containers and how they fit into this shifting landscape of technology. The energy and enthusiasm of Docker’s team was what led me to start this journey on the Captains’ programme.

It’s all about raising up new leaders in the community to advocate the benefits of containers for software engineering. We also write and speak about exciting new features in the Docker ecosystem and make ourselves present at conferences, meet-up groups, and in the marketplace. Start my self-paced, hands-on Docker tutorial here. If you have questions or want to talk, I’m on Twitter.

Thank you to Docker Captain Alex Ellis for co-authoring the introduction to this write-up and for providing feedback and technical insights on containers.

Be well, do good, and stay Plesky!


Sources: Alex Ellis, Google Cloud Platform Blog, InformationWeek, freeCodeCamp


GitHub guide for newbies

Github guide

You’ve heard of it. You know they’re singing its praises from the rooftops, and yeah, you kind of understand what it’s about. But how exactly does GitHub work? Plesk has a soft spot for newbies, and we aim to break down one of the most important recent developments in the realm of coding. So you’ll never feel Git-stumped again: check out our GitHub guide.

What is Github?

GitHub, as its name implies, can be divided into Git and Hub. Don’t roll your eyes, this is important.

The “Git” refers to the distributed version control system (DVCS), a tool which allows developers to keep track of the constant revisions to their code. A version control system, also known as a revision control system, is used to manage changes to documents, computer programs, large websites, and other collections of information.

Github guide - Starting

  • $ git status (The git status command displays the state of the working directory and the staging area.)
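To see this in practice, here is a minimal, self-contained session in a throwaway repository (the file name and commit message are just examples for the demo):

```shell
# Create a scratch repository so nothing here touches real work
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"   # identity for this demo repo only
git config user.name  "Demo User"

echo "hello" > readme.txt
git status --short    # prints "?? readme.txt" -> untracked file

git add readme.txt
git status --short    # prints "A  readme.txt" -> staged for commit

git commit -qm "first commit"
git status --short    # prints nothing -> clean working tree
```

Each `git status` call reflects the same working directory at a different stage: untracked, staged, then committed.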

The “Hub” is the community of individuals who share a common goal: to participate in social coding. It’s all about the collaborative effort of the community in reviewing, improving, and deriving new ideas from the uploaded code. As we’ve all experienced, it can be intimidating to introduce yourself to a community of strangers, let alone strangers with technical knowledge that might far surpass your own.

For years now, GitHub has provided a space for developers to securely store file changes, and aid each other in ensuring the file integrity of their code. As such, GitHub can, and will continue to be, a means of sharing volumes of information with other coders, for personal, and of course, commercial use.

Users go about many of these improvements through the process of forking a repository.

Repositories work much like folders. Herein lie all the files and documentation for a project, and it’s also where all revisions are stored.

Forking is the process of copying someone else’s repository, or repo, and contributing to it yourself. GitHub encourages users to create a repository into which you can place your current work for others to view, edit, or correct. It introduces the important distinction between open source and closed source.

The difference, in simplified terms, is public access versus private access.

Github guide - Repository

To quote the GitHub site, a fork is: a copy of a repository. Forking a repository allows you to freely experiment with changes without affecting the original product. Most commonly, forks are used to either propose changes to someone else’s project or to use someone else’s project as a starting point for your own idea.
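The fork-and-contribute flow can be sketched on the command line. Since a real fork happens on GitHub’s servers, this sketch simulates it with two local repositories; every name here is made up:

```shell
# Simulated fork-and-contribute workflow using only local repos.
# "upstream" stands in for someone else's project; "fork" is your copy.
git init upstream
git -C upstream -c user.name="Owner" -c user.email="owner@example.com" \
    commit --allow-empty -m "Upstream initial commit"
git clone upstream fork                   # "forking" = copying the repo
cd fork
git checkout -b my-feature                # do your work on a branch
echo "my contribution" > fix.txt
git add fix.txt
git -c user.name="You" -c user.email="you@example.com" \
    commit -m "Propose a fix"
# On GitHub, you would now push my-feature to your fork and open a
# pull request proposing the change back to the upstream project.
```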

It allows people to store their work, as well as network with like-minded folks. The system facilitates the interaction of these people, who can view, or even access, each other’s work, discuss errors, and even potential solutions.

Code which runs from start to finish without error is the ideal, and that is what GitHub attempts to facilitate. It’s an environment that encourages the improvement of code. It also enables developers to keep track of changes made during their coding process.

Sometimes it can be difficult to grasp the amount of hours needed to string together those thousands upon thousands, often millions, of lines of code. Even the most basic App requires a very fine attention to detail or it will fail to operate. Sometimes a second set of eyes or a helping hand are required, be they from a colleague or a stranger.

It doesn’t matter if you develop a tiny website or a huge web portal – someday we have to deploy it to a public web server. Our noble ancestors used to deploy their web projects via FTP or, if they were concerned about security, SFTP. Such straightforward and reliable tools serve us well to this day. Nonetheless, there are now more interesting ways.

For example, Plesk Onyx lets you manage Git repositories and automatically deploy websites from such repositories to a target public directory. In other words, you can use Git as a transport for initial publishing and further updates. It’s a very accessible and user-friendly WebOps tool. So why not take a look for yourself?
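The underlying Git mechanism for this kind of publishing can be sketched with a bare repository and a post-receive hook. To be clear, this is the generic technique, not Plesk Onyx’s actual implementation, and all paths and names below are illustrative:

```shell
# Push-to-deploy sketch: pushing the "site" branch to site.git
# publishes its contents into the www/ directory via a hook.
mkdir deploy-demo && cd deploy-demo
DEPLOY_ROOT=$PWD
git init --bare site.git          # the repository you push to
mkdir www                         # the public web directory
# The post-receive hook runs after every push and checks the
# published branch out into the web directory.
cat > site.git/hooks/post-receive <<EOF
#!/bin/sh
GIT_WORK_TREE=$DEPLOY_ROOT/www git checkout -f site
EOF
chmod +x site.git/hooks/post-receive
git init work && cd work          # a working copy to edit in
git checkout -b site              # the branch we publish from
echo "<h1>Hello</h1>" > index.html
git add index.html
git -c user.name="You" -c user.email="you@example.com" \
    commit -m "Publish site"
git remote add deploy ../site.git
git push deploy site              # triggers the hook; www/ is updated
```

Every subsequent `git push deploy site` republishes the latest commit, which is essentially what “using Git as a transport” for deployment means.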

Github guide - example

Source: Plesk Onyx

Thankfully, never before have such opportunities existed to take on the challenge. Opportunities such as Plesk Onyx have been opening doors for amateur coders with open-source teaching methods, and are definitely worth a look. Professional institutes, as well as multiple online courses, have offered a means of participation outside of the traditional university route.

At least according to Kakul Srivastava, vice president of product management for GitHub, the source code repository hosts 38 million projects and is used by 15 million developers worldwide.

So to conclude, here’s a basic summary of what GitHub does: it provides an environment for hosting and managing software projects, collaborative or independent, and for storing their data.

Be well, do good, and change the world!

What’s new in Stack Overflow’s 2016 survey

Stack Overflow’s 2016 survey

Stack Overflow, perhaps the most prolific question-and-answer site for developers, conducts an annual survey of its user base. This year, more than 50,000 respondents in 173 countries contributed to the survey.

Star Trek vs. Star Wars

I dived deep into the huge amount of data to bring you the most surprising insights from it. Of course, I have to start with the most relevant data this year’s survey collected – whether developers prefer Star Trek or Star Wars.

Devs in their 40s (that would be me!) prefer Star Wars, whereas devs in their 50s are diehard Trekkies. I think it’s important to note that Firefly was the top write-in, followed by Stargate, Doctor Who, and Babylon 5.

Star Wars vs. Star Trek

Stack Overflow

Image: Stack Overflow

How many professional developers are at Stack Overflow?

“Professional developers, Stack Overflow estimates 16 million of those people are.” [1]


Stack Overflow Community Geo

Image: Stack Overflow


In January 2016, 46 million people visited Stack Overflow to get help from or give help to a fellow developer. Stack Overflow estimates 16 million of those people are professionals.


  • United States: 3,869,095
  • India: 1,859,248
  • United Kingdom: 783,329
  • Germany: 744,940
  • Canada: 535,392
  • China: 372,730
  • Russia: 294,503
  • Brazil: 292,118
  • Australia: 240,658
  • Japan: 193,292
  • Mexico: 134,857


Everyone is “full-stack” now

“Much to learn you still have. This is just the beginning.”


Image: Jedi Jörg Daydream


Developer Occupations

 Developer occupations at Stack Overflow

Image: Stack Overflow

28% called themselves full-stack developers, and the runners-up were back-end developers (12%) and students (11%). According to the analysts, full-stack developers were comfortable using 5 to 6 major languages or frameworks, vs. 4 for all other occupations.

A full stack developer will benefit from knowing more programming languages as the basis for future growth.


JavaScript is the King and SQL the Queen of the most popular technologies

“The most commonly used programming language on earth, JavaScript is. Hmmmmmm.”


Most Popular Technologies

Stack Overflow - popular technologies

Image: Stack Overflow

JavaScript is still the most extensively used tool, with more than 55% of respondents saying they use the language. It’s popular with developers who specialize in the front end. SQL has declined a bit in popularity, due in part to the rise of NoSQL databases like MongoDB, which uses a JavaScript-based query language instead of SQL.

Most Popular Technologies per Dev Type

Stack Overflow - programming languages

Image: Stack Overflow



Image: Jedi Jörg Daydream

Trending Tech on Stack Overflow

“Growing in use, newer web-development technologies like React, Node.js, and AngularJS are. Yes, hmmm.”

Stack Overflow most popular dev technologies

Image: Stack Overflow

Newer web-development technologies like React, Node.js, and AngularJS are growing in use. So is Swift, which is stealing market share from Objective-C.

Developers appear to be dropping CoffeeScript, Haskell, and Windows Phone. And though the survey showed many developers want out of Visual Basic and WordPress, those technologies don’t seem to be shrinking just yet.


Stack Overflow - StarWars illustration

Image: Jedi Jörg Daydream

Every so often a new technology appears and disrupts the status quo. Docker is rapidly changing the rules of cloud and upending the cloud technology landscape. Smoothing the way for Microservices, open source collaboration, and devops, Docker is changing both the application development lifecycle and cloud engineering practices.

Desktop Operating System

Last year, Mac edged ahead of the Linuxes as the number 2 operating system among developers. This year it became clear that trend is real. If OS adoption rates hold steady, by next year’s survey fewer than 50% of developers may be using Windows.

Speaking of the Linuxes, Ubuntu is tops among them with 12.3% of the entire OS market for developers. Fedora, Mint, and Debian accounted for 1.4%, 1.7%, and 1.9% of all responses, respectively.


Stack Overflow - OS preference

Image: Stack Overflow

Self-taught Developers Went Way Up

“Partly Self-taught, 69% of developers said they were.  Yesssssss.”




Stack Overflow - Education

Image: Stack Overflow

Last year, 41.8% of developers said they were self-taught. This year, 69% of all developers shared that they are at least partly self-taught, and 13% of respondents across the globe said they are only self-taught. 43% of developers have either a BA or BS in computer science or a related field, and 2% have a PhD.

There are at least nine male developers for every one female developer.



Stack Overflow - Usage by Gender

Image: Stack Overflow

The survey results show that we do not have enough women in tech. This is news to no one. But a 15-to-1 ratio of males to females? This is a much wider gap than most people realize. Looking at the age distribution of female developers sheds some light on why.

“Do or do not. There is no try.”

Stack Overflow - Lego illustration

Image: Jedi Jörg Daydream


There are far fewer female developers in their 30s and 40s.

“When nine hundred years old you reach, look as good you will not.”

Gender Distribution per Age Cohort

Stack Overflow - distribution by age

Image: Stack Overflow


Most female developers are either in their 20s or over 50 years old. While women make up about 6% of total respondents, they make up an even smaller percentage of respondents in their 30s and 40s. This may indicate that a generation of women either chose not to work in software development, or tried it and moved on to other positions within their organizations.

Unrealistic expectations are the most common workplace gripe.

Challenges At Work

Stack Overflow - Challenges

Image: Stack Overflow


When asked about the biggest challenges at work, respondents answered “unrealistic expectations” most often (35%), with “poor documentation” (35%) and “unspecific requirements” (33.5%) close behind. Developers feel these challenges increase as they become more experienced.

May the Force Be With You!

Please share if you enjoyed reading through these findings. You can check out the full Stack Overflow survey here.

[1] All quotes provided by Jedi Jörg Daydream – A Padawan Learner from Nubia …it’s powered by Plesk!