Server virtualization has been around for a while: it became prominent in the early 2000s and was arguably pioneered by VMware. Adoption spread quickly once it became apparent that virtualization lets companies consolidate multiple servers, as virtual machines, onto a single physical host. In doing so, companies ensured that their physical servers were fully utilized rather than idling expensively while doing very little.
VMware is still the leader in server virtualization, but other companies, ranging from IBM and Microsoft through to Red Hat and Citrix, are also working on server virtualization, often offering advanced features such as containerization, software-defined computing and, of course, hyperconvergence. These newer technologies are cutting edge, but the bread-and-butter technology of virtualization remains incredibly common, with estimates suggesting that virtual machine saturation is around 90%.
Even though containers and serverless infrastructure exist, it is hard to see why companies would move existing, mission-critical server workloads across to these types of technologies and away from standard virtual machines as we know them. Particularly where the enterprise computing environment is heterogeneous, it makes little sense to move from virtual machines to containers, because containers share the host's operating system kernel: you cannot run Linux containers and Windows containers on the same host, for example.
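To illustrate this constraint in practice: even in a container orchestrator such as Kubernetes, where a single cluster can contain both Linux and Windows worker nodes, each workload must still be pinned to nodes running the matching operating system. A minimal sketch of a pod specification, using the standard `kubernetes.io/os` node label (the pod and container names here are hypothetical):

```yaml
# Sketch: a Windows container image must be scheduled onto a
# Windows node; the nodeSelector below enforces that. A Linux
# image would instead require kubernetes.io/os: linux.
apiVersion: v1
kind: Pod
metadata:
  name: iis-example            # hypothetical workload name
spec:
  nodeSelector:
    kubernetes.io/os: windows  # standard well-known node label
  containers:
  - name: web                  # hypothetical container name
    image: mcr.microsoft.com/windows/servercore/iis
```

In other words, even where mixing is possible at the cluster level, it is only possible by segregating nodes by operating system, which underlines why heterogeneous enterprise estates tend to stay on virtual machines.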
Nonetheless, where agile and DevOps teams create brand-new applications, there may well be a genuine choice between virtual machines and containers, or even a serverless environment. So, in the long run, new ways of developing applications will pose a challenge to classic server virtualization.