Server virtualization has been a huge win for the data center. Nimboxx CTO David Cauthron explains how the next phase will deliver dramatic benefits in the cloud.

Over the past decade, the whole world seems to have embraced virtualization. Is there nothing left to conquer? Hardly. Virtualization technology itself is changing very fast, and the right solutions to address the challenges of legacy application support and migration for modern applications can be tough to find.

This week in the New Tech Forum, David Cauthron, co-founder and CTO of Nimboxx, gives us a bit of virtualization history, explains how it relates to the current reality of the commodity hypervisor, and offers his take on where it's all going from here. — Paul Venezia

The hypervisor is a commodity — so where do we go from here?

Virtualizing physical computers is the backbone of public and private cloud computing, from desktops to data centers, enabling organizations to optimize hardware utilization, enhance security, support multitenancy, and more.

Early virtualization methods were rooted in emulating CPUs, such as the x86 on a PowerPC-based Mac, enabling users to run DOS and Windows. Not only did the CPU need to be emulated, but so did the rest of the hardware environment, including graphics adapters, hard disks, network adapters, memory, and interfaces.

In the late 1990s, VMware introduced a major breakthrough in virtualization: a technology that let the majority of guest code execute directly on the CPU without needing to be translated or emulated. Prior to VMware, two or more operating systems running on the same hardware would simply corrupt each other as they vied for physical resources and attempted to execute privileged instructions. VMware intelligently intercepted those instructions, dynamically rewriting the code and storing the new translation for reuse and fast execution.
In combination, these techniques ran much faster than previous emulators and helped define x86 virtualization as we know it today — including the revival of the old mainframe concept of the "hypervisor," a platform built to enable IT to create and run virtual machines.

The pivotal change

For years, VMware and its patents ruled the realm of virtualization. On the server side, running on bare metal, VMware's ESX became the leading Type 1 (or native) hypervisor. On the client side, running within an existing desktop operating system, VMware Workstation was among the top Type 2 (or hosted) hypervisors.

No longer a technology just for developers or cross-platform software, virtualization proved itself a powerful tool for improving efficiency and manageability in data centers by putting servers in fungible virtualized containers. Over the years, some interesting open source projects emerged, including Xen and QEMU (Quick EMUlator). Neither was as fast or as flexible as VMware, but they laid a foundation that would prove worthy down the road.

Around 2005, AMD and Intel created new processor extensions to the x86 architecture that provided hardware assistance for dealing with privileged instructions. Called AMD-V and VT-x, respectively, these extensions changed the landscape, eventually opening server virtualization to new players. Soon after, Xen leveraged the new extensions to create hardware virtual machines (HVMs), combining the device emulation of QEMU with hardware assistance from VT-x and AMD-V to support proprietary operating systems like Microsoft Windows.

A company called Qumranet also began to include virtualization infrastructure in the Linux kernel — the Kernel-based Virtual Machine (KVM) — and used QEMU to host virtual machines. Microsoft eventually got into the game with the release of Hyper-V in 2008.
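The presence of these extensions is visible from userspace: on Linux, the CPU flag `vmx` indicates Intel VT-x and `svm` indicates AMD-V, both reported in /proc/cpuinfo. A minimal sketch of the check, assuming a Linux host (the `has_hw_virt` helper is illustrative, not part of any standard API):

```python
# Sketch: detect hardware virtualization support on a Linux host by
# looking for the vmx (Intel VT-x) or svm (AMD-V) CPU flags.
# has_hw_virt() is a hypothetical helper written for this illustration.

def has_hw_virt(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line in /proc/cpuinfo output lists vmx or svm."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if {"vmx", "svm"} & flags:
                return True
    return False

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            supported = has_hw_virt(f.read())
        print("hardware virtualization:", "available" if supported else "not reported")
    except FileNotFoundError:
        print("/proc/cpuinfo not found (non-Linux host)")
```

Note that the flag only shows the CPU capability; actually running KVM guests additionally requires the kvm kernel module to be loaded and /dev/kvm to be present.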
A new industry is born

When virtualization essentially became "free" — or at least accessible without expensive licensing fees — new use cases came to light. Notably, Amazon began to use the Xen platform to rent some of its excess computing capacity to third-party customers. Through its APIs, Amazon kicked off the revolution of elastic cloud computing, in which applications themselves could self-provision resources to fit their workloads.

Today, open source hypervisors have matured and become pervasive in cloud computing. Enterprises are venturing beyond VMware, looking to architectures that use a KVM or Xen hypervisor. These efforts are less about controlling costs and more about leveraging the elastic nature of cloud computing and the standards being built on these open source alternatives.

The future: High-performance elastic infrastructures

With the commoditization of the hypervisor, innovation is now focused on the private and public cloud hardware architectures and the software ecosystems that surround them: storage architectures, software-defined networking, intelligent and autonomous orchestration, and application APIs. Legacy server applications, conveniently containerized into virtual machines, are slowly retiring to give way to elastic, self-defining cloud applications that truly are the future of computing — although both will operate side by side for some time.

Going forward, the ways in which IT shops will react to the commoditization of virtualization can be categorized as follows:

Status quo: Change can be hard, and some organizations will find comfort deploying the same solutions they've been deploying for years. This means living with storage and management architectures that are 20 to 25 years old.
It also means continuing to pay for hypervisor licenses, being locked into virtualization platforms designed for legacy applications, and lacking a path to support elastic cloud applications on premises.

Public cloud: This removes the burden of managing your own infrastructure. However, the public cloud is probably not the best place to run legacy server applications that require dedicated resources and enhanced security. In addition, while public cloud resources can be cost-effective initially, at scale the recurring costs can make in-house capital investment seem more attractive by comparison.

Cloud frameworks: This category includes toolkit options like OpenStack, an excellent open source framework for true cloud computing. Companies like Rackspace can make it work at scale; however, the number of IT shops that can actually build and manage an OpenStack deployment is very small.

Hyperconverged infrastructure: Companies like Nimboxx are delivering turnkey solutions that offer the same elastic cloud benefits as frameworks, along with the workflows to support legacy applications, in a single, modular appliance. These data-center-in-a-box solutions allow companies to start small and scale out in minimal increments. They also serve as a bridge between legacy applications and elastic cloud applications.

When considering hyperconverged infrastructure solutions, an important distinction must be drawn between "stack owners" and "stack dependents." Stack dependents are solutions that run in virtual machines and sit on top of another vendor's hypervisor. Stack owners are vendors who run on bare metal and build the entire stack themselves. Here's how these differences play out:

Licenses: Stack owners leverage the same open source hypervisors (KVM or Xen) used by the major cloud providers, eliminating the need to pay for costly software licenses.
Stack dependents typically offer support for multiple hypervisors but have limited integrations with the open source versions.

Performance: Stack owners run on bare metal, giving them direct control over hardware resources for storage and compute. Stack dependents run in a virtual machine, which means every I/O operation follows an unnecessarily inefficient path. Where stack dependents advertise 16,000 IOPS from a three-node cluster, stack owners can deliver 180,000 IOPS from a single node.

Simplicity: Stack owners manage the entire end-to-end infrastructure from a single pane of glass, providing a public cloud experience in a private, on-premises solution. Stack dependents alleviate some storage management complexities, but overall, system and virtual machine management still requires multiple applications with multiple interfaces.

Security: Stack owners have direct control over all aspects of the hardware and can support technologies like data-at-rest encryption. Stack dependents lack that control because they run inside virtual machines. Inherent in their design is the requirement that something else (such as a hypervisor) boots before the stack dependent does, impeding their ability to secure sensitive parts of the data set.

Software-defined: Stack owners own everything, which means that software-defined anything is possible, including real-time, self-learning systems that can power resources up or down as needed and redistribute workloads. Stack dependents merely own the storage pool.

The real breakthrough will be in making these complex technologies consumable by enterprises as well as smaller organizations.
The next generation of VMware-like companies will be the ones that successfully package the complexity of a true elastic private cloud, along with support for legacy workloads, in a simple, easy-to-deploy, easy-to-scale, high-performance product.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.

This article, "How virtualization is lifting us to the cloud," was originally published at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.