As network virtualization matures, the software-defined data center will establish an open-ended environment for innovation.

You know the story of how the Internet was created: The military wanted a redundant “network of networks” and figured out how to do it with a new protocol running over existing networking equipment.

Something nearly as historic is happening now, again using existing infrastructure: the software-defined data center. Just as the world changed when isolated networks became the Internet, computing is about to make a quantum leap to “data centers” abstracted from hardware that may reside in multiple physical locations. This pervasive abstraction will enable us to connect, aggregate, and configure computing resources in unprecedented ways.

A totally virtual world

The key enabler of the software-defined data center is virtualization. We can now virtualize and pool the three key components of computing: servers, storage, and networking. At the same time, we are reaching a critical mass of sophistication in our ability to slice, dice, and compose those pooled virtual resources.

The least mature of these enabling technologies has been network virtualization. But work is under way at Arista, Cisco, Microsoft, and VMware — the last getting a boost from its acquisition of Nicira — to allow virtual networks to be provisioned, extended, and even moved within and across physical networks as quickly and easily as we now create and migrate virtual servers.

What does it mean to be able to create software-defined data centers? Imagine if, based on the requirements of key applications, you could wave a mouse and provision a data center to match, configuring pooled resources to meet those requirements point by point.
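That “point by point” matching of application requirements against pooled resources can be pictured as a declarative specification carved out of a shared capacity pool. The toy sketch below is purely illustrative — every class name and number is hypothetical, not drawn from any real product:

```python
# Toy sketch: provisioning a "virtual data center" from pooled,
# virtualized resources. All names and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class Requirements:
    vcpus: int        # compute
    storage_tb: int   # storage
    vnets: int        # isolated virtual networks

@dataclass
class ResourcePool:
    vcpus: int
    storage_tb: int
    vnets: int

    def can_satisfy(self, req: Requirements) -> bool:
        return (self.vcpus >= req.vcpus and
                self.storage_tb >= req.storage_tb and
                self.vnets >= req.vnets)

    def provision(self, req: Requirements) -> None:
        """Carve a virtual data center out of the pool."""
        if not self.can_satisfy(req):
            raise RuntimeError("insufficient pooled capacity")
        self.vcpus -= req.vcpus
        self.storage_tb -= req.storage_tb
        self.vnets -= req.vnets

pool = ResourcePool(vcpus=512, storage_tb=100, vnets=50)
pool.provision(Requirements(vcpus=64, storage_tb=10, vnets=2))
print(pool.vcpus)  # 448
```

The point of the sketch is the shape of the idea, not the bookkeeping: the application states what it needs, and software allocates it from abstracted hardware rather than from specific boxes.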
Multiple software-defined data centers could share overlapping physical infrastructure, so that each tenant could have its own virtual network with its own authentication and authorization scheme — and without the availability and scalability limitations of conventional VLANs, whose 12-bit IDs allow only about 4,000 segments per network (VXLAN’s 24-bit segment identifier raises that ceiling to roughly 16 million).

Evolving standards

An early use case of this type of software-defined infrastructure surfaced last week, when eBay went public about its implementation of OpenStack and the Nicira Network Virtualization Platform (NVP). But for network virtualization to proliferate, standards must take root. Two competing standards for network virtualization have emerged: VXLAN and NVGRE. The OpenFlow protocol stack, which establishes a standardized interface for controlling network switches, supports VXLAN and also enjoys the backing of most network equipment vendors.

Another important piece of the puzzle is Quantum, the evolving networking component of the open source OpenStack project. Quantum provides an application-level abstraction of network resources and features an API for plugging in virtual switches, such as Cisco’s Nexus line or the open source Open vSwitch. This fall will see the first release of OpenStack to include Quantum, as well as an improved version of the Compute (Nova) component.

Although InfoWorld has covered OpenStack extensively, it’s important to note that OpenStack alone cannot bring about the software-defined data center. It’s a management framework into which various solutions plug in — such as Red Hat’s KVM for server virtualization or Nicira’s NVP for network virtualization. Nonetheless, it’s pretty clear OpenStack will play a key role in the evolution of the software-defined data center. The latest big vendor to offer support is none other than VMware, which said it was committed to “bringing additional value and choices” to OpenStack when it acquired Nicira, the startup that has led the development of both Quantum and Open vSwitch.
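Quantum’s core design — a neutral network API in front of interchangeable switch back ends — can be sketched in a few lines. To be clear, this is a conceptual toy, not actual Quantum/OpenStack code; every class and method name here is hypothetical:

```python
# Toy model of the Quantum idea: tenants talk to one abstract network
# API, and a pluggable back end (Open vSwitch, a vendor switch, etc.)
# does the real work. All names are hypothetical, not real Quantum code.
import uuid

class SwitchPlugin:
    """Interface a switch back end would implement."""
    def create_network(self, net_id: str, name: str) -> None: ...
    def plug_port(self, net_id: str, port_id: str) -> None: ...

class InMemorySwitch(SwitchPlugin):
    """Stand-in back end that just records state in memory."""
    def __init__(self):
        self.networks = {}
    def create_network(self, net_id, name):
        self.networks[net_id] = {"name": name, "ports": []}
    def plug_port(self, net_id, port_id):
        self.networks[net_id]["ports"].append(port_id)

class QuantumLikeAPI:
    """Tenant-facing abstraction; delegates to whichever plugin is loaded."""
    def __init__(self, plugin: SwitchPlugin):
        self.plugin = plugin
    def create_network(self, name: str) -> str:
        net_id = str(uuid.uuid4())
        self.plugin.create_network(net_id, name)
        return net_id
    def create_port(self, net_id: str) -> str:
        port_id = str(uuid.uuid4())
        self.plugin.plug_port(net_id, port_id)
        return port_id

api = QuantumLikeAPI(InMemorySwitch())
net = api.create_network("tenant-a-web")
port = api.create_port(net)
print(len(api.plugin.networks[net]["ports"]))  # 1
```

The value of this separation is that the tenant’s request (“give me a network and a port”) stays the same whether the bits ultimately flow through Open vSwitch or a vendor’s hardware.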
Software-defined everything

Is “the software-defined data center” just another way of saying “the cloud”? Not really. I think of the cloud as a marketing term for application, platform, or infrastructure services that internal or external customers procure on demand through Web forms. The software-defined data center is the mechanism through which those cloud services can be delivered most efficiently.

As network virtualization falls into place, the nearest-term benefit to enterprises will be the easing of the network bottleneck in virtualization. Spinning up and moving around virtual machines has become almost too easy, but the network provisioning needed to accommodate big changes in virtual server loads has been hard manual labor by comparison. That will change over the next few years.

But in the long run, who can say where the software-defined data center will lead? The fact is, the software-defined data center could only begin to happen now, because until the present we have not had compute, storage, and networking hardware with the capacity to accommodate the overhead of virtualizing everything. Now we do. Soon we’ll have the ability to experiment iteratively with all sorts of new data center architectures that span public clouds and private infrastructure. Just as no one at ARPANET in the 1970s could have anticipated YouTube, no one can predict where the ability to freely provision and configure abundant virtual resources will take us.

This article, “What the software-defined data center really means,” originally appeared at InfoWorld.com. Read more of Eric Knorr’s Modernizing IT blog. And for the latest business technology news, follow InfoWorld on Twitter.