Cisco's Nexus 1000V gives server and network administrators separate, familiar interfaces with which to manage their respective chunks of infrastructure.

The relentless march of network convergence is transforming the data center into a leaner, more efficient animal than ever before. But convergence also presents serious challenges to traditionally siloed IT departments — blurring the roles of server, network, and storage administrators. The frequent result is that these three roles merge into one group of “data center administrators” who must be conversant with the entire range of skills. This may work in smaller environments, but it can be extremely difficult to maintain the level of skill necessary to support a large environment without depending too much upon a handful of individuals.

One answer is to develop converged products that present separate interfaces catering to different administrative roles. Consider, for example, Cisco’s Nexus 1000V virtual switch software. Though it has been well received by just about everyone I know who has deployed it, it’s often looked at as simply an extension of the Cisco Nexus switch line — but that misses the point of what the N1000V can do. It also represents what I hope will be a continuing effort on the industry’s part to allow easier separation of administrative control in converged networks without trading off capability or efficiency in the process.

vDS in a nutshell

Before taking a look at the N1000V, it’s worth examining the VMware switching technology that forms the framework for it. Prior to vSphere 4.0, virtual switching within VMware was based solely on traditional vSwitches, which are defined on a per-host basis. While fairly uncomplicated, traditional vSwitches did not scale well as the number of hosts or virtual machine networks increased.
Setting up a traditional vSwitch — say, for a pair of VLAN-trunked 1Gbps Ethernet links serving the VMs on a host — involved creating a vSwitch, linking it to the physical NICs that would act as that vSwitch’s uplinks to the physical network, and creating port group definitions for each of the VLANs in which you wanted to place VMs. In addition to these basic settings, you could also implement fairly simple traffic shaping and rough security controls, but that’s about it.

If you had only a few networks and a few hosts, managing this wasn’t terribly difficult. However, as the number of hosts and networks scaled, maintaining the networking configuration of each host separately could become a serious challenge — almost inevitably resulting in configuration inconsistencies from one host to the next. In addition, these traditional vSwitches offered very little visibility into the network traffic crossing them, so it was relatively difficult to troubleshoot the kinds of network problems for which you might reach for a protocol analyzer in the physical world.

With the release of vSphere 4.0, VMware introduced the vNetwork Distributed Switch, or vDS, in its high-end Enterprise Plus licensing package. With a vDS, you configure the switch and port groups centrally, then add hosts to it — thereby enforcing the same configuration on all involved hosts at once and providing a single point of management. The vDS also offered a few other features that weren’t possible with traditional networking, including private VLANs, which prevent cross-talk between VMs and force traffic out onto the physical network, where it can be inspected by tools you might already have in place.

However, the implementation of vDS did not do a great deal to solve the issue of administrative control. The server administrator still had to depend upon the network administrator to correctly configure the physical switch ports to which his host would attach.
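For concreteness, the traditional per-host vSwitch workflow described earlier (create the vSwitch, link the physical uplinks, then define a VLAN-tagged port group per VM network) might be sketched with the classic ESX service-console commands; the NIC names, VLAN IDs, and port group names here are hypothetical:

```shell
# Create the vSwitch on this host (this must be repeated on every host)
esxcfg-vswitch -a vSwitch1

# Link the two physical NICs that will act as this vSwitch's uplinks
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1

# Create a port group per VM network, tagging each with its VLAN ID
esxcfg-vswitch -A "Prod-VLAN100" vSwitch1
esxcfg-vswitch -v 100 -p "Prod-VLAN100" vSwitch1
esxcfg-vswitch -A "Dev-VLAN200" vSwitch1
esxcfg-vswitch -v 200 -p "Dev-VLAN200" vSwitch1
```

Repeating that block by hand on every host is exactly the drift-prone chore described above: one mistyped VLAN ID on one host produces the sort of inconsistency that is very hard to spot later.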
Likewise, the network administrator had to rely heavily upon the server administrator to troubleshoot networking issues involving virtualized resources.

The Nexus 1000V

The Cisco Nexus 1000V builds a fully featured software switch on top of VMware’s vDS architecture, allowing complete separation of the network administration and server administration roles. The N1000V is made up of two distinct components: the Virtual Supervisor Module (VSM) and the Virtual Ethernet Module (VEM). The VSMs, generally installed as a pair of redundant virtual appliances, act as the management head for the VEMs, which are embedded in the vSphere hosts via the automated installation of a vSphere host extension. When configured, the N1000V feels and acts like a Cisco modular switch, with the VSMs filling the role of the supervisor modules and the VEMs acting as modular line cards.

Once the initial installation is complete, the network administrator can configure uplink port profiles to match the configuration of the physical switch ports that he already controls. One example of the 1000V’s NX-OS pedigree is that it makes it possible to use dynamic 802.3ad LACP load balancing as well as subgroup load balancing — which neither traditional vSwitches nor plain-old vDS can do. After the uplink profile is configured, the network admin can deploy port profiles for virtual machines to attach to. These port profiles can contain any supported NX-OS commands, covering everything from VLAN configuration and QoS to fine-grained ACLs. As the network admin deploys and enables these port profiles, they immediately become visible within the vSphere environment as vDS port groups. The server admin can then add his hosts to the N1000V vDS — automatically installing the VEM host extension and configuring the uplink interfaces in the process — and add virtual machines to the virtual machine port groups.
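As a sketch of what the network admin's side of this might look like, here is a hypothetical pair of NX-OS port profiles: an Ethernet (uplink) profile using LACP channeling, and a vEthernet profile that surfaces in vCenter as a VM-facing port group. The profile names and VLAN IDs are assumptions for illustration, not taken from any real deployment:

```
! Uplink profile: matches the physical trunk ports, with LACP channeling
port-profile type ethernet SYSTEM-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100,200
  channel-group auto mode active
  no shutdown
  state enabled

! VM-facing profile: appears to the server admin as a vDS port group
port-profile type vethernet Prod-VLAN100
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
```

The "state enabled" command is what publishes a profile, which is the point at which it shows up in the vSphere client as a port group the server admin can assign VMs to — each side works entirely in its own familiar tooling.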
As the server admin adds virtual machines, virtual Ethernet interfaces become visible within the N1000V configuration, allowing visibility into the network behavior of each virtual machine. The N1000V also supports NetFlow, RSPAN, and ERSPAN, so it’s exceptionally easy to troubleshoot network problems that might exist between two virtual machines on the same host.

In short, the Nexus 1000V gives the network administrator a great deal of visibility into, and control over, the virtualized networking infrastructure without forcing him to learn new virtualization-specific tools or to have access to virtualization management tools at all. Likewise, the server administrator no longer has to worry about how the network is configured or how upstream network changes might affect him. While I certainly advocate cross-training anywhere you can get away with it (the more people know about each other’s jobs, the better), not being forced to work with something you may not be particularly familiar or skilled with can only result in saved time and fewer administrative mistakes.

Changes coming in vSphere 5.0

As VMware announces vSphere 5.0 and rolls it out this week, new features are being added to VMware’s vDS to provide some of the same QoS, NetFlow, and SPAN features that previously required the Nexus 1000V. Though Cisco hasn’t yet announced what, if any, new features the 1000V will gain when it is released for vSphere 5.0, you can count on the fact that it will take advantage of the improvements VMware has made in the vDS architecture. No matter what vSphere 5.0’s vDS enhancements may bring, the Nexus 1000V’s real and continuing benefit in large data centers is the efficient separation of administrative control that it allows. Too often, attempts at this kind of separation in converged networking result in unnecessarily fragmented management tools, or in needlessly hiding information or functionality from administrators who could make use of it.
This is one great example of how to allow silos to stay in place without shooting yourself in the foot.

This article, “The network must converge — but admin roles don’t have to,” originally appeared at InfoWorld.com. Read more of Matt Prigge’s Information Overload blog at InfoWorld.com.