End-to-end virtualization -- and upheaval for network admins -- is coming much faster than you might think

Many new concepts have arisen with the advent of large-scale virtualization technologies, such as the ability to seamlessly migrate running virtual machines among physical hosts and even across data centers — not to mention transitioning entire running data centers between real-world locations. These newfound abilities are quickly dispensing with long-held best practices in disaster recovery and disaster planning. At the same time, they're heavily modifying the underlying network and server architectures that have served IT so well for many years.

In the not-so-distant past, a business with a keen interest in business continuity would pair its traditional data center with a hot site and a warm or cold site located some distance away. The warm or cold site would generally house a subset of the services running in the main location, but would have enough horsepower to maintain critical services in the event of a major outage. Data would be replicated as well as budgetary constraints and bandwidth availability allowed, and while it would be feasible to maintain business operations from the disaster-recovery site, stabilizing everything in the face of an actual disaster would still be a major fire drill for IT.

Now, new virtualization technologies make it possible to dynamically shift an entire data center from one location to another without taking down a single server. Given enough bandwidth between sites and the use of newer virtualization management tools, a few clicks of a mouse can relocate hundreds or thousands of VMs to a site 200 miles away without missing a beat. This flexibility gives businesses many more options, such as the ability to evacuate critical systems ahead of a forecasted weather event, ensuring that no matter what happens to the site, business can continue.

Back to school

However, this kind of agility comes at a price.
That price is a thorough renovation of traditional networking concepts. EMC VMware's new release is a prime example. While we've had technologies like VXLAN for a while, the new features in vSphere 5.1 eliminate a significant number of what would otherwise be external functions. Firewalls, load balancing, VLANs, routing — they're now part of the hypervisor network stack, and they're capable enough that in many deployments it will no longer be necessary to maintain separate hardware appliances for those functions.

The new load balancer isn't quite at the level of an F5 box, but it provides enough fundamental features to let admins dispense with external gear. Likewise, the new firewall features and management tools go a long way toward removing the need for those devices. VXLAN itself supplies pseudo-VLAN functions that reside wholly within the hypervisor, treating the physical network switching as little more than a dumb transport, with the hypervisor peeling away the encapsulation and shuttling traffic securely along virtual LANs. Microsoft's Hyper-V does something similar using GRE. Essentially, this abstraction reduces the actual network to a flat Layer 2 foundation, with the hypervisors handling everything else.

To frame this, consider that it's now possible to plug a bunch of physical hosts and storage into a switch running a default configuration, with every port on the same VLAN. Then, by configuring VXLANs, firewalls, and load balancers within VMware vSphere, you can create dozens of networks connecting hundreds of VMs, all without touching the switching configuration. In fact, that switch could have a live Internet connection fed into one port, with all security elements handled at the hypervisor level.
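The encapsulation that lets the hypervisor treat the switch as a dumb transport is simple at the wire level: VXLAN wraps the original Layer 2 frame in a UDP datagram (port 4789) behind an 8-byte header carrying a 24-bit network identifier, the VNI. Here is a minimal sketch of that header layout, per RFC 7348; the function names are illustrative only, not any vendor's API.

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte (I bit set),
    3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # 'I' flag: the VNI field is valid
    # !B3xI = flags byte, 3 pad bytes, then VNI shifted into the
    # top 24 bits of a 4-byte word (low byte is reserved).
    return struct.pack("!B3xI", flags, vni << 8)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the VXLAN header to an inner Ethernet frame.
    In practice the result rides inside an outer Ethernet/IP/UDP
    envelope addressed to the remote hypervisor; those layers are
    omitted here for brevity."""
    return vxlan_header(vni) + inner_frame

hdr = vxlan_header(5001)
assert len(hdr) == 8
assert hdr[0] == 0x08
# The VNI occupies bytes 4-6 of the header.
assert int.from_bytes(hdr[4:7], "big") == 5001
```

The 24-bit VNI is the reason VXLAN scales past the 4,094-segment ceiling of traditional 802.1Q VLANs: it allows roughly 16 million isolated segments over the same flat underlay.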
An entire data center built with only a single switch, a bunch of storage, and a pile of physical hosts — this is VMware's software-defined data center (SDDC).

A new kind of isolation

The upshot of all this: The role of the network administrator is changing. It's heading in the direction of controlling a subset of the virtualization platform's configuration and away from the traditional work of modifying switches, routers, and firewalls.

How will network admins take to the new waters? One hurdle is that the new approach flies in the face of what we now consider to be networking best practices. Physical separation of untrusted networks has been the rule almost from the beginning, though it's been waning in recent years. Moving to an architecture such as VMware's SDDC dispenses with those boundaries altogether. VMware, Microsoft, and others betting on the virtualization of networks will need to build trust — lots of it — among technologists who are historically wary of deviations from known and trusted paths. It's an approach that will take much time and evidence to be fully accepted, but I have little doubt it will eventually become the norm, for better or worse.

That will leave IT in a position where literally everything in the data center is virtual: the servers, the storage, the network, the applications, the desktops, the whole shebang. They'll all be controlled from a central console and managed more or less as a single entity. There's much to like about this scenario, but there's much work to do to make it real — and even more to make it acceptable to the majority of IT shops around the world.

This story, "VMware vSphere 5.1 and the end of traditional networking," was originally published at InfoWorld.com. Read more of Paul Venezia's The Deep End blog at InfoWorld.com.