Paul Venezia
Senior Contributing Editor

Virtualization shoot-out: VMware vSphere

reviews
Apr 13, 2011

The world's leading server virtualization platform is still tops in performance, scalability, and advanced features


It should come as no surprise that VMware entered the test lab with the most feature-rich and problem-free solution of the four vendors in this virtualization roundup. After all, this is a market that VMware created and has dominated for years. VMware vSphere 4.1 is the most advanced server virtualization platform on the planet, and it’s priced accordingly. However, when you dig into those numbers, balanced against the features and VM densities per physical host, vSphere might not be as pricey as you think.

From available features to ease of installation to performance, VMware is either ahead, well ahead, or at least on par with the competition. Depending on the skills available in your shop, you may find that you can wring sufficient functionality out of the Citrix, Red Hat, or Microsoft solution at lower cost in the long run. Nevertheless, if the goal is to bring the greatest possible consolidation, scalability, or availability to your virtual server farm, VMware is undoubtedly your best choice.

VMware vSphere installation
Installing vSphere was the work of just a few minutes. We mapped an ISO to the blades through Dell iDRAC’s virtual media feature and fired it up; about 10 minutes later, a new VMware ESXi server was born. There’s the small matter of configuring passwords and possibly management network addresses, but otherwise the host is ready to go.

The next step is to use the vSphere client to log into the single host and configure all the network, storage, and associated parameters. This too is a simple process, requiring only that we enter VLAN IDs when defining the networks, add a VMkernel interface for the iSCSI storage, and define a network to be used for VMotion VM migrations.

All told, the networking setup took about five minutes. After that, we configured the iSCSI storage — a significantly more fluid process in ESXi 4.1 than previous ESX versions. The host quickly discovered the available LUNs on the Dell EqualLogic array and correctly enabled VMware’s storage integration hooks to support copy and zero off-loading, block-level locking, and several other storage tweaks that can dramatically improve storage performance.

At this point, an odd problem cropped up. We presented a 1.5TB LUN that had been formatted as NTFS for the Microsoft Hyper-V tests to the ESXi server along with several other LUNs. Clearly flummoxed, the ESXi box paused for nearly five minutes, trying to identify the unsupported file system but getting nowhere. This wasn’t a huge problem, but presenting this LUN to the ESXi server caused significant delays in the boot process later on. After we deleted and re-created the LUN, everything functioned as expected, with the ESXi server finding and formatting it as VMFS.

Test Center Scorecard
(criteria weighted 25%, 20%, 20%, 20%, 15%)
VMware vSphere 4.1: 9, 9, 9, 9, 9
Overall score: 9.0 — Excellent

We built a fresh 64-bit Windows Server 2008 R2 VM on the single host and installed VMware vCenter Server on this VM. In stark contrast to Hyper-V, which might require several different management systems, this vCenter Server VM is essentially all that’s needed for a full-fledged, fully redundant VMware virtual infrastructure. With vCenter Server up and running, we created a data center object and defined a cluster for the blades. After adding the hosts, we were ready to do the baseline host configuration.

Once the first host is built and configured, you can create a host profile based on that configuration and apply it to your other hosts. Very quickly, all the hosts in our cluster were up and running, ready to be loaded with VMs. In fact, VMware vSphere was as fast as or faster than the other three solutions to install and configure.
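The host-profile workflow described above amounts to capturing a reference host's settings and remediating other hosts against that baseline. The sketch below models that idea in plain Python; the data structures and function names are hypothetical stand-ins, not the vCenter host profile API.

```python
# Conceptual sketch of the host-profile workflow: capture a reference
# host's settings, then report and apply whatever must change on each
# other host. All structures here are hypothetical; real host profiles
# are objects managed by vCenter Server.

def capture_profile(host_config: dict) -> dict:
    """Snapshot the settings of a fully configured reference host."""
    return dict(host_config)

def compliance_diff(profile: dict, host_config: dict) -> dict:
    """Return the settings a host must change to match the profile."""
    return {k: v for k, v in profile.items() if host_config.get(k) != v}

def apply_profile(profile: dict, host_config: dict) -> dict:
    """Remediate a host by applying every non-compliant setting."""
    host_config.update(compliance_diff(profile, host_config))
    return host_config

reference = {"ntp": "10.0.0.1", "vmotion_vlan": 20, "iscsi_vlan": 30}
profile = capture_profile(reference)

new_host = {"ntp": "10.0.0.9", "vmotion_vlan": 20}
print(compliance_diff(profile, new_host))  # the settings that differ
apply_profile(profile, new_host)
print(compliance_diff(profile, new_host))  # {} — now compliant
```

The same capture-diff-apply loop also explains why profiles double as a compliance check: an empty diff means the host still matches the baseline.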

VMware vSphere management
With vCenter Server configured, all of the key virtualization management features — including VM migration, load balancing, and high availability — are ready to roll. For VMware, these features are old hat and quite mature. That said, high availability (HA) is still a bit of a pain, requiring valid forward and reverse DNS for each host, as well as at least two management networks configured to maintain heartbeats across the HA cluster.

The vSphere Client is much more refined than past VMware management client iterations, with better error reporting, logging, and information delivery. For example, it’s a pain to quickly find the IP addresses of VMs on some of the other solutions, but it’s extremely simple to do so with vSphere. This may seem like a small piece of the overall puzzle, but it’s indicative of the thoroughness of VMware’s solution — you generally don’t have to dig very deep to find what you’re looking for.

Building and maintaining VMs and VM templates is simple and straightforward. A VM can be converted to a template on a whim, so changing a template is a snap. Further, the guest customization associated with Linux and Windows VMs simplifies the deployment of VMs from those templates with unique names and addresses and other minor configuration changes. All of this makes working with VMs very fluid and simple.

For managing large infrastructures, VMware also provides the vSphere Management Assistant (vMA), a prepackaged VM that contains the powerful vSphere CLI (vCLI) scripting tool and various management hooks. To deploy a batch of new virtual machines from a single template is the work of but a few minutes in vCLI, and this framework extends to various SDKs that can weave Perl, Python, and other languages into the vSphere management fabric.
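A batch deployment script of the sort described above boils down to a loop that stamps out uniquely named, uniquely addressed VMs from one template. The sketch below shows the shape of such a script; `deploy_from_template` is a hypothetical stand-in for a real vSphere SDK call, not an actual API.

```python
# Hypothetical sketch of batch VM deployment from a single template, in
# the spirit of scripting against the vSphere SDKs. deploy_from_template
# is a stand-in for a real SDK call, not an actual vSphere function.
from ipaddress import IPv4Address

def deploy_from_template(template: str, name: str, ip: str) -> dict:
    # A real script would invoke the vSphere Perl or Python SDK here.
    return {"template": template, "name": name, "ip": ip, "state": "deployed"}

def deploy_batch(template: str, prefix: str, first_ip: str, count: int) -> list:
    """Deploy count VMs with sequential names and IP addresses."""
    base = IPv4Address(first_ip)
    return [
        deploy_from_template(template, f"{prefix}{i:02d}", str(base + i))
        for i in range(count)
    ]

vms = deploy_batch("w2k8r2-template", "web", "10.0.1.10", 4)
for vm in vms:
    print(vm["name"], vm["ip"])  # web00 10.0.1.10 ... web03 10.0.1.13
```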

Although the Citrix, Microsoft, and Red Hat solutions now offer live VM migrations, cloning and templating, and other significant features, VMware still goes well beyond them. Perhaps vSphere’s most noteworthy advanced feature is Fault Tolerance, which brings continuous availability to a VM by running primary and secondary versions across two hosts simultaneously. In testing, Fault Tolerance functioned quite well: Pulling a blade running the primary instance of a fault-tolerant, heavily loaded VM went unnoticed by the workload, which continued happily ticking away on the secondary host without missing a beat. While this is certainly an impressive feat, the single-vCPU (virtual CPU) limit may preclude its use in many cases.
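The reason pulling the primary's blade goes unnoticed is that the secondary has been executing the same input stream in lockstep all along, so it takes over with identical state. This toy model illustrates only that general idea, not VMware's record/replay implementation.

```python
# Toy model of a fault-tolerant pair: the primary executes each input and
# the secondary replays the identical input stream in lockstep, so losing
# the primary promotes the secondary with no lost work. Conceptual only —
# the real mechanism is deterministic record/replay of the whole VM.

class FTPair:
    def __init__(self):
        self.primary_state = 0
        self.secondary_state = 0

    def execute(self, value: int) -> None:
        self.primary_state += value    # primary runs the operation
        self.secondary_state += value  # replayed log keeps secondary in sync

    def fail_primary(self) -> int:
        """The primary's host is pulled; the secondary takes over."""
        self.primary_state = None
        return self.secondary_state    # identical state, nothing lost

pair = FTPair()
for v in [5, 7, 11]:
    pair.execute(v)
print(pair.fail_primary())  # 23 — the workload continues uninterrupted
```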

There’s also Distributed Power Management, a companion to vSphere’s Distributed Resource Scheduler that allows vCenter to consolidate VMs on fewer hosts during idle and off-peak hours, as well as dynamically power hosts off and on as the load on the farm changes. In a large farm, this feature can save significant power and cooling costs without any performance degradation.
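The consolidation half of that feature is, at heart, a packing problem: fit the off-peak VM loads onto as few hosts as possible so the rest can be powered down. The sketch below uses first-fit-decreasing purely as an illustration; DRS/DPM's actual placement logic weighs far more factors.

```python
# Sketch of the idea behind Distributed Power Management: pack idle-time
# VM loads onto as few hosts as possible, then power off the empty hosts.
# First-fit-decreasing is used here only as an illustration; the real
# DRS/DPM placement logic is considerably more sophisticated.

def consolidate(vm_loads: list, host_capacity: int) -> list:
    """Return per-host load lists; hosts not in the list can power down."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):   # biggest VMs first
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)                 # first host with room
                break
        else:
            hosts.append([load])                  # no room: power on a host
    return hosts

# Off-peak, eight lightly loaded VMs fit on 2 hosts instead of 4.
placement = consolidate([10, 15, 5, 20, 10, 5, 15, 10], host_capacity=50)
print(len(placement), "hosts needed")  # 2 hosts needed
```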

VMware vSphere is also the only solution that can handle live storage migrations, where a VM disk can be shuffled between storage targets at will without disrupting normal operations. Neither XenServer nor RHEV has provisions for storage migrations, and though Hyper-V can move virtual machines between arrays, it needs to suspend the VM for a period of time to do so. With live VM disk migration, upgrading back-end storage behind a virtual server farm is no longer an all-night, downtime-laden affair, but a task you can do over coffee in the early afternoon.
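The trick that makes a live disk migration possible is iterative copying: move blocks while the VM keeps writing, track which blocks got dirtied during the pass, and re-copy them until the remaining dirty set is small enough for a near-instant switchover. The sketch below models that general pattern, not VMware's exact implementation.

```python
# Conceptual sketch of live disk migration via iterative copy: copy
# blocks while the VM keeps writing, track blocks dirtied during each
# pass, and re-copy them until a brief final sync completes the move.
# This mirrors the general technique, not VMware's Storage VMotion code.

def live_migrate(src: list, writes_per_pass: list) -> list:
    dst = [None] * len(src)
    dirty = set(range(len(src)))          # initially every block is dirty
    for pass_writes in writes_per_pass:
        for block in list(dirty):         # copy the current dirty set
            dst[block] = src[block]
        dirty.clear()
        for block, data in pass_writes:   # VM writes arriving mid-pass
            src[block] = data
            dirty.add(block)
    for block in dirty:                   # brief final sync, then switch
        dst[block] = src[block]
    return dst

disk = ["a", "b", "c", "d"]
moved = live_migrate(disk, writes_per_pass=[[(1, "B")], []])
print(moved == ["a", "B", "c", "d"])  # True — writes made mid-copy survive
```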

In addition, there’s VMware Data Recovery, which allows for fine-grained VM snapshot scheduling, backup, and recovery. VMware’s capacity planning tool, vCenter CapacityIQ, monitors an existing vSphere farm and allows you to run what-if scenarios that can help increase the farm’s efficiency. Beyond that, VMware’s plug-in architecture supports features such as Update Manager, which can be used to maintain not just the vSphere host servers, but also the updates to the operating systems on the VMs themselves.

Then there are the additional features found in the high-end versions of vSphere. Storage I/O Control and Network I/O Control help even out the load on heavily tasked hosts, ensuring that when I/O resources become oversubscribed, the most important VMs vying for storage and network access get what they need. VMware vSphere also supports distributed virtual switching and the Cisco Nexus 1000V.
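The core idea behind share-based I/O arbitration is simple to state: when demand exceeds capacity, each VM receives bandwidth in proportion to its shares. The sketch below shows that proportional split; Storage I/O Control's actual mechanism is a latency-driven control loop over per-host device queues, which this deliberately ignores.

```python
# Sketch of share-based I/O arbitration: when storage or network demand
# exceeds capacity, each VM is granted bandwidth in proportion to its
# shares, so the most important VMs keep getting what they need. The
# share values and capacity below are illustrative, not vSphere defaults.

def allocate(shares: dict, capacity: float) -> dict:
    """Split an oversubscribed resource proportionally to shares."""
    total = sum(shares.values())
    return {vm: capacity * s / total for vm, s in shares.items()}

# A critical database VM holds 2000 shares vs. 500 each for two others,
# contending for 300 MBps of array bandwidth.
grants = allocate({"db": 2000, "web": 500, "batch": 500}, capacity=300.0)
print(grants)  # db gets 200.0 of the 300 available; the others 50.0 each
```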

Other features go beyond the competition, such as the ability to hot-add CPU, RAM, and disk resources to VMs running compatible operating systems. Although other solutions can hot-add disk and perhaps network resources, only VMware can handle CPUs and RAM as well.

Even beyond vSphere’s native features are the tools designed and built by other vendors, such as the Dell server management plug-in for vCenter. Dell has developed a crisp and clean interface for its iDRAC server management processors that functions within the vSphere Client. This means you can perform physical server tasks — everything from simply lighting up an ID light on a particular ESXi host to firmware updates and even bare-metal ESX host deployment — from the same place you manage VMs. Also, with just a few clicks, you can quickly locate serial numbers, asset tags, firmware revisions, and other physical server information.

The setup for the Dell Management Plug-in for VMware vCenter is quite straightforward, though it does require deploying a lightweight (one vCPU and 512MB of RAM) Linux-based VM to the farm. This VM is used to interface with vCenter and the physical servers, acting as a proxy of sorts for the host interaction. It’s responsible for performing scheduled inventories of each physical host and carrying out commands sent from the client.

The tight integration of physical host and virtual server management within one client is surprisingly captivating and seemingly indispensable once you fully realize the ease of administration it provides. You’ll find vCenter plug-ins from additional VMware partners as well, including HP, NetApp, and EMC.

VMware vSphere performance
VMware vSphere posted very good numbers across all tests, and on the occasions it wasn’t at the top of the pack, it made up the difference by scaling out well: each physical host can maintain suitable performance even when pushed quite far with a significant VM load. There’s no doubt that the RAM management code is responsible for some of this scalability, but the recoded software iSCSI initiator also performs better than previous versions.

Although XenServer posted faster times than vSphere in some of the unloaded, single-VM tests, its results suffered as additional load was placed on the server. vSphere, however, largely maintained similar performance figures in both the loaded and unloaded tests, with the results showing only minor performance degradation even when all physical cores on a physical host were fully tasked with loaded VMs. (See the main article for test details and a discussion of comparative performance.)

VMware is quick to claim that you can run more VMs per host with vSphere than other solutions. From what I’ve seen, I have no reason to doubt this point, though RHEV also offers page sharing and memory compression, advanced memory management features that enable high VM density.
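One reason many similar VMs can squeeze into a host's RAM is transparent page sharing: identical guest memory pages are detected, typically by hashing, and stored once. The sketch below counts the savings; real implementations verify candidate pages byte-for-byte and break shares with copy-on-write, which this omits.

```python
# Sketch of transparent page sharing: hash each guest memory page and
# keep a single copy of identical pages. Real hypervisors verify hash
# matches byte-for-byte and use copy-on-write on later modification;
# this simplified model just counts how many copies must be stored.

def shared_pages(vm_pages: list) -> tuple:
    """Return (total pages presented to guests, unique pages stored)."""
    store = {}                      # page hash -> the single shared copy
    total = 0
    for pages in vm_pages:
        for page in pages:
            total += 1
            store.setdefault(hash(page), page)
    return total, len(store)

# Three Windows VMs whose OS pages are largely identical:
vms = [["os1", "os2", "app-a"], ["os1", "os2", "app-b"], ["os1", "os2", "app-c"]]
total, unique = shared_pages(vms)
print(f"{total} guest pages backed by {unique} physical pages")  # 9 and 5
```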

VMware vSphere’s host configuration layout is clean and easy to navigate.

VMware vSphere turned in the fastest VM deployment and migration times of any solution. The VAAI (vStorage API for Array Integration) storage off-loading functions showed marked improvement in VM delete, copy, and deployment operations. Deploying VMs from templates generally took less than 30 seconds — lickety-split. (Citrix XenServer can also off-load storage operations through integration with compatible storage arrays.)

This speed was also bolstered by the natural operation concurrency inherent in ESXi. Powering large numbers of VMs on or off is a parallel operation that takes far less time than other solutions that serialize these tasks. This is especially true when migrating a large number of VMs off a single host if that host is having problems or needs to be taken down for maintenance. Where it may take hours for this to happen on other solutions that perform the migrations one at a time, vCenter can migrate many VMs at once, to different hosts, dramatically reducing the time required. In real-world terms, this can mean the difference between getting a downed infrastructure back online for the start of the working day after an abrupt power outage and missing that goal by many hours.
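The arithmetic behind that difference is simple: a serial evacuation takes the sum of all migration times, while a concurrent one takes roughly the longest single migration. The sketch below demonstrates the effect with simulated transfer times, not real migrations.

```python
# Sketch of why concurrent migrations matter when evacuating a host:
# serial evacuation costs the sum of all migration times, while parallel
# evacuation costs roughly the longest single one. Timings are simulated
# with sleep() as a stand-in for the actual VM transfers.
import time
from concurrent.futures import ThreadPoolExecutor

def migrate(vm: str, seconds: float) -> str:
    time.sleep(seconds)               # stand-in for the transfer itself
    return vm

vms = [(f"vm{i}", 0.1) for i in range(8)]

start = time.time()
for vm, cost in vms:                  # one at a time: ~0.8s total
    migrate(vm, cost)
serial = time.time() - start

start = time.time()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(lambda a: migrate(*a), vms))  # all at once: ~0.1s
parallel = time.time() - start

print(parallel < serial)  # True — the parallel evacuation finishes far sooner
```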

It’s easy to see that VMware has the best, most feature-rich solution on the market. It’s years ahead of the competition in many ways and will likely continue to lead the field for many years to come. However, many of the large-impact features that the company has been leaning on are appearing in competing products. For numerous shops, the need for load balancing, high availability, and live VM migrations precluded the use of other virtualization frameworks. Now that those features are present and reliable in lower-cost solutions, some of the wind may be taken out of VMware’s sails.

VMware vSphere is certainly the Cadillac of server virtualization, but suddenly the competition isn’t just Yugo and Volga. Red Hat, Microsoft, and Citrix are making it a much closer race than ever before.

Read the main article and the other reviews.