Paul Venezia
Senior Contributing Editor

Virtualization shoot-out: Microsoft Windows Server 2008 R2 Hyper-V

Reviews
Apr 13, 2011 | 13 mins

Microsoft's server virtualization platform pairs good performance with extensive management, at the cost of significant added complexity


Late to the virtualization game, Microsoft has been running several lengths behind the competition in this space for years. However, the new features and strong performance present in Windows Server 2008 R2 SP1 show that the company hasn’t been twiddling its thumbs. It’s clearly been working hard at bringing a compelling and competitive virtualization solution to the market.

There’s plenty to like in Hyper-V these days, not the least of which is the price comparison to the other major players. But whereas that lower price used to mean significantly diminished features and performance, that gap has closed. Hyper-V now offers the big features, including live VM migrations, load balancing, and high availability, as well as a more fluid management interface in Microsoft System Center Virtual Machine Manager 2008 R2 (VMM).

One very notable addition to Hyper-V in Windows Server 2008 R2 SP1 is dynamic memory. By specifying a minimum and maximum RAM allotment per virtual machine, as well as a buffer to maintain over actual memory requirements, you can configure Hyper-V to grow and shrink RAM allocations as virtual machines require. This means you could give a virtual machine 2GB of RAM, but allow it to grow up to 4GB as needed. If the VM needs less, Hyper-V can then reduce physical RAM usage on the host. In situations where a host exhausts physical RAM, Hyper-V will begin reducing the allotted RAM to running virtual machines based on their priority.
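
To make the policy concrete, here is a minimal Python model of the behavior described above — our own sketch, not Microsoft's actual algorithm: each VM is granted its current demand plus the configured buffer, clamped between its minimum and maximum, and when the host is oversubscribed the lowest-priority VMs are trimmed back toward their minimums first. All names (`VM`, `balance`) are illustrative.

```python
# Illustrative model of Hyper-V-style dynamic memory (a sketch, not
# Microsoft's actual algorithm): each VM has a minimum and maximum RAM
# allotment plus a buffer percentage, and under host memory pressure the
# lowest-priority VMs are trimmed back toward their minimums first.

from dataclasses import dataclass

@dataclass
class VM:
    name: str
    minimum_mb: int      # guaranteed floor
    maximum_mb: int      # growth ceiling
    buffer_pct: int      # headroom kept over demand, e.g. 20 = 20 percent
    priority: int        # higher = trimmed last under pressure
    demand_mb: int = 0   # what the guest currently needs
    allocated_mb: int = 0

    def target(self) -> int:
        """Demand plus buffer, clamped to [minimum, maximum]."""
        want = self.demand_mb * (100 + self.buffer_pct) // 100
        return max(self.minimum_mb, min(self.maximum_mb, want))

def balance(vms: list, host_mb: int) -> None:
    """Give every VM its target; if the host is oversubscribed, reclaim
    from the lowest-priority VMs first, never going below each minimum."""
    for vm in vms:
        vm.allocated_mb = vm.target()
    shortfall = sum(v.allocated_mb for v in vms) - host_mb
    for vm in sorted(vms, key=lambda v: v.priority):  # low priority first
        if shortfall <= 0:
            break
        give_back = min(shortfall, vm.allocated_mb - vm.minimum_mb)
        vm.allocated_mb -= give_back
        shortfall -= give_back
```

For example, a VM with a 2GB minimum, 4GB maximum, 3GB of demand, and a 20 percent buffer targets 3.6GB; only when the host runs short does the low-priority guest get squeezed back toward 2GB.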

Like memory management in VMware’s hypervisor, Hyper-V’s dynamic memory allows you to run a higher density of VMs on each host. Microsoft’s method of memory allocation, which utilizes a memory balloon that can expand and contract as needed, has clear benefits, but doesn’t go as far as VMware’s or Red Hat’s, which leverage advanced features such as page sharing and RAM compression. Plus, Hyper-V’s dynamic memory works only with Windows guests; VMware and Red Hat have no such limitation.

Hyper-V R2 installation
Installing and configuring a Hyper-V cluster is not as straightforward as implementing the VMware, Red Hat, and Citrix solutions. One reason is that host installation and VM installation are handled by separate tools in Hyper-V, but are combined in the other solutions.

Another reason is that some of the foundational pieces for Hyper-V are borrowed, such as the use of Microsoft Cluster Services to handle a farm of Hyper-V servers. Repurposing these existing tools for virtualization may seem to make sense, but it has inherent drawbacks. Thanks to odd dependencies, administration tasks such as cluster heartbeat configuration and storage and network configuration become cumbersome and time-consuming, and initial builds require plenty of repetitive manual steps on each host to reach a stable cluster. Also, the limit of 16 nodes per cluster may be a problem for larger shops.

Test Center Scorecard

Criteria weighting: 25% / 20% / 20% / 20% / 15%
Microsoft Windows Server 2008 R2 Hyper-V: 8 / 8 / 9 / 8 / 7

Overall score: 8.1 (Very Good)

Unless you leverage additional Microsoft technologies such as System Center Operations Manager to build and manage your Hyper-V hosts (a significant task in itself), be prepared to perform plenty of the same steps, over and over, on each host in the cluster as you build it and as you add hosts over time. For the test, we configured our four Hyper-V hosts manually.

We ran into a few relatively minor problems during the initial build revolving around VLAN tagging. Using the Intel X-520 driver’s VLAN capabilities, we set up virtual interfaces with VLAN tagging and presented them to the Hyper-V hosts as regular networks. Even though these interfaces were already tagged, it was necessary to specify each network’s VLAN tag not only within the network definition in the host, but also on each virtual machine built with a network connection to those networks — a step not necessary with other solutions. Oddly, migrating VMs from one host to another outside of VMM caused those tags to disappear, rendering the virtual machine disconnected from the networks. When using VMM to migrate the same hosts, the VLAN IDs were maintained.
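
The effect of those lost tags is easy to model. In this toy sketch (our own construction — `is_connected` and the VLAN IDs are illustrative, not Hyper-V data structures), connectivity requires the tag to be present on both the host-side network definition and the VM's virtual NIC; clearing the NIC-side tag during an out-of-band migration is exactly what left our guests disconnected.

```python
# Toy model (our own construction, not Hyper-V's actual data structures) of
# the behavior described above: a guest stays connected only while its
# virtual NIC carries the same VLAN ID as the host-side network definition.

def is_connected(network_vlan, nic_vlan):
    """Both ends must agree on the tag (or both must be untagged)."""
    return network_vlan == nic_vlan

# Network tagged 110, NIC tagged 110: traffic flows.
assert is_connected(110, 110)
# After a migration outside of VMM, the NIC-side tag was gone: disconnected.
assert not is_connected(110, None)
```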

There are other ways to connect Hyper-V virtual machines to trunked VLANs, but they aren’t as simple as defining a trunked VLAN as a network and applying that network to the VM. It’s functional, but not a very fluid process. This is a notable issue, as very few significant virtualized infrastructures operate without VLANs.

Hyper-V R2 management
The management aspects of Hyper-V are not contained in a single console, but scattered across the various supporting players that Microsoft has leveraged to bring higher-end features to the solution. Although most basic VM tasks can be controlled through Virtual Machine Manager, other tasks such as load balancing, backups, host updates, and patching are handled by Operations Manager and Configuration Manager. The plethora of management tools can get tedious when you’re looking for one specific function that might exist in one or more consoles. Also, there’s a noticeable lag in host and VM status readings in the VMM console, so a heavily loaded virtual machine may show a low CPU load in the display, which is annoying.

Further, the VMM console and the other supporting players are simply rife with options, context menus, and other management detritus that make it a challenge to locate certain configuration screens. You may right-click a host in the left-hand pane and select Configuration, only to realize that you actually have to left-click the host, then right-click an element in the resulting central pane, then select Configuration to find what you’re looking for. There’s also the potential that you actually want to click Settings, not Configuration. This Byzantine layout can trip you up no matter how much you work with Hyper-V.

That said, the abundance of System Center tools that Microsoft has brought into the Hyper-V fold provides extreme management functionality to physical and virtual servers alike. These include Opalis, which can be used to automate workflows, and Operations Manager, which can provide problem detection and resolution. All play nice with VMM and Hyper-V, extending the management capabilities from the host to the virtual machine and even to application sets within the VM, assuming that everything has been configured properly. This might include restarting a Web service when a problem is detected — even if that service is an Apache process running on a Red Hat Enterprise Linux VM.

The tight integration with PowerShell also makes Hyper-V very easy to script. This process is further accelerated by the PowerShell integration in VMM, which in many cases can spit out the PowerShell code for a GUI operation. This means you can create a new virtual machine and, on the last dialog box in the wizard, click a button that opens a text editor containing the PowerShell equivalent of your selections. You can then modify that code as you wish, which is a very good thing because the GUI offers no way to build several VMs from a single configuration or template.

To build our test virtual machines, I modified the PowerShell code generated from a simple clone action, changed the VM name, and then ran the script again to build the next VM. This process is simplified by a PowerShell button right on the console that launches a PowerShell prompt.
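
The real scripts were VMM PowerShell produced by the console; the Python sketch below only illustrates the pattern we used — capture the generated script once, swap the VM name for a placeholder, then stamp out one copy per machine. The `__VMNAME__` token and the `New-VM` parameters shown are our own placeholders for illustration, not literal VMM output.

```python
# The console-generated script was VMM PowerShell; this sketch illustrates
# the batch pattern: capture the generated script once, turn the VM name
# into a placeholder, then produce one ready-to-run copy per VM.
# "__VMNAME__" is our own placeholder convention, not anything VMM emits.

GENERATED_SCRIPT = """\
# (stand-in for the PowerShell the VMM wizard produced)
New-VM -Name "__VMNAME__" -VMHost $vmhost -Path $path
"""

def scripts_for(vm_names):
    """Return one script per VM, with the name substituted in."""
    return {name: GENERATED_SCRIPT.replace("__VMNAME__", name)
            for name in vm_names}

batch = scripts_for(["web01", "web02", "db01"])
```

Each resulting script can then be run from the PowerShell prompt that the console button launches.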

There are also provisions in VMM that allow for automated configuration of cloned instances, not unlike VMware’s guest customization specifications. However, Microsoft’s auto-configuration is limited to Windows guests and isn’t as malleable as VMware’s tools.

Live migrations of Linux and Windows VMs proved snappy and resulted in no significant processing or networking performance problems during the operation. Flood pings from servers during migrations showed no packet loss at 1,000 packets per second; there were delays in packet delivery during the switch, but nothing out of the ordinary.

The high availability and load balancing capabilities, delivered through VMM, Operations Manager, and Cluster Services, also worked as advertised. “PRO Tips” are alerts that pop up when certain thresholds are met or exceeded. The solution can then automatically act on these notifications and live migrate VMs to other hosts, or simply recommend that actions be taken.

It takes a while to dig into Operations Manager to set thresholds, and the whole process is significantly less straightforward than using VMware’s Distributed Resource Scheduler, for example, but it is functional. On the DR front, when a host blade was yanked out of the rack, the VMs that had been running on that blade began booting on another host very quickly.

Hyper-V does not support live storage migrations, but VMM can perform what Microsoft calls a “quick storage migration,” pausing the VM while switching back-end storage subsystems. Hyper-V also lacks support for fault-tolerant virtual machines, whereby primary and secondary copies of a VM run in parallel across two hosts and offer lossless failover capabilities.

Microsoft is making a big push to position VMM and company as a central management solution not only for Hyper-V, but also for VMware vSphere and eventually Citrix XenServer. Ostensibly, this is a move to allow customers already running these virtualization infrastructures to bring Hyper-V into their networks, perhaps to eventually transition to Hyper-V alone or at least allow central management. VMM’s hooks don’t work with vCenter 4.1 yet, but will soon.

Finally, Microsoft also gave me a brief preview of VMM 2012, which appears significantly different from the current iteration. It focuses on private cloud computing, allowing self-service VM allocation, very granular user rights assignments, and the construction of clouds as collections of server, network, and disk resources that can be used to deploy and manage complete application sets. It’s not fully baked yet, but looks promising.

Virtual Machine Manager’s Overview section is a handy way to get a quick and accessible view of the whole farm.

Hyper-V R2 performance
We tested Hyper-V performance with Windows and Linux VMs, both with and without other VM loads on the physical server. A significant caveat: Hyper-V does not yet support Red Hat Enterprise Linux 6, so all of these tests were conducted on RHEL 5.5 with Microsoft’s Linux Integration Services for Hyper-V tools installed. Thus, while the numbers are broadly similar to the other vendors’, this discrepancy prevents direct performance comparisons.

That said, the Hyper-V performance tests showed impressive improvement of Linux guests versus the last time I took a close look. In thread-per-thread comparisons between the physical server and a VM, both running Red Hat Enterprise Linux 5.5, the VM ran with a 3 to 4 percent overhead depending on the test, which is quite acceptable. We did experience two kernel panic events related to the Microsoft driver code (the Linux Integration Services components) on the RHEL 5.5 VMs, but they were sporadic and not repeatable. A heavy-duty, 16-hour run of three four-vCPU RHEL VMs did not result in another problem.

Microsoft Hyper-V performed quite well in the benchmarks overall, posting very competitive numbers in both the Linux and Windows tests. One exception was the crypto benchmark, which tested the cryptography speed of the virtual machine. While the VMware and Citrix solutions posted numbers around 1.6GBps in these tests, Hyper-V consistently hovered around 500MBps. This significant lag in AES performance stems from the fact that, unlike VMware and Citrix, Hyper-V doesn’t expose the AES-NI instructions in the Intel Westmere CPU to the VMs.
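
If you want to check what your hypervisor exposes, one quick test from inside a Linux guest is to look for the "aes" flag in /proc/cpuinfo. The sketch below is our own helper, not part of any benchmark suite; it parses cpuinfo-style text, and on the Hyper-V guests in this test it would have come back False, matching the crypto benchmark gap.

```python
# Our own helper (not part of any benchmark suite): check whether a Linux
# guest sees the AES-NI instruction set by looking for the "aes" flag in
# /proc/cpuinfo-style text.

def exposes_aes_ni(cpuinfo_text):
    """True if any 'flags' line lists the 'aes' CPU feature."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags") and ":" in line:
            flags = line.split(":", 1)[1].split()
            if "aes" in flags:
                return True
    return False

# Example usage against the real file on a Linux guest:
# with open("/proc/cpuinfo") as f:
#     print(exposes_aes_ni(f.read()))
```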

The intercore bandwidth and latency tests came in very close to VMware’s, and the memory test results were actually higher than the rest of the pack’s. All of this shows that Microsoft has made definite strides in VM performance, especially on the Linux side.

Hyper-V has come a long way in providing enterprise-class features and delivering enterprise-class performance. However, one aspect of Hyper-V that cannot be overlooked is the reliance on a cast of supporting players. Once you add up all the Operations Manager VMs, the SQL Server VMs supporting the Virtual Machine Manager VMs, and the Configuration Manager VMs, you find yourself running eight VMs just to support your other VMs. On top of that, some of these management VMs consume vast amounts of RAM for what they’re doing.

As an example, with only a dozen or so VMs in play across three active hosts, the SQL Server VM was consuming 6GB of RAM and would take more if allowed. To a degree, you might consider that you’ll be dedicating the equivalent of an entire host server’s resources to the management VMs for a Hyper-V implementation of any size — and I’m not even counting the domain controllers and other general-purpose servers that are required. As a contrast, VMware manages to incorporate all of these functions in a single vCenter Server instance, possibly with an accompanying SQL Server if the implementation is large enough.

So has Hyper-V finally reached critical mass for corporate deployment? Yes. Is it more confusing and involved to implement, manage, and operate than other solutions? Yes. Is it significantly less expensive than VMware? Absolutely. If you’re a Windows-only house and you buy into Microsoft’s Windows Server 2008 R2 Datacenter edition, which allows for unlimited VMs per host, you can save a ton of money over VMware. It’ll also require more elbow grease to run and offer fewer high-end features than VMware, but at this stage in the game, that may finally be an acceptable trade-off for the right infrastructure.

Read the main article and the other reviews.