Matt Prigge
Contributing Editor

Fear of virtualization in 2012? Believe it

analysis
Dec 3, 2012 | 5 min read

As commonplace as server virtualization has become, isolated pockets of fear and uncertainty still exist around the technology

In my post last week, I reviewed five technologies I’m thankful for. As I mulled over what to include, I strongly considered not putting virtualization on the list. I’m thankful for it, but it’s become such an integral part of delivering x86-based services in data centers of all shapes and sizes that I wasn’t certain it was worth a mention. From my perspective, it’s sort of like making a point to be thankful for intermittent windshield wipers or sliced bread — both are great, but they’ve become so commonplace that few of us really notice their presence.

Of course, realizing that not everyone shares your perspective (however mainstream you might think it to be) is one of the fun parts of being human. Only a few days after that story ran, I found myself on a conference call with a client and the hardware team from one of its primary software vendors. The goal of the conference call was to sort through a pair of technology bids the vendor had submitted for a pending upgrade of a mission-critical business application. One of the bids would see the application upgraded onto new, nonvirtualized hardware, while the other included virtualizing pretty much everything except for the back-end database layer.

At first, this was refreshing — this vendor had previously been very resistant to supporting virtualization. Outside of the servers supporting this application stack and a few high-utilization servers that’d be difficult to virtualize, the client’s data center is almost entirely virtualized. Being able to virtualize the servers for the upgraded application in the same manner as the rest of the infrastructure would have been an obvious win.

Sadly, whatever optimism I might have had before the call started was fleeting.

The physical technology bid included eight physical servers, consisting of a typical series of redundant load-balancing, Web, application, and database tiers. However, the virtualized design consisted of no fewer than 11 physical servers: nine high-end virtualization hosts and the same pair of database servers from the physical design. The virtualized design had more than double the computational capacity and dwarfed the cost of the nonvirtualized design.

What followed was an almost farcical discussion of the consolidation capabilities of virtualization versus the overhead incurred by virtualizing a workload. The vendor’s virtualization design effectively dedicated each virtual host to running a single production workload, piling on a massive amount of physical host resources that (1) would never be allocated to those workloads and (2) nearly doubled the hardware footprint, all in an attempt to guarantee that necessary resources would always be available. To say that this demonstrated a truly staggering ignorance of virtualization’s capabilities would be an understatement.

Early doubts

I’ve encountered this distrust of virtualization before. In the earlier years of virtualization (say, five or six years ago), it was common to see traditional hardware specifications totaled up, increased by some kind of margin to account for virtualization overhead, and translated directly into the specs for the virtual hosts. Anyone who’s run a virtualized infrastructure knows that this approach to infrastructure sizing almost always results in a massive amount of unused virtualization capacity — the workloads you’re planning for are extremely unlikely to fully use their resource allocations at the exact same time (easily demonstrated by closely monitoring the physical workloads before virtualizing them).
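The flaw in that sizing approach comes down to simple arithmetic: adding up each server's individual peak assumes every workload hits its peak at the same moment, which monitoring data almost never shows. A rough sketch, using entirely made-up utilization numbers for illustration:

```python
# Hypothetical sketch: why summing per-server peaks overstates the capacity
# a virtualization host actually needs. Peaks rarely line up in time, so the
# peak of the combined load is usually well below the sum of individual peaks.
# All figures below are invented for illustration, not measured data.

# Sampled CPU usage (in GHz) for three physical workloads over one day
web = [2, 2, 3, 8, 8, 3, 2, 2]   # busy mid-morning
app = [3, 3, 4, 4, 7, 7, 3, 3]   # busy early afternoon
db  = [6, 2, 2, 2, 2, 2, 2, 6]   # busy during overnight batch windows

# Naive sizing: total each server's individual peak, then pad for "overhead"
sum_of_peaks = max(web) + max(app) + max(db)   # 8 + 7 + 6 = 21 GHz
naive_spec = sum_of_peaks * 1.25               # 26.25 GHz of host CPU

# Measured sizing: look at what the combined load actually peaks at
combined = [w + a + d for w, a, d in zip(web, app, db)]
real_peak = max(combined)                      # 17 GHz

print(f"naive spec: {naive_spec:.2f} GHz, observed peak: {real_peak} GHz")
```

Even in this toy example, the naive spec buys roughly 50 percent more capacity than the combined workloads ever use at once; with dozens of servers whose busy periods are spread across the day, the gap grows much larger.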

When virtualization was young and not well understood, anyone could be forgiven for not believing that you could consolidate large numbers of physical servers onto a much smaller number of virtualization hosts. But today, it’s very difficult to understand how some of the largest software vendors remain ignorant of that simple truth in the face of overwhelming evidence to the contrary.

Vendors vexed by virtualization

To be honest, I have no idea what it will take for vendors to understand what virtualization can do for their clients, not to mention for their own ability to deliver and support their software. Although I don’t know for sure, I’d bet a sizable sum of money that this company and many others like it use virtualization heavily in the course of developing their products, and that somewhere in their organizations, people understand what it’s all about. Why that knowledge doesn’t reach the folks designing and selling customer environments is a mystery to me.

A pessimist might suggest it’s simply a way to continue selling hardware to clients who don’t need it. That may be true to some extent, but I’d be surprised if it’s the whole reason. With these kinds of applications, the cost of the software itself and the associated third-party licensing far exceeds the cost of the hardware. Moreover, a customer left with a massively overdesigned and underutilized infrastructure will typically feel very nearly as ill-treated as one plagued by performance problems resulting from lowballed hardware specs.

Although I don’t know what will make these software vendors realize that virtualization can be as good for them as it is for their customers, I do know it’s important to be aware of this attitude among vendors when you’re selecting new software.

This particular client is trapped: the vendor effectively refuses to support the installation unless the client follows its recommendations, a risk the client can’t take given the criticality of the application. If you find yourself in this situation, there isn’t a whole lot you can do other than make your displeasure known, share it with other customers where you can, and consider replacing the product with one from a vendor that’s more current in its thinking.

This article, “Fear of virtualization in 2012? Believe it,” originally appeared at InfoWorld.com. Read more of Matt Prigge’s Information Overload blog and follow the latest developments in storage at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.