Matt Prigge
Contributing Editor

How to avoid a forklift storage upgrade

analysis
Apr 9, 2012 | 5 mins

Despite your best attempts to look ahead, primary storage always seems to go obsolete before its time

Due to the enormous cost of selecting and migrating to a completely new primary storage infrastructure, most organizations try to wring every last drop of functionality out of their storage resources. That’s one reason why most storage deployments are viewed as five-year investments.

Yet with corporate data growing at geometric rates, the notion of deploying a platform that can scale out for such a long time — not to mention the idea you can plan that far into the future accurately — is becoming a joke. Many “long-term” storage investments have hit the wall much earlier than anticipated, incurring uncomfortable trips to the corner office. Hey, didn’t you say those big, expensive hunks of hardware were going to last?
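To see why geometric growth breaks long planning horizons, here is a minimal capacity-planning sketch. The function and all figures are illustrative assumptions, not sizing guidance: it simply counts how many years a purchased capacity survives under compound annual growth.

```python
# Hypothetical sketch: how long does a storage purchase last under
# compound data growth? All numbers are illustrative assumptions.

def years_until_full(current_tb, capacity_tb, annual_growth):
    """Count whole years of compound growth that still fit in capacity_tb."""
    years = 0
    data = current_tb
    while data * (1 + annual_growth) <= capacity_tb:
        data *= 1 + annual_growth
        years += 1
    return years

# A platform sized for roughly five years at an assumed 20% annual growth:
print(years_until_full(100, 250, 0.20))  # -> 5
# If growth actually runs at 40%, the same purchase hits the wall early:
print(years_until_full(100, 250, 0.40))  # -> 2
```

The second call is the uncomfortable trip to the corner office in miniature: a modest misestimate of the growth rate cuts the platform's useful life by more than half.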

Face it — upgrading your storage infrastructure is going to happen more often than you’d like. But at least server virtualization has dramatically decreased the pain involved in making a midstream migration from one storage platform to another or running more than one system in parallel. The truth is that predicting your future needs is more difficult than ever. In fact, you’ll probably be wrong — and that’s OK.

The old approach

Most enterprise SANs are built around a controller and disk-shelf architecture. Typically, the controller resources are sized based on the total amount of host I/O required on the front end and the amount of disk resources addressed on the back end.

In these types of platforms, an unexpected spike in the number of disk resources required might mean replacing the controllers while continuing to leverage the same disk shelves. Fortunately, most vendors that use this kind of architecture make controller upgrades relatively easy — sometimes not even incurring downtime.

There are two major problems with this kind of approach. First, if you upgrade the controllers during year three of a five-year investment, you’ve tied new controllers to disk resources that are already more than halfway through their expected lifetime — effectively making those brand-new controllers a rather expensive Band-Aid.

Second, in response to rapidly shifting storage requirements, storage technology itself is changing in massive leaps and bounds, with SAS quickly replacing Fibre Channel as a back-end disk architecture and more advanced software features that leverage solid-state drives becoming commonplace. It’s almost a given that three years after you buy a storage platform, the latest advances in disk and controller technology will result in the next generation bearing little resemblance to your previous implementation. By continuing to invest in a three-year-old architecture, you can’t take advantage of those new advances.

If this approach has serious drawbacks, why do it? Because, in the past, the idea of migrating to an entirely new storage platform usually represented a massive undertaking. Not only would administrators need to learn the ins and outs of the new platform, but they’d also have to deal with the often manual process of migrating systems and data from the old platform to the new — requiring late nights and significant downtime.

In many cases, upgrading previous-generation technology simply to avoid the hassle of migrating was a key factor in deciding whether to extend the life of an existing platform.

The emerging reality

That poor trade-off need not persist in environments where server virtualization has been aggressively employed. In such environments, the data migration process has become almost easy. Once the new storage has been set up in parallel with the old, it’s just a matter of a few mouse clicks to start moving virtual machines from one platform to the other, often without introducing any perceptible service interruption.

This new ease of shuffling data opens the door to new storage planning concepts. Instead of planning for and buying a primary storage platform that will take the organization through the next five or six years, many organizations find that planning to run a consistent rotation of two different systems in parallel is more desirable — and offers the ability to expand while retaining the value of the initial long-term investment.

In this approach, storage purchases are still made with the full intention that they last five years; the difference is that they aren’t intended to scale through five years of growth. Instead, somewhere around year three of that window, additional investment into the first platform stops, and a second, smaller deployment of a current-generation primary storage platform is made in parallel. That system then absorbs organic growth through the next two years, often fielding more demanding applications that can take advantage of improvements made between the generations.

Then, as the first platform reaches its end of life, the second is expanded to take over completely and the first system is retired. A year later, when the second platform reaches its three-year midlife, the cycle continues: a new storage platform is introduced to absorb organic growth through its end of life, and so on.
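The rotation described above can be sketched as a simple timeline model. This is an illustrative toy, assuming a five-year service life and a successor purchase at year three of each platform's life, with new growth always landing on the newest system; the names and constants are assumptions for the sketch, not a recommendation.

```python
# Toy model of the staggered two-platform rotation: each platform lives
# five years, its successor is bought in its third year, and organic
# growth moves to whichever platform is newest. Illustrative only.

LIFESPAN = 5    # years each platform stays in service
OVERLAP_AT = 3  # age at which a platform's successor is purchased

def rotation(years):
    """Yield (year, purchased_ids, retired_ids, growth_target_id) per year."""
    platforms = [(1, 0)]  # list of (platform_id, purchase_year)
    next_id = 2
    for year in range(years):
        bought, retired = [], []
        if year - platforms[-1][1] == OVERLAP_AT:  # stand up a successor
            platforms.append((next_id, year))
            bought.append(next_id)
            next_id += 1
        for pid, purchased in list(platforms):     # retire end-of-life systems
            if year - purchased >= LIFESPAN:
                platforms.remove((pid, purchased))
                retired.append(pid)
        yield year, bought, retired, platforms[-1][0]

for year, bought, retired, growth in rotation(9):
    print(f"year {year}: bought {bought}, retired {retired}, "
          f"growth -> platform {growth}")
```

Running it shows the cadence from the article: platform 2 arrives in year 3 and starts absorbing growth, platform 1 retires at year 5, and platform 3 arrives a year later when platform 2 hits its own three-year midlife.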

This kind of consistent change would have been inconceivable without server virtualization or an (often prohibitively expensive) storage virtualization layer, but it is increasingly available even to the smallest of enterprises. It shortens the storage planning horizon and helps avoid the overbuying that characterizes systems you hope and pray will last long into the future.

If your storage environment is growing faster than you can plan for, don’t resign yourself to buying too much storage as a hedge. Instead, consider adopting a more flexible approach to storage planning that leverages gains made in virtualization technology. And if you find that not enough of your infrastructure is virtualized, consider this yet another incentive to get there.

This article, “How to avoid a forklift storage upgrade,” originally appeared at InfoWorld.com. Read more of Matt Prigge’s Information Overload blog and follow the latest developments in storage at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.