Matt Prigge
Contributing Editor

The good and bad of multiprotocol storage

analysis
Aug 22, 2011 | 6 min read

Primary storage that supports a wide range of protocols can be a lifesaver or a constant source of irritation

Not long ago, most primary storage platforms were likely to support only a single storage protocol, generally either iSCSI or Fibre Channel. Now, however, the increasing popularity of deduplication, the wide availability of 10Gbps Ethernet, and the lure of low-latency network convergence made possible by Fibre Channel over Ethernet have given the industry potent motivation to offer as much choice as it can.

As a result, most major enterprise storage platforms today can support all of these block-level storage protocols, as well as file-based NAS functionality. Generally speaking, the more flexibility a storage platform can offer, the more likely it will be to survive the dramatic changes that are sweeping through the data center today — from the explosion in corporate data to the drive toward high-density virtualization and private cloud infrastructures that depend upon converged networking.

However good this flexibility may be, the ability to mix and match storage protocols comes with potential liabilities, including added complexity, compatibility problems, and training challenges.

The joy of multiprotocol platforms

Most modern storage platforms ship with built-in connectivity — either Ethernet or Fibre Channel — and then allow you to add interface cards to support FCoE, FC, or iSCSI. This gives you the flexibility to upgrade an existing iSCSI or FC-only solution and maintain the same infrastructure, while also putting you in a good position to move toward a different protocol as your needs (and technology) evolve.

That benefit may not be immediately apparent, especially if you’re happy with the storage fabric you already have. For example, you may opt to replace an older FC-only SAN with a newer multiprotocol SAN, but deploy only the traditional FC functionality for now. If you decide to deploy a new converged data center network in the next year or two, you may only be an expansion card away from adding 10Gbps Ethernet and FCoE. Likewise, growing businesses that start out by deploying iSCSI-only storage with 1Gbps Ethernet interfaces can rest assured they will be able to upgrade to 10Gbps iSCSI or FCoE in the future.

Along with the choice of block-level protocols, many storage vendors are starting to offer fully integrated NAS functionality that draws file-based NFS/CIFS file sharing into the same management framework, usually throwing in data deduplication for good measure.

All of this flexibility gives you the ability to move from one technology to another without ripping and replacing, while letting you mix and match technologies to make the most efficient use of your resources. For example, a collection of low-traffic physical servers might be well served by two dirt-cheap 1Gbps Ethernet switch ports using iSCSI (sometimes a welcome alternative to expensive 10Gbps Ethernet or Fibre Channel ports). Likewise, in some situations, applications may be best served by a file-based protocol like NFS; Oracle’s built-in support for dNFS is a great example of this.
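To give a flavor of how simple the file-based route can be, Oracle's dNFS client is driven by a small oranfstab file along these lines (the server name, addresses, and export paths here are hypothetical; a sketch, not a vendor-blessed configuration):

```
# /etc/oranfstab -- hypothetical NAS filer serving Oracle data files over dNFS
server: nas01
local: 192.168.10.5
path: 192.168.10.50
export: /vol/oradata mount: /u02/oradata
```

The database also needs the dNFS ODM library enabled (on older releases, via the `dnfs_on` target in `ins_rdbms.mk`); once it is, database I/O bypasses the operating system's NFS stack entirely.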

The downside of diversity

Flexibility can have unintended consequences; take, for instance, the uncontrolled virtual machine sprawl that dogs many virtualization implementations. Just because you have the ability to support FC, FCoE, iSCSI, CIFS, and NFS all at the same time doesn’t mean you should. For one thing, you’re asking for quite a range of expertise from your admins — some of whom may already be forced to learn new skills as a result of infrastructure convergence. Moreover, just because a piece of hardware supports a given protocol doesn’t mean it does so well.

As any storage admin will tell you, it’s not always easy to get one type of storage fabric to perform to its full potential. The tweaking process almost always involves incredibly specific adjustments to storage hardware, storage software, and operating systems. For example, the guide for integrating VMware vSphere with a legacy FC-only SAN is a good 12 pages long, with recommended MPIO tweaks, timeouts, and firmware combinations — you name it. Throw in multiple protocols, and things may rapidly become an order of magnitude more complex.
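To give a sense of what those MPIO tweaks look like in practice, switching a single LUN to round-robin path selection on an ESXi 5.x host involves per-device CLI surgery along these lines (the device identifier below is hypothetical, and the IOPS threshold shown is a common vendor recommendation, not a universal best practice):

```
# Show the device's current path selection policy
esxcli storage nmp device list --device naa.600508b4000971fa0000a00001660000

# Switch the device to round-robin multipathing
esxcli storage nmp device set --device naa.600508b4000971fa0000a00001660000 --psp VMW_PSP_RR

# Lower the IOPS-per-path switching threshold from the default of 1000
esxcli storage nmp psp roundrobin deviceconfig set --device naa.600508b4000971fa0000a00001660000 --type iops --iops 1
```

Multiply that by every LUN, every host, and every protocol in the environment, and the appeal of keeping the protocol mix small becomes obvious.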

Compatibility is also an increasingly common problem. A simple matrix that shows operating system support for a given storage platform won’t cut it. A new axis must show the protocols supported, the various kinds of multipathing allowed on those OSes, which DCBx switches they might support, and so on. As a result, it’s much easier to assume that a given configuration will work (or perform) well when, in fact, it won’t. Carefully studying these increasingly complex support matrices becomes more important than it has ever been.

Worse, whether you can use the neat backup and disaster recovery features present in today’s storage may depend on your choice of protocols. For example, many of the application-specific (Microsoft Exchange, SQL, Oracle) plug-ins that allow primary storage to take application-consistent snapshots have very specific protocol requirements. Supporting such add-ons becomes even more complicated when server virtualization is in the mix.

While most multiprotocol storage platforms can support a mix of block-level protocols without too much trouble, things can get a bit sticky when the vendor has stapled file-level protocols on top. Most traditional SAN vendors approached this issue in the past by developing stand-alone NAS gateways that would work well with their SAN platforms.

Today, the success of fully integrated solutions (for which NetApp is best known) has prompted these vendors to more tightly integrate their NAS products into the SAN — sometimes natively, but usually through heavy management interface integration. This isn’t always bad, but buyers expecting a fully integrated solution may be surprised to find that they actually bought a SAN with a few Windows Storage Server boxes stuffed into the rack above it.

Flexibility vs. complexity

Multiprotocol primary storage can be a real asset to any organization faced with meteoric data growth and changing data center technology. In theory, this multiplicity allows easy transition from traditional dedicated storage fabrics to newer converged fabrics (and brings much needed deduplication functionality to unstructured data stores). Without research and careful design, however, these capabilities can create major management headaches that could have been avoided. Striking the balance between using the right tool for the job and keeping it simple is critical.

This article, “The good and bad of multiprotocol storage,” originally appeared at InfoWorld.com. Read more of Matt Prigge’s Information Overload blog and follow the latest developments in storage at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.