Once you've built a solid network foundation, the next challenge is to use it as effectively as possible. With NFS, these best practices can help.

Last week I wrote about some of the basics of designing a network for use with IP storage. While building in an appropriate level of redundancy and properly configuring VLANs and Spanning Tree are critical, implementing those design fundamentals barely begins to scratch the surface of the work necessary to build an exemplary IP storage infrastructure.

After you've set the foundation, the next step is to configure your servers and storage to make use of it. That typically involves determining how your servers and storage will leverage the redundant network you've deployed, both in terms of offering redundancy and additional throughput. While you certainly can use a single NIC on each server and a single gigabit interface on the storage device, doing so will dramatically limit throughput potential and won't leverage the redundancy offered by a dual-switch architecture. Clearly, you want at least two dedicated storage network interfaces on each server that needs to attach to the storage, and at least two, if not more, interfaces on the storage itself.

That seems simple enough, but there are many details to consider when configuring storage path redundancy and multipath throughput. To make matters worse, the best approach varies wildly depending on the storage protocol, the specific storage hardware, and even the virtualization stack or server OS you're using.

To start, it's important to contrast the two most popular IP-based storage protocols: iSCSI and NFS. Though both allow you to access shared storage across a standards-based IP network, they are dramatically different and require completely divergent approaches to offering network redundancy and optimizing throughput.
NFS

NFS (Network File System) is a file-level storage protocol that has gained popularity for its relative simplicity and ease of use. A wide array of entry-level NAS devices and enterprise-class SAN/NAS devices offer NFS capability, and they are commonly used in concert with VMware's vSphere hypervisor, which supports the protocol natively. (Microsoft Hyper-V currently does not, but support is planned for the upcoming 3.0 release.)

Since the NFS protocol doesn't have built-in redundancy or multipathing capabilities, the best way to offer redundancy and high throughput usually lies in the same NIC teaming technology you might implement to give a generic server redundancy and additional throughput. However, NIC teaming (also called bonding or channeling) is a complicated and often misunderstood technology, and configuring your servers and storage well requires a solid grasp of how teaming works.

In the most basic sense, NIC teaming is a methodology for aggregating the bandwidth of two (or more) network interfaces into a single logical interface that offers redundancy and greater aggregate throughput. However, there are many different types of NIC teaming; some offer only redundancy with no throughput advantage, while others provide varying degrees of throughput optimization along with redundancy.

The first thing to know is that even in the best of circumstances, bonding two 1Gbps Ethernet links does not simply give you 2Gbps of bandwidth to play with. In fact, no matter what kind of teaming methodology or load-balancing algorithm you use, a single connection between two network devices will never transfer at a rate higher than the bandwidth offered by a single link, even if a large number of gigabit NICs are bonded together.
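To make the teaming idea concrete, here is roughly what an LACP (802.3ad) bond might look like on a modern Linux host using netplan. This is a minimal sketch; the interface names, addresses, and hash policy are assumptions for illustration, not details from a specific deployment:

```yaml
# /etc/netplan/01-storage-bond.yaml (illustrative sketch)
network:
  version: 2
  ethernets:
    eno1: {}   # first dedicated storage NIC (assumed name)
    eno2: {}   # second dedicated storage NIC (assumed name)
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad                   # dynamic (LACP) teaming
        transmit-hash-policy: layer2+3  # hash on MAC + IP addresses
      addresses: [10.0.0.51/24]         # storage VLAN address (assumed)
```

Note that the hash policy only chooses which team member carries a given conversation; a single NFS mount's traffic still rides one 1Gbps link.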
In the best of circumstances, NIC teaming can only load-balance multiple connections or network conversations onto different members of the team so that they're all utilized simultaneously; it cannot spread the bandwidth used by a single connection over multiple NICs, as is possible with serial links like T1s. This sometimes comes as a surprise even to the most seasoned network engineers and is the first hurdle in determining the best way to use NIC teaming to your advantage.

The next important thing to know is that you typically cannot load-balance traffic across two physically distinct network switches. The kinds of teaming technologies capable of load-balancing traffic across multiple team members require the switch to participate in the process, and that participation typically can't span more than one switch. In the scenario I laid out last week, in which two physically separate switches were used, you could either attach the two members of a NIC team to opposite switches to gain switch redundancy, or attach both members to a single switch to gain load balancing (forgoing switch redundancy), but not both.

Fortunately, there are exceptions, but they generally require switches that can be "stacked" into a single logical switch with a single active control plane, or switches designed to support building port channels across multiple distinct switches (check out Cisco's vPC and VSS for examples). The bottom line is that if you want to load-balance NFS traffic across multiple NICs on your servers and storage array while also offering switching redundancy, you need a stackable switch. (Cisco 2960S and 3750-Series switches are common in small-business applications, but many other networking vendors make switches that fit the bill.) Otherwise, you can have only one or the other. Assuming you've built your network using a pair of stacked switches, you can have the best of both worlds.
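For example, on a stacked pair of Cisco 3750-series switches, a cross-stack port channel can bind one port on each physical stack member into a single LACP team. The interface numbers, channel number, and VLAN below are illustrative assumptions, not a prescribed configuration:

```
! Illustrative sketch -- port and channel numbers are assumed
port-channel load-balance src-dst-ip    ! hash on source + destination IP

interface GigabitEthernet1/0/1          ! port on stack member 1
 channel-group 10 mode active           ! active LACP negotiation
interface GigabitEthernet2/0/1          ! port on stack member 2
 channel-group 10 mode active

interface Port-channel10
 switchport mode access
 switchport access vlan 100             ! dedicated storage VLAN (assumed)
```

Because the stack presents a single control plane, the channel survives the loss of either physical switch while still load-balancing across both links.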
However, simply constructing port channels (teams) on your switch ports and configuring the servers and storage to attach to them isn't the end of the process. As stated previously, even dynamic teaming (based on the 802.3ad/LACP standard) can't do more than load-balance connections onto different team members. You need to make sure you've configured your storage hardware and your switches' load-balancing algorithm to distribute traffic in the most advantageous way.

Though the exact approach varies depending on what kind of storage hardware you use, this typically involves the "source and destination IP address" team load-balancing algorithm combined with IP address aliases on the storage hardware to ensure the best distribution of traffic across network links. With this algorithm, each NIC team, whether on the servers, storage, or switches, hashes the source and destination IP addresses of each packet it sees and uses that hash to determine which link carries the traffic. This is an oversimplification, but in a two-member NIC team, you can imagine source/destination IP combinations that "add up to" an odd number being pushed down link No. 1 while those that result in an even number go down link No. 2.

If you engineer your NFS traffic so that connections from a single server are distributed across two or more IP addresses on the storage hardware (typically on a per-volume basis), you can ensure that traffic flows over different NICs between the server and the switches, and between the switches and the storage array. And if one of the two switches fails, the teams on the server and storage array simply remove the failed links from the team and send all the traffic down the remaining links.

Putting it all together

As simple as NFS and other kinds of IP storage are to get working, they can require real attention to detail to get working well.
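Returning to the IP-hash load balancing described above: a small Python sketch shows why spreading volumes across multiple IP aliases on the storage array engages multiple links. The XOR-and-modulo hash here is a simplified stand-in (real implementations use vendor-specific hashes), and the addresses are made up for illustration, but the key property is the same: a given source/destination pair always lands on the same link, so only distinct IP pairs can use distinct links.

```python
import ipaddress

def select_link(src_ip: str, dst_ip: str, num_links: int) -> int:
    """Pick a team member by hashing source and destination IPs.

    Simplified illustration of a "src-dst-ip" algorithm: the same
    address pair always hashes to the same link, so a single NFS
    connection can never use more than one link in the team.
    """
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_links

# One server talking to two IP aliases on the same storage array
# (one alias per volume); the two flows land on different links:
server = "10.0.0.51"
print(select_link(server, "10.0.0.10", 2))  # volume A via alias .10 -> 1
print(select_link(server, "10.0.0.11", 2))  # volume B via alias .11 -> 0
```

Mounting each volume against a different alias is what "engineers" the traffic: it gives the hash distinct inputs, so both links in a two-member team carry load.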
This is one of the reasons why IP storage has made huge inroads in the small-business market, where administrators may not require the additional performance and redundancy these kinds of optimizations offer, while the "more complicated" Fibre Channel retains a solid foothold in large enterprises. In those environments, the wide range of optimizations and additional configuration required to make IP storage stack up against Fibre Channel can often appear to erase the ease-of-use benefits that IP storage brings to the table.

Stay tuned for next week, when I delve into some of the optimizations you can use to make iSCSI-based storage hum.

This article, "Get your storage network right for NFS," originally appeared at InfoWorld.com. Read more of Matt Prigge's Information Overload blog and follow the latest developments in storage at InfoWorld.com.