By Paul Venezia, Senior Contributing Editor

Old NFS is the new darling in virtualization

Analysis | Oct 14, 2013

The resurgence of NFS in virtual environments shows that some good ideas do not fade away

Some technologies work so well that they’ve become immortal — not, perhaps, because they are perfect, but instead because newer technologies have not improved on their advantages enough to unseat them, even if they may make inroads.

One example of this would be NIS. Though there are a host of newer network authentication mechanisms available, NIS is still ubiquitous. Another would be IPv4. Even though IPv6 is far more extensible and modern, most of us are still working with IPv4 and will be for a long time to come.


Then there’s NFS, which is turning 30 next year. NFS’s usefulness as a distributed file system has carried it from the early Unix workstation days right through to the virtualization era, with only a few changes made along the way. The most common version in use today, NFSv3, is 18 years old, and it’s still widely used the world over.

It wasn’t always that way. For a long time, NFS lived solely in Unix land, serving up files to Solaris, Linux, and FreeBSD servers but eschewed by many elsewhere as too old and insecure to be of much use. Even the advent of virtualization initially relegated NFS to little more than a fallback option: iSCSI was on the rise, Fibre Channel was the go-to medium for fast network storage access, and NFS was just sort of there. But with the adoption of 10G networking and the subsequent drop in the price of 10G ports, NFS has seen a resurgence, specifically in the virtualization space.

Sure, there are still millions of Unix boxes using NFS, but now there are also millions of virtualized Windows servers that are running from NFS storage through the hypervisor. More and more storage vendors are recommending NFS over iSCSI for virtualization deployments for a wide variety of reasons.

For one, NFS is far less cumbersome to use and manage than iSCSI. You don’t have to cut LUNs for each set of virtualization hosts (or in the case of some hypervisors, cut LUNs for each VM); instead, you can simply export a file system on a dedicated, closed storage network, and any host can play in the game. Sure, you won’t have CHAP authentication, but in many cases, that’s not necessary. In many data centers, authentication for iSCSI exists simply to prevent problems with hosts accessing LUNs they shouldn’t while scanning.
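To make this concrete, here’s a minimal sketch of such an export on a generic Linux NFS server; the paths, addresses, export flags, and datastore name are all hypothetical, not a recommendation:

```sh
# /etc/exports on the storage host -- expose the VM datastore only to
# the closed 10.10.10.0/24 storage network (all addresses hypothetical)
/export/vmstore  10.10.10.0/24(rw,sync,no_root_squash)
```

Any host on that closed network can then mount the share. On an ESXi 5.x host, for instance, one command turns the export into a datastore:

```sh
# Mount the export as an NFS datastore named "vmstore"
esxcli storage nfs add --host=10.10.10.5 --share=/export/vmstore --volume-name=vmstore
```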

Presenting storage through iSCSI rather than as a file system places the onus of managing simultaneous host access on the hosts themselves. All locking and write management has to be handled outside the storage array, meaning a single host going pear-shaped can have catastrophic effects on the whole volume.

On a few occasions, I’ve lost an iSCSI LUN completely when a war between several ESXi hosts resulted in a horribly corrupted VMFS volume on the LUN. I had to destroy the volume and re-create it from backups. With NFS, all of the tasks at the file system layer are handled by the array itself, leading to a more cohesive environment for multiple systems to access — as in the case of virtualization.

In addition, NFS-exported volumes are more easily managed as a whole. If you need to adjust files on an NFS volume, you mount it from a workstation or even from a VM that’s running off that volume, and go to town. You can back up an entire NFS volume from a Linux box with only a few commands. You don’t need to worry about stepping on any toes while doing so. But if you inadvertently mount an iSCSI LUN on the wrong machine due to a presentation error, all could be lost.
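To illustrate just how few commands, here’s a sketch of a crude whole-volume backup from a Linux box, reusing the hypothetical server and share from the earlier example:

```sh
# Mount the live NFS volume read-only from any Linux machine
mount -t nfs -o ro,vers=3 10.10.10.5:/export/vmstore /mnt/vmstore

# Copy everything off; -a preserves metadata, -S keeps sparse VMDKs sparse
rsync -aS /mnt/vmstore/ /backup/vmstore/

umount /mnt/vmstore
```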

You might also find that NFS performance beats iSCSI. Depending on the transport and the storage in use, NFS throughput can exceed iSCSI on some workloads, especially write-heavy ones; at worst, NFS usually performs on par with iSCSI. Where VMware’s VAAI (vStorage APIs for Array Integration) is concerned, iSCSI has the more advanced set of offload tools, but some storage vendors support VAAI on NFS for primitives such as full copy and clone offload.

NFS versus Fibre Channel is a different discussion, given the fact that Fibre Channel requires dedicated HBAs and switching, but even against FCoE (Fibre Channel over Ethernet), NFS wins hands-down in terms of simplicity. As far as resiliency goes, redundant storage paths for iSCSI and Fibre Channel or FCoE are more advanced than NFS, as most hypervisors will allow for multiple concurrent paths to storage, while NFS is limited to failover or bonded NIC teaming. In practice, however, this is usually not a significant problem. Also, it’s hoped that widespread NFSv4 adoption will bring about multipathing and additional security features.
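For a sense of what that failover-style resiliency looks like on the NFS side, here’s a minimal sketch of an active-backup NIC bond on a Debian-style Linux NFS client; the interface names and addresses are hypothetical, and it assumes the ifenslave package is installed:

```sh
# /etc/network/interfaces -- bond two NICs on the storage network.
# active-backup provides failover only, not the concurrent paths
# that iSCSI or Fibre Channel multipathing offers.
auto bond0
iface bond0 inet static
    address 10.10.10.20
    netmask 255.255.255.0
    bond-slaves eth2 eth3
    bond-mode active-backup
    bond-miimon 100   # check link state every 100 ms
```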

For a few years there, iSCSI was the darling of the virtualization world, at least for those not requiring the then-faster speeds of Fibre Channel. But the tide seems to be turning now, and NFS is pushing back into the limelight, just as you might expect for an immortal technology.
