Matt Prigge
Contributing Editor

The storage perks of VMware vSphere 5

analysis
Sep 12, 2011 | 6 mins

File system improvements in vSphere 5 enhance storage scalability and lay the groundwork for flashy features like Storage DRS

No one doubts that virtualization has made managing centralized storage environments far easier than managing an equivalent mass of physical SAN-attached servers. But as businesses spin up new VMs and create data at breakneck speeds, the limitations of VMware’s VMFS3 clustered file system have often yielded complex, hard-to-manage storage configurations.

Thankfully, with vSphere 5, VMware has upgraded the file system with a number of substantial improvements. Although some limitations remain, the new VMFS5 file system provides both greater scalability and easier management.

Bigger and badder

The most visible change in VMFS5 is an increase in the maximum VMFS volume size from 2TB to approximately 64TB. In older versions of vSphere (and VMFS), reaching 64TB meant provisioning a series of partitions, or extents, and concatenating them into a single volume: 32 separate SAN volumes, each containing a 2TB extent.
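The arithmetic behind that old workaround is simple enough to sketch. In this minimal illustration, the 32-extent and 2TB figures come from the article; the rest is just multiplication:

```python
# VMFS3: a datastore could grow past 2TB only by concatenating
# extents, each backed by its own SAN volume and capped at 2TB.
VMFS3_MAX_EXTENT_TB = 2
VMFS3_MAX_EXTENTS = 32
vmfs3_max_volume_tb = VMFS3_MAX_EXTENT_TB * VMFS3_MAX_EXTENTS  # 64

# VMFS5: a single SAN volume can back the entire ~64TB datastore.
vmfs5_max_volume_tb = 64
san_volumes_needed_vmfs3 = VMFS3_MAX_EXTENTS  # one per extent
san_volumes_needed_vmfs5 = 1

print(vmfs3_max_volume_tb, san_volumes_needed_vmfs3)  # 64 32
```

Same 64TB ceiling either way; the difference is managing 32 SAN volumes versus one.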

I’ve never actually seen this in real life with VMFS3, mainly because it’s an excellent way to turn your storage administrator into an alcoholic overnight. Instead, most shops simply deploy a mass of 2TB VMFS volumes and spread the VMs out across them. That makes your storage admin slightly less likely to hit the bottle, but it’s less capacity efficient and merely spreads the management headache across the virtualization and storage admins rather than decreasing it in any meaningful way.

Doing away with masses of extents (and SAN volumes) yields a number of great real-world outcomes. The most obvious is that you’ll have a 1:1 relationship between VMFS volumes and SAN volumes — easing such management tasks as SAN-side snapshots, array-based replication, and LUN performance tracking. By decreasing the LUN count required to address a large amount of storage, you also stay much further away from other configuration maximums present in vSphere 4.1 and earlier, such as maximum LUNs per host and maximum storage paths per host.

A much smaller number of much larger volumes also syncs with the reality of managing modern storage hardware. In the past, storage provisioning usually involved dedicating a certain set of physical disks to each SAN LUN created so that each LUN had its own capacity and performance limitations. Today, it’s much more common to deal with virtualized SAN arrays, where each LUN is spread across very large numbers of disks aggregating both performance and capacity — which these new, large VMFS volumes can take advantage of.

New bells and whistles

The other improvements in VMFS5 mainly center around helping you leverage the increased capacity you can now address. Examples of this include an increased max file count (up to approximately 130,000 from around 30,000 in VMFS3), a standardized 1MB VMFS block size, and much more aggressive use of the Atomic Test and Set (ATS) vStorage APIs for Array Integration (VAAI) extension.

Before, when you’d create a VMFS volume, you needed to specify the block size you wanted to use, with a few choices ranging from 1MB to 8MB. Since that block size choice would dictate the size of the largest file (think a VMDK disk for a VM) and since there’s no way to change the block size after a VMFS volume is created, most admins chose to go with the largest block size even when it wasn’t immediately required. With VMFS5 — and 1MB block-size VMFS volumes able to handle 2TB files — you no longer need to make that decision.
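To make the retired trade-off concrete, here's a minimal sketch of the VMFS3 block-size decision. The per-block-size file limits are the commonly cited figures for VMFS3, not numbers taken from the article, so treat them as an assumption:

```python
# VMFS3: the block size picked at volume creation fixed the largest
# file (think: a VM's VMDK) the volume could ever hold, and it could
# not be changed afterward. Hence the habit of picking 8MB "just in
# case." Commonly cited maximums:
vmfs3_max_file_by_block = {
    "1MB": "256GB",
    "2MB": "512GB",
    "4MB": "1TB",
    "8MB": "2TB",
}

# VMFS5: one standardized 1MB block size whose volumes still handle
# 2TB files, so the up-front decision disappears.
vmfs5_max_file = "2TB"

# The old worst-case-friendly choice (8MB) and the new standard block
# size give you the same per-file ceiling.
assert vmfs3_max_file_by_block["8MB"] == vmfs5_max_file
```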

Similarly, VMFS5 exclusively leverages the ATS VAAI extension for block-level file locking when it’s supported by the storage hardware. This is an important enhancement, because it largely does away with the previous use of volume-level SCSI reservations that had been exclusively or partially employed for this purpose in the past. The net effect is that multiple hosts sharing a single VMFS LUN (as in a cluster) are significantly less likely to step on each other’s toes when you create or modify files on a VMFS LUN.

Thanks to that improvement, you can jam more virtual machines onto a VMFS volume than before without running into file locking problems, though you’ll certainly need to make sure your underlying storage can deal with the load before you go wild. You’ll also need to make sure your storage array supports VAAI (most do, though you may need a firmware upgrade) — and run vSphere Enterprise or Enterprise Plus to have access to the feature.

What’s still missing

Storage and virtualization admins still didn’t get everything they wanted out of VMFS5. Though the overall capacity of a VMFS5 file system has increased 32-fold over VMFS3, the 2TB file size limit persists in VMFS5. If you have a single virtual machine that requires more than 2TB of disk space, you still need to create multiple VMDK disks and balance your files across them — or use some kind of guest operating system disk management (software RAID0, LVM, and so on) to span the disks.
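As a quick sketch of what that spanning works out to in practice (the helper below is purely hypothetical, not anything vSphere ships), a 5TB guest volume would need three VMDKs under the 2TB cap:

```python
import math

VMDK_LIMIT_TB = 2  # per-file cap that persists in VMFS5


def vmdks_needed(guest_volume_tb: float) -> int:
    """How many <=2TB virtual disks must be spanned inside the guest
    (via LVM, software RAID0, and so on) to present the requested
    capacity as one logical volume."""
    return math.ceil(guest_volume_tb / VMDK_LIMIT_TB)


print(vmdks_needed(5))  # 3 (2TB + 2TB + 1TB)
```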

The only silver lining here is that you can now create physical-mode (or pass-through) Raw Device Mappings (RDMs) up to 64TB in size, allowing you to pass a very large SAN LUN directly to a VM. Unfortunately, you lose the ability to use such neat vSphere features as Storage vMotion or vSphere snapshots in the process (only virtual-mode RDMs and VMDK disks support them). And because most third-party virtualization-aware backup tools such as Veeam, Esxpress, or vRanger use vSphere snapshots to obtain consistent backups of virtual disks, you’d need to find another way to back up these systems.

Worth the upgrade

Upgrading existing VMFS3 datastores to VMFS5 is relatively easy, with some caveats. Once all of your hosts are running vSphere 5, you can perform a nondisruptive upgrade without so much as powering off a VM, which enables many of the features of VMFS5, such as the increased capacity and ATS locking. However, some VMFS5 features — specifically those pertaining to block sizes and increased file counts — require that you create the volume from scratch. The best practice, if you can manage it, is to create new VMFS5 volumes and use Storage vMotion to migrate VMs from the old volumes to the new ones.

Overall, VMFS5 is a substantial improvement over previous versions — and sets the stage for flashier, more advanced features such as Storage DRS and Site Recovery Manager host-based replication. Yes, some lingering limitations will continue to frustrate virtualization and storage administrators. But in the final analysis, VMFS5 amply demonstrates VMware’s commitment to ensuring that artificial constraints don’t drive customers from vSphere into the arms of its virtualization competitors.

This article, “The storage perks of VMware vSphere 5,” originally appeared at InfoWorld.com. Read more of Matt Prigge’s Information Overload blog and follow the latest developments in storage at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.