Review: Riverbed Granite reins in remote servers

Reviews | Dec 7, 2012 | 10 min read

Granite Core and Edge appliances split the difference in branch office consolidation, running servers in the branch based on storage in the data center

Bringing branch office servers and file shares back to the data center is generally easy to do, especially with the use of WAN optimization solutions. But sometimes — for the sake of performance, practicality, or politics — a server simply must remain in the branch. It was for these intransigent servers, and to satisfy the needs of both server-hugging branch offices and control-hungry IT, that Riverbed Granite was born.

By pairing appliances at the edge and at the core, Granite allows IT to “project” virtual machines and iSCSI storage volumes out to the branch office while keeping the actual assets in the data center. Through innovative technologies, Granite closes the gap between physical servers in the branch office and storage in the data center. As a result, VMware ESX and Microsoft Hyper-V servers running in the branch can launch virtual machines across the WAN, and the VMs can write back to storage located in the data center.


Granite is available as a stand-alone product or as a bundled component on a Steelhead EX appliance. When used in combination with Steelhead WAN acceleration, performance improves dramatically, especially on subsequent VM launches, to rival the speed of true local storage.

Store centrally, execute locally

Granite is a different kind of creature than Steelhead. Steelhead accelerates a wide range of TCP and UDP traffic over a WAN through application- and protocol-specific optimization engines. Steelhead also reduces bits over the WAN through data deduplication, and it compresses data to get more on the wire. Granite, on the other hand, is specifically designed to export iSCSI storage resources located in the data center across the WAN and present them as local storage.

There are two components to a Granite installation. The Granite Core appliance, which resides in the data center, is available in either physical or virtual versions. The Granite Edge appliance, which goes in the branch office, is available only as a physical appliance. It can be installed as a stand-alone device or as part of a Steelhead installation (Steelhead EX). Granite Core uses some digital trickery to export LUNs (that is, iSCSI disk volumes) from the data center out to Granite Edge. A server in the branch, such as VMware ESX or Microsoft Hyper-V, connects via iSCSI to these LUNs on Granite Edge, where they appear as local storage volumes.

Granite helps to overcome WAN latency in a few ways. First, writes are acknowledged and cached by the appliance at the branch, then forwarded over the WAN in the background. Reads are accelerated through prediction and prefetching, as well as by caching the most recently requested blocks at the edge and delivering them locally. Typically, 5 to 20 percent of the LUN will be cached at the branch (the Granite Edge will cache as much of the LUN as it has disk space available). However, you can ensure that the entire storage volume will be served locally by “pinning” the LUN to the Granite Edge (more on pinning below).
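The write-back and read-caching behavior described above can be sketched as a toy model. To be clear, this is my own illustration, not Riverbed code; the class name and its internals are invented for the sketch. Writes are acknowledged as soon as they hit the local cache and queued for a background flush over the WAN, while reads hit a bounded LRU block cache and fall back to a WAN fetch on a miss.

```python
from collections import OrderedDict

class GraniteEdgeCache:
    """Toy model (not Riverbed's code) of the edge behavior described above:
    local write acknowledgment with background flush, plus an LRU read cache."""

    def __init__(self, capacity_blocks):
        self.cache = OrderedDict()      # block_id -> data, in LRU order
        self.capacity = capacity_blocks
        self.write_queue = []           # writes awaiting async flush over the WAN

    def write(self, block_id, data):
        # Acknowledge immediately: cache the block and queue the WAN flush.
        self._cache_put(block_id, data)
        self.write_queue.append((block_id, data))
        return "ack"                    # the branch server sees a local-speed ack

    def read(self, block_id, fetch_from_core):
        # Cache hit: served locally. Miss: fetched over the WAN, then cached.
        if block_id in self.cache:
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = fetch_from_core(block_id)
        self._cache_put(block_id, data)
        return data

    def flush(self, send_to_core):
        # Background task: drain queued writes back to the data center.
        while self.write_queue:
            send_to_core(*self.write_queue.pop(0))

    def _cache_put(self, block_id, data):
        self.cache[block_id] = data
        self.cache.move_to_end(block_id)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used block
```

The capacity bound mirrors the point in the text that the edge caches only as much of the LUN as it has disk space for; everything else stays in the data center until requested.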

Test Center Scorecard

Criteria weights: 35% | 20% | 20% | 15% | 10%
Riverbed Granite v2.0.0: 9 | 9 | 8 | 8 | 8

Overall score: 8.6 (Very Good)

Although Granite will work with any file system, prediction and prefetching is limited to VMFS (VMware’s file system) and NTFS (Microsoft’s file system). Granite leverages awareness of how storage blocks are mapped by these file systems to anticipate which blocks will be needed and proactively send them to the edge. For instance, Granite can recognize when an operating system is booting or when a large file has been accessed, then respond with all of the necessary blocks, even before receiving the requests.
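That file-system-aware prefetching can be illustrated with a small sketch. The block map and function here are my invention for illustration, not Riverbed's implementation: the idea is simply that knowing which blocks belong together (as Granite does for VMFS and NTFS layouts) lets the core push an entire file to the edge as soon as any one of its blocks is requested.

```python
# Hypothetical block map standing in for file-system metadata awareness:
# file name -> the blocks that make it up. Names and numbers are made up.
BLOCK_MAP = {
    "bootmgr":  [10, 11, 12, 13],
    "pagefile": [40, 41],
}

def blocks_to_send(requested_block, block_map=BLOCK_MAP):
    """Return the requested block plus prefetched siblings from the same file.

    With file-system awareness, one request triggers delivery of all of the
    file's blocks; without it, only the requested block can be sent."""
    for blocks in block_map.values():
        if requested_block in blocks:
            return [requested_block] + [b for b in blocks if b != requested_block]
    return [requested_block]  # unrecognized layout: no prefetching possible
```

This is also why, as noted later in the review, LUNs formatted with other file systems such as EXT3 still work but see no prefetch benefit: without the block map, every miss is a single round trip.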

If you pair Granite with Steelhead, you’ll also reap the benefits of data deduplication over the WAN. Among other things, this means the bits comprising the branch office’s virtual machines will be delivered locally from the Steelhead EX cache. Thus, virtual machines stored in the data center — where they can be easily backed up and maintained by IT — can perform nearly as well as if they were stored locally.

Tested from coast to coast

I tested Granite in my lab in Florida against a storage array located completely across the country. My Granite Edge instances ran on two Steelhead EX appliances, deployed in a hot-standby configuration, and connected across a VPN back to a Granite Core appliance and an EMC storage array located in San Francisco. The EMC system was carved into multiple LUNs, some with Windows Server 2008 virtual machines already in place and others providing raw storage. From my local VMware ESX server, I was able to connect to four different LUNs and add each virtual server and disk into my inventory.

To test performance, I then booted a VM over the VPN and timed the event. The first launch of Windows Server 2008 took approximately 13 minutes to get to the log-in prompt. Once the VM was running, there was a slight delay whenever Windows Server performed a task for the first time, such as opening Server Manager, as the new bits made their first trek over the VPN. But after these initial delays and all the OS bits had been cached locally in the Steelhead, Windows Server worked at or near the speed of local storage. After just a short while, navigating the Windows Server UI and using various applications felt no different than if they had been running on local hardware. A reboot of the server was much faster, needing only approximately 1.5 minutes to boot up because of Steelhead’s caching.

While Granite can export any iSCSI LUN, its optimizations are specific to VMFS and NTFS. This means that whenever a request is made for data from Granite Edge, if the LUN’s file system is VMFS or NTFS, Granite Core will predict and prefetch the desired blocks (such as the boot blocks when a Windows OS is being launched). Other file systems, such as EXT3, can be used, but they don’t benefit from prefetching.

To pin or not to pin

There are two ways to make LUNs available to Granite Edge: pinned and unpinned. Pinning a LUN copies the contents of the volume from the data center to the Granite Edge appliance and stores the volume in local storage. Changes to the volume, from either the data center or the local office, are synchronized to keep the volume consistent. In situations where the WAN link is exceptionally slow, such as over a satellite hookup, pinning provides the branch with local storage, while also giving IT direct ownership and ultimate control. In addition, pinning is useful for file systems, such as EXT3, that don’t benefit from Granite optimizations.

Another use case for pinning a LUN is for I/O-intensive servers, such as SQL Server and Exchange, where performance is critical. By keeping the I/O operations local, performance is maximized, but all the while, a copy of the VM is synced back to corporate. The only drawback to pinning LUNs is the possibility of running out of local disk space. If your storage needs are minimal, pinning is the way to go.
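The practical difference between the two modes can be summed up in a short sketch. This is my own comparison, not Riverbed's implementation: a pinned LUN starts with a full local copy, so reads never cross the WAN, while an unpinned LUN fills its local cache on demand, paying a WAN round trip on the first miss. In both modes, writes are kept consistent with the authoritative copy in the data center.

```python
class Lun:
    """Toy comparison of pinned vs. unpinned LUN behavior (my illustration)."""

    def __init__(self, core_blocks, pinned):
        self.core = dict(core_blocks)   # authoritative copy in the data center
        self.pinned = pinned
        # Pinned: full local copy up front. Unpinned: cache filled on demand.
        self.local = dict(core_blocks) if pinned else {}

    def read(self, block_id, wan_trips):
        if block_id in self.local:
            return self.local[block_id]  # served at local-disk speed
        wan_trips.append(block_id)       # unpinned miss: fetch over the WAN
        data = self.core[block_id]
        self.local[block_id] = data      # cache it for next time
        return data

    def write(self, block_id, data):
        self.local[block_id] = data
        self.core[block_id] = data       # background sync keeps the core consistent
```

The trade-off in the text falls out of the model: pinning buys WAN-independence at the cost of local disk space, while an unpinned LUN spends one round trip per cold block but needs only cache-sized storage at the branch.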

An unpinned LUN is one that consumes only cache storage in the Granite Edge; the rest of the data from the volume, if and when requested, is fetched over the WAN from Granite Core. This is a plus from a security standpoint, because the only complete copy of the volume stays safe in the data center. Performance can also be very good, as in my Windows Server 2008 boot tests. After all, the most active data is served locally from the Granite Edge cache. 

One feature you have to like is the ability to pin and unpin a volume as needed, even while the system is operational. For example, whenever a planned outage for the WAN circuit is scheduled, you can pin any unpinned LUNs to the edge to allow users to keep working during the maintenance. After the WAN goes back into service and the edge LUN resyncs with the core, you can unpin it again.

Building the blocks

There are a couple of steps necessary to get the LUNs into Granite. The first is to create an iSCSI connection from Granite Core to the storage array. Here you create iSCSI connections to the iSCSI portal (the storage system) just as you would with any normal iSCSI system. Granite supports all common iSCSI initiator options such as header and data digests, CHAP authentication, and MPIO.

Once the storage is connected to Granite Core, you can give each available LUN a friendly name and map it to your branch office. For my test, I added a new LUN and assigned it the name VOL1. I then mapped it to my Granite Edge appliance and left it unpinned so that the volume stayed in the data center. The last step is to grant access to the LUN. By default, all newly created LUNs are unassigned and therefore unavailable for use. I added the group “all” to the allowed list, and my iSCSI volume was then available for my branch office.
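The provisioning steps above can be modeled as plain data. The names VOL1 and the group "all" come from the walkthrough; the GraniteCoreConfig class, its methods, and the portal address are my invention for illustration, not Riverbed's API. The key behavior from the text is captured at the end: a newly created LUN has no initiators allowed and is therefore unavailable until access is granted.

```python
class GraniteCoreConfig:
    """Hypothetical model of the provisioning workflow described above."""

    def __init__(self):
        self.portals = []   # storage arrays Granite Core has connected to
        self.luns = {}      # friendly name -> settings

    def connect_portal(self, address, chap_user=None):
        # Step 1: iSCSI connection from Granite Core to the storage array,
        # with optional initiator settings such as CHAP.
        self.portals.append({"address": address, "chap_user": chap_user})

    def add_lun(self, name, edge, pinned=False):
        # Steps 2-3: assign a friendly name and map the LUN to a branch Edge.
        # New LUNs start with no groups allowed, hence unavailable for use.
        self.luns[name] = {"edge": edge, "pinned": pinned, "allowed": set()}

    def grant_access(self, name, group):
        # Step 4: grant access; only now is the LUN usable at the branch.
        self.luns[name]["allowed"].add(group)

    def is_available(self, name):
        return bool(self.luns[name]["allowed"])
```

For example, replaying the review's test setup: connect the portal, add VOL1 mapped to the Edge and left unpinned, then grant the group "all" access, at which point the volume becomes available to the branch office.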

Another very useful — and necessary — feature is Granite Edge’s active/standby fail-over capability. For offices that need maximum uptime, Granite Edge appliances can be deployed in fail-over pairs. Each appliance is kept in sync with the other through a local Gigabit Ethernet connection. If the primary fails, the other takes over, and the exported volumes remain up and available. I tested this with my pair of Granite Edges by pulling the power to the primary unit. The secondary appliance picked up the slack, and my Windows Server 2008 virtual machine kept working as if nothing had happened. When power was resumed to the primary, it became the backup unit and synchronized to the running appliance. This fail-over works for both pinned and unpinned volumes.
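The failover sequence observed in that test can be sketched as a toy model. Again, this is my illustration, not Riverbed's implementation: writes are mirrored to every live unit over the local link, a failure promotes the surviving peer, and a recovered unit resynchronizes from the running appliance and becomes the backup.

```python
class EdgePair:
    """Toy model of the active/standby failover behavior described above."""

    def __init__(self):
        self.units = {"A": {}, "B": {}}  # each unit's block store
        self.active = "A"
        self.alive = {"A", "B"}

    def write(self, block_id, data):
        # The active unit takes the write; the live peer mirrors it
        # over the local Gigabit Ethernet connection.
        for name in self.alive:
            self.units[name][block_id] = data

    def read(self, block_id):
        return self.units[self.active][block_id]

    def fail(self, name):
        # Pull the power: the surviving peer takes over, volumes stay up.
        self.alive.discard(name)
        if self.active == name:
            self.active = next(iter(self.alive))

    def restore(self, name):
        # The recovered unit resyncs from the running appliance
        # and rejoins as the backup.
        self.units[name] = dict(self.units[self.active])
        self.alive.add(name)
```

Because every acknowledged write already exists on the peer, pulling the plug on the primary loses nothing, which matches the behavior observed in the test with both pinned and unpinned volumes.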

Riverbed Granite is a unique solution for running virtual machines over the WAN. With Granite, IT can collapse server resources — VMs and storage volumes — back to the data center while still providing server resources at the branch office. The ability to export storage volumes across the WAN and provide excellent performance at the branch is groundbreaking. Setup and configuration are minimal, and the ability to pin and unpin volumes provides excellent flexibility. Running branch servers on storage that lives across the WAN may once have seemed like a pipe dream, but now there is a solution, and Granite is its name.

Riverbed Granite at a glance

Cost: Granite Core Virtual Appliance, $7,995; Granite Edge for Steelhead EX, $3,000 (does not include Steelhead appliance)

Platforms: TCP-based WANs with iSCSI storage systems

Pros:
  • Makes iSCSI resources in data center look like local storage
  • Works with any file system
  • Volumes can be “pinned” to branch office (and synced to data center in background) for better performance
  • Works well over the WAN

Cons:
  • Prediction and prefetching for VMFS and NTFS only
  • Best results require Steelhead appliance

This article, “Review: Riverbed Granite reins in remote servers,” was originally published at InfoWorld.com.