by Mario Apicella

Review: Brocade DCX Backbone – Part 1

analysis
Jan 22, 2008


Perhaps it’s because of a recent survey in which 59 percent of Brocade customers asked for less difficult management. Or maybe it’s because last year’s acquisition of McData (together with several other minor acquisitions) made Brocade’s portfolio much more complicated.

Whatever the reason, in my recent contacts with Brocade I sensed a renewed interest in delivering improved manageability with its products. Time will tell whether this new attitude will continue to produce new features or will be remembered only as a well-conceived marketing strategy. Regardless, what I saw during some lab demos earlier this year were real product enhancements.

My visit to Brocade was to see live demonstrations of some of the features of the DCX, the new switching backbone announced on January 22.

The DCX is an interesting solution because it brings to market the first deliverables of Brocade’s DCF (Data Center Fabric), the newly designed architecture that was announced in October.

I will let Brocade’s literature on DCF paint the detailed picture of this rather ambitious architecture. In just a few words, DCF is a promise to deliver a more flexible, easier-to-manage, policy-driven network, able to embrace multiple connectivity protocols and to better respond to applications’ demands and to new technologies such as server virtualization.

In Brocade’s vision, the DCX is the cornerstone of that architecture, with specs that suggest a level of performance never attained before. In fact, Brocade assures me that the DCX has a no-compromise architecture capable of sustaining the full 8Gbps transfer rate on all 896 FC ports supported in its largest configuration.
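
To put that claim in perspective with some back-of-the-envelope math on my part: 896 ports at 8Gbps works out to roughly 7.2Tbps of aggregate bandwidth, all of which, per Brocade’s pitch, the chassis can sustain simultaneously.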

In addition to FC, the DCX supports just about any other connectivity protocol, including FICON, FCIP (FC over IP), Gigabit Ethernet, and iSCSI. That versatility brings to mind the Multi Protocol Router, which was the first product from Brocade aimed at consolidating multiple SANs.

I had plenty of ports to spare, so it was not a relevant issue in my test configuration, but it’s interesting to note that the DCX has dedicated ISL ports that don’t take away from the total number of ports available for, say, storage arrays or application servers.

However impressive the specs of the DCX may be, its most innovative features live in its software, which makes it possible to control bandwidth allocation more precisely, to restrict access to specific ports according to security policies, and to manage separate fabric sections independently by creating dedicated domains.

I started my evaluation with the bandwidth monitoring features. In a traditional fabric, each connection acts like a garden hose, a passive conduit with no ability to regulate the flow it carries.

With the DCX, Brocade offers an Adaptive Networking option that can limit the I/O rate on selected ports, a feature Brocade calls Ingress Rate Limiting.

Here is how it works. In my test configuration, Brocade had installed two DCX units, one linked to six HBAs on three hosts, the other linked to a storage array. To better show the traffic management capabilities of the DCX, each host HBA was assigned a dedicated LUN and a dedicated storage port. The two DCX units were connected using two 4Gb ISLs (inter-switch links).

With a simple Iometer script, it was easy to generate significant traffic on each host. To measure how that traffic spread across the fabric, I invoked Top Talkers, the performance monitoring tool. A new capability of Fabric OS 6.0, which was running on both DCX units, is the ability to define a Top Talkers profile either for specific ports or for the whole fabric.

As the name suggests, Top Talkers monitors traffic and makes it easy to list which source-destination pairs carry the most of it. That’s what the Iometer-generated traffic looked like when seen from Top Talkers.
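
For readers who live in the command line, Top Talkers can also be driven from the Fabric OS shell through the perfttmon utility. The sketch below reflects my reading of the FOS 6.0 syntax, using the same slot 3, port 2 that comes up later in this review; verify both lines against the Fabric OS documentation before relying on them:

perfttmon --add egress 3/2 (start watching the flows leaving slot 3, port 2)
perfttmon --show 3/2 (list the busiest source-destination pairs on that port)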

The next step was to limit at the source the traffic flowing from one of those hosts. After moving to the CLI of the host-facing DCX, I typed: portcfgqos --setratelimit 3/2 200

This command set a maximum data rate of 200 megabits per second on slot 3, port 2 of the DCX, which is where one of my HBAs was connected.

Moving back to the storage-facing DCX, Top Talkers showed a much reduced traffic rate on that pair, which made more bandwidth available to the other pairs.

The rate limit can be applied in 200Mb increments and is an invaluable tool for preventing damaging data transfer bursts. A typical real-world use would be to rein in bandwidth-intensive applications such as backups. When no longer needed, the rate limit can be easily removed with a similar command, which brings those ports back to their previous, unrestricted footing.
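
As best I can reconstruct it from the set syntax, that similar command is the reset form of the same utility; something along these lines should clear the cap on slot 3, port 2 (treat the option name as an educated guess and check the Fabric OS reference):

portcfgqos --resetratelimit 3/2 (remove the 200Mbps ceiling set earlier)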

To prepare for the next test, I had to reduce the bandwidth between the two DCX units to make it easier to exceed the link’s data rate. Therefore, I disabled one of the ISL ports and set the other one to 1Gb.
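
Both steps use ordinary Fabric OS port commands. I didn’t record which slots and ports carried the ISLs, so the 1/0 and 1/1 below are stand-ins:

portdisable 1/0 (take the first ISL port offline; hypothetical slot/port)
portcfgspeed 1/1 1 (pin the surviving ISL at 1Gbps instead of its autonegotiated 4Gbps)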

Almost immediately, EFCM, the Brocade Enterprise Fabric Connectivity Manager GUI, turned the link between the two DCX units bright red to indicate traffic congestion.

Running Top Talkers showed a much reduced transfer rate of about 22MBps on each pair. The reason for setting an obviously under-dimensioned ISL between the two machines was to show how the DCX can assign different service levels, identified as high, medium, or low, which reserve 60, 30, and 10 percent of the available bandwidth, respectively.
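
To put those percentages in perspective: a 1Gb FC link moves roughly 100MBps of payload once 8b/10b encoding overhead is taken into account, so under full contention the high, medium, and low levels should translate to roughly 60MBps, 30MBps, and 10MBps. That’s back-of-the-envelope math on my part, and as the numbers that follow show, the actual split is approximate.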

With the DCX you can assign a specific QoS service level to each zone in the fabric. Strangely enough, Brocade has devised a zone naming convention to assign those QoS levels: A zone whose name starts with QOSH is assigned a high service level, while a zone whose name starts with QOSL gets a low service level. Of course, the prefix QOSM identifies a zone with a medium service level, which is also the default for zones that don’t follow the naming scheme.

If you think this is an odd way of assigning a QoS level, you are not alone. I would have preferred setting the QoS as an attribute, which wouldn’t necessarily require changing the zone names.

However, Brocade maintains that the zone name approach will better meet customers’ expectations because it’s simple to understand and monitor. In fact, simple it is.

To see the effect of different QoS levels on my bandwidth-constrained fabric, I created new zones following the proper naming convention and assigned hosts and storage devices to each zone.
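
From the Fabric OS CLI, creating one of those zones looks roughly like the lines below. The zone name, configuration name, and WWNs are made-up placeholders, not the actual values from the test bed:

zonecreate "QOSH_host1_array1", "10:00:00:00:c9:2b:f1:01; 50:06:04:82:bf:d0:54:41" (hypothetical host and storage WWNs)
cfgadd "test_cfg", "QOSH_host1_array1" (add the zone to a hypothetical existing configuration)
cfgenable "test_cfg" (activate the configuration so the QOSH prefix takes effect)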

Back at the DCX where Top Talkers was already active, I saw the transfer rate of the two pairs with high QoS jump well above the others, while the pair in the medium range settled around 20MBps. The third one, which was in the low QoS zone, fell to 17MBps.

Whatever you think of the naming convention it follows, Brocade’s QoS is a very simple and efficient way to put your applications in the proper pecking order and to make the best use of the available bandwidth, however limited or abundant it may be.

I have some more interesting snippets of my DCX evaluation to present, but this article is getting long already. I’ll pause here for the moment, but read on for part 2 and the conclusion of my DCX review.