The need for I/O virtualization and the lack of practical short-term alternatives breathe new life into InfiniBand.

For many IT managers, InfiniBand immediately brings HPC (high-performance computing) to mind. In fact, this much-snubbed connectivity protocol has found vindication in HPC, where its low latency, sustained transfer rates, and gentle impact on CPU cycles are much appreciated.

It's not that InfiniBand is unknown among the lower tiers of enterprise computing, but vendors have predominantly favored other protocols, such as FC (Fibre Channel) and, more recently, the controversial newcomer, iSCSI. Which is a pity, because InfiniBand could very well be the ultimate server connectivity protocol, at least for the time being. Because of this, expect InfiniBand to soon undergo a renaissance of sorts, one that, if memory serves, will not be the first and, if history has any bearing on the future, will likely be as short-lived as previous rebirths.

This dated but well-conceived white paper from Cisco provides a worthwhile refresher on InfiniBand for those who might need it. To quote a short excerpt from that document: "From a protocol perspective, the InfiniBand architecture consists of four layers: physical, link, network, and transport. These layers are analogous to Layers 1 through 4 of the OSI protocol stack." The upper layers (session, presentation, and application) are missing: they were never formalized, which could be either the reason for, or the effect of, InfiniBand's limited diffusion in corporate infrastructure.

In fact, for many IT managers who don't administer HPC applications, InfiniBand is just an unnecessary nuisance, one that doesn't add much to what FC can already do. If this provocative statement describes you, server consolidation is unlikely to be a significant issue at your shop. Think instead of having 20 to 30 VMs on one server: if, say, six of those apps require a dedicated FC pipe, you will have to install six FC HBAs. Double that for dual-path redundancy. Under these circumstances, expect a spaghetti-like tangle of cables to spew from your server, an unsightly mess that is as difficult to manage as it is expensive. Can you scale? Perhaps, but only up to a point; sooner or later you will run out of slots for your HBAs. Got some iSCSI targets to reach? Then add some GbE NICs to that server, and just as many Ethernet cables.

How does InfiniBand help if you find yourself in that predicament? A good solution comes from Xsigo (pronounced "see-go") Systems, a new company that launched its first product, I/O Director, in September. A look at the photo shows that I/O Director hosts 24 InfiniBand ports on its top row. The lower part of the appliance can be loaded with FC or GbE cards. Install one or, for redundancy, two InfiniBand HCAs (host channel adapters) on each physical server and connect them to the I/O Director. The lower ports then connect to the switch or to the storage targets over FC or GbE.

The first advantage of deploying I/O Director is that it reduces both cable clutter and the number of adapters at the server. Each HCA can dish out as much as 20Gbps, which is enough to support multiple FC or iSCSI connections. But my favorite aspect of the Xsigo solution is the software that runs on the appliance. Via your browser or a VMware plug-in (a CLI is also available), you can create virtual HBAs and virtual NICs and map them to the appropriate storage target.
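To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. It is not Xsigo's interface or API, just a toy model built on the numbers above; the iSCSI target count and the per-link speeds are my own assumptions.

```python
# A back-of-the-envelope sketch (plain Python, not Xsigo's interface):
# compare the physical adapters a consolidated server needs with and without
# an I/O Director-style appliance, using the article's numbers. The iSCSI
# target count and per-link speeds below are assumptions for illustration.

FC_PIPES_NEEDED = 6        # VMs that require a dedicated FC pipe (from the article)
ISCSI_TARGETS = 4          # assumed number of iSCSI targets to reach
REDUNDANCY = 2             # dual-path redundancy doubles everything
HCA_BANDWIDTH_GBPS = 20    # per the article, one InfiniBand HCA carries up to 20Gbps
FC_LINK_GBPS = 4           # assumed FC link speed
GBE_LINK_GBPS = 1          # Gigabit Ethernet link speed

# Traditional approach: one physical adapter (and cable) per connection, doubled for redundancy.
fc_hbas = FC_PIPES_NEEDED * REDUNDANCY
gbe_nics = ISCSI_TARGETS * REDUNDANCY
print(f"Without consolidation: {fc_hbas} FC HBAs + {gbe_nics} GbE NICs "
      f"= {fc_hbas + gbe_nics} adapters and cables per server")

# I/O Director approach: the same connections become virtual HBAs and virtual NICs
# defined on the appliance, multiplexed over one HCA per redundant path.
hcas = REDUNDANCY
print(f"With I/O virtualization: {hcas} HCAs per server, "
      f"{fc_hbas + gbe_nics} virtual adapters defined in software")

# Rough bandwidth check: does the aggregate load fit within the HCAs' pipes?
offered_load = FC_PIPES_NEEDED * FC_LINK_GBPS + ISCSI_TARGETS * GBE_LINK_GBPS
print(f"Aggregate offered load: ~{offered_load}Gbps against "
      f"{hcas * HCA_BANDWIDTH_GBPS}Gbps of HCA bandwidth")
```

The point of the exercise: a dozen-plus physical adapters and cables collapse into two HCAs per server, with the rest of the I/O topology defined in software on the appliance.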
Xsigo claims it is the only vendor to offer this capability, and to the best of my knowledge, that assertion is correct. Imagine the flexibility a virtual adapter brings to your datacenter, in essence allowing you to provide the proper connection to newly created VMs on demand. The guest machine cannot tell the difference, viewing the virtual adapter as if it were physical. For example, during a demo I saw, a new hardware wizard popped up immediately after a virtual adapter was added to a Windows guest machine.

You can also move virtual adapters from one VM to another, even across different physical hosts: think of it as a poor man's VMotion, as Xsigo aptly put it during my demo. In fact, the receiving machine will also inherit all storage volumes reached by that adapter, an easy way to move application data quickly to a new environment when something goes wrong (a rough sketch of this idea appears at the end of this post).

Will Xsigo and other InfiniBand-centered solutions bring the protocol back to the corporate deployment realm? My guess is yes, but only for as long as its current bandwidth advantage lasts. 100-Gigabit Ethernet could be the death sentence for InfiniBand, but I don't see that arriving anytime soon.

Have you already deployed InfiniBand? Care to quickly share your use cases and experience?
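As a closing illustration, here is a purely hypothetical sketch of the adapter-migration idea mentioned above. The classes and the move_adapter function are mine, not Xsigo's API; the sketch only shows the concept that a virtual adapter, and the storage volumes reached through it, can be reassigned from one physical host to another.

```python
from dataclasses import dataclass, field

# Hypothetical model, not Xsigo's API: a virtual adapter carries its identity
# and the storage volumes reachable through it, so reassigning it to another
# physical host also moves access to that data.

@dataclass
class VirtualAdapter:
    name: str
    kind: str                                   # "vHBA" or "vNIC"
    volumes: list = field(default_factory=list)

@dataclass
class PhysicalHost:
    name: str
    adapters: list = field(default_factory=list)

def move_adapter(adapter: VirtualAdapter, src: PhysicalHost, dst: PhysicalHost) -> None:
    """Detach a virtual adapter from one host and attach it to another.

    The guest on the destination sees new hardware appear (the article
    describes a Windows new-hardware wizard popping up) and inherits the
    volumes reached through the adapter."""
    src.adapters.remove(adapter)
    dst.adapters.append(adapter)

# Usage: a vHBA mapped to two volumes follows the application to a new host.
vhba = VirtualAdapter("vhba-app-1", "vHBA", volumes=["LUN-12", "LUN-13"])
host_a = PhysicalHost("host-a", adapters=[vhba])
host_b = PhysicalHost("host-b")

move_adapter(vhba, host_a, host_b)
print(host_b.adapters[0].name, host_b.adapters[0].volumes)
```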