Server blades

analysis
Jul 18, 2002

These compact, resilient, and affordable servers could revolutionize enterprise computing -- but only if IT leaders steer the technology toward open solutions with ongoing expandability

IF YOU ASK IT buyers whether they most desire performance, a small footprint, low power consumption, reliability, easy manageability, or scalability in a server, the answer is going to be yes. Blade technology promises to satisfy all these criteria, making some of the benefits of large-scale servers available to buyers with midrange budgets. Capable of squeezing an awesome array of computing firepower into small boxes that require a relatively small staff to manage, blade servers are more expandable and resilient than traditional multiprocessor systems, are more affordable and future-proof than mainframes, and yet are smaller and easier to manage than clusters. Blades can scale better than a cluster of stand-alone systems, they consume less power and generate less heat than comparable devices, and most can be serviced with the power on. When you want more computing power, plug in another blade. If you run out of slots, hook a second chassis to the first. It sounds ideal.

In a survey of more than 500 readers, respondents’ answers reveal a mix of naysayers, early adopters, and head-scratchers: 28 percent have no plans to buy blades; more than 17 percent have solid plans in place to buy before the end of 2003; and the rest are mulling over adoption and are researching the technology with no set timetable. The overall picture is one of a measured but healthy implementation of new blade systems. Buyers should weigh blades’ benefits against the immaturity of the technology and the paucity of detailed reviews. Blades are riskier and are more complicated than basic servers, and early adopters should not place too much trust in vendors. It’s important that buyers scrutinize each vendor’s design choices rather than blindly commit to a familiar brand name.

Blades 101

A blade server is a stack of compact, single-board computers, or blades, linked together by a high-speed bus. Each blade is more akin to a stand-alone cluster node than a CPU in a multiprocessor system. For example, blade pioneer RLX Technologies installs a CPU, memory, disk, and network I/O on every ultracompact blade board. Vendor implementations vary in the number of CPUs and selection of components on their blades, but the concept requires that each blade operate independently. In theory and implementation, a blade server is a miniature cluster. RLX’s design squeezes 24 blade cards into a rack chassis 3U (rack units), or 5.25 inches, tall. Using commonly available multiprocessor technology, that same space accommodates a maximum of six CPUs, so RLX can pack a dizzying 336 processors into a 42U rack. Dell, IBM, Hewlett-Packard, and Sun aren’t trying to match RLX’s density with their first wave of offerings but are focusing on price and time to market instead.
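The density arithmetic works out as a quick sketch; the per-chassis figures are the ones quoted above, and the variable names are purely illustrative:

```python
# Density comparison for a standard 42U rack, using the article's figures:
# 24 single-CPU blades per 3U RLX chassis, versus roughly 6 CPUs in the
# same 3U of space with conventional multiprocessor servers.
RACK_UNITS = 42
CHASSIS_UNITS = 3
BLADES_PER_CHASSIS = 24          # one CPU per RLX blade
CONVENTIONAL_CPUS_PER_3U = 6

chassis_per_rack = RACK_UNITS // CHASSIS_UNITS
blade_cpus = chassis_per_rack * BLADES_PER_CHASSIS
conventional_cpus = chassis_per_rack * CONVENTIONAL_CPUS_PER_3U

print(chassis_per_rack)   # 14 chassis per rack
print(blade_cpus)         # 336 blade processors
print(conventional_cpus)  # 84 conventional processors
```

The fourfold gap between 336 and 84 CPUs per rack is the headline density advantage the first-wave vendors chose not to chase.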

Blades use slower processors to reduce power consumption and heat output, so Intel’s Pentium III has found new life as a blade CPU. Transmeta’s Crusoe, IBM’s Power, Sun’s UltraSparc, and Intel’s NetBurst (Pentium 4 and Xeon) CPUs are also part of blade designs, but Intel’s processors predominate. Even Sun plans to roll out Intel-based blade servers.

The blade market was created by smaller companies such as RLX and Egenera, but readers will spend the bulk of their budgets on blades from giants HP (49 percent), Dell (47 percent), and IBM (44 percent). Sun, late to the race, trails in readers’ rankings (30 percent), but still beats pacesetter RLX (7 percent). (Many respondents plan to purchase blades from a variety of vendors.)

Blades mesh perfectly with vendors’ plans to improve the profitability of their midrange server lines. Without hardware or software standards governing blades, vendors can use proprietary architectures to lock in buyers to their brands. Customers shouldn’t expect an HP blade card to slide into an IBM chassis, and most first-generation designs won’t connect two vendors’ chassis over a high-speed link. Cross-vendor interoperability won’t happen until standards gel and the market demands their implementation.

Blades are more evolutionary than revolutionary, descending from passive backplanes, supercomputers, and clusters. The passive backplane moves the CPU from an integrated motherboard to a modular card that plugs into the same bus that I/O devices do. A supercomputer ties together massive numbers of processors using high-speed data channels, and a cluster aggregates the computing power of several independent servers. To this mix, blades add one-step serviceability, a unified management interface, almost unlimited expandability, and low power requirements for each blade card.

Backplanes, supercomputers, and clusters are hardly legacy technologies. So IT already knows how to solve most of the problems, such as scalability and serviceability, that blades were built to address. As a result, 43 percent of respondents are not buying blades simply because they see no need for them. With the right combination of square footage, management software, and tuned applications, existing clusters or server farms work just as well. Or do they? If you’ve already made up your mind about blades, you might want to do a little more homework.

The fact is, 34 percent of those not adopting blade technology admit that they don’t understand blades well enough to make informed buying decisions. It’s difficult to sort out the pros and cons of the various approaches to scalable computing, and it’s even tougher to determine where blades fit into an enterprise’s existing server strata.

But some seem to understand how similar blades are to clusters. Asked to list blade-appropriate applications, 71 percent of respondents place databases and 57 percent put Web servers — both big cluster apps — at the top of the list. Of those readers who plan to purchase blades, 66 percent cite mixing blade hardware as a benefit of the technology. But as desirable as it is, interoperability among first-wave blade servers will be limited by processor and bus variations. Monitoring and management will also vary.

Buyers plan to apply many of the same loose criteria to selecting their blades as to their other servers. Surveyed readers rank difficult-to-measure qualities such as reliability, availability, serviceability, and manageability among their chief criteria. The majority don’t care about such low-level details as processor type, blade density, and whether the vendor implements the latest technology.

Rush to judgment

But this lack of interest in implementation details will hurt some early adopters, especially those who have already committed to vendors that haven’t started shipping blades yet. In their zeal to outdo one another, most vendors have already announced plans to make their first-wave offerings obsolete in short order. Other responses suggest that some buyers have made key choices before all the facts are in: 76 percent of survey respondents have already chosen Windows 2000 as their OS, and 34 percent of buyers won’t wait for standards to gel before spending their money.

IT buyers want the blades they buy in 2003 to be affordably serviceable and expandable in 2006. They want one server brand to work with other server brands, and they should demand the freedom to switch from Windows to Linux if it suits them. Buyers must ask vendors tough questions about standards involvement, interoperability, and the projected life span of the implementation.

Blades promise to shrink entire machine rooms full of stand-alone servers into a single, resilient, highly manageable rack. Blades could reshape enterprise computing, but only if prospective buyers steer the technology toward open solutions that maintain affordable expandability for years to come. Without customers’ active involvement, a handful of companies could control the future of blades and the utility computing concepts, such as grids, that depend on them.
