Are blades already casting a shadow over the corporate workhorses of clusters, mainframes, and large-scale servers?

Server blade technology reaches into the domain of clusters, mainframes, and large-scale enterprise servers. Figuring that the initial buy-in is approximately $5,000 and that blades cost as little as $1,000 each, you could build one heck of a blade stack for the money you'd spend on a Sun Enterprise or IBM mainframe. Then again, blades can present management hassles, whereas enterprise big iron has a bulletproof reputation for reliability. A cautious approach is easy to justify, since blades have not proved themselves yet, but is it already foolish to buy server technology that might be obliterated by blades?

P.J.: Blade components in general are a nifty idea, but I have to wonder about all the noise being made over how blade computing is going to change the world. I figure customers are going to run the same software on blades that they would on conventional rack-mounted servers or even on a bunch of white-box, commodity servers. Respondents to the 2002 InfoWorld Server Blade Survey have said nothing to change my opinion: databases and Web hosting lead the applications they expect to use on blades, at 71 percent and 57 percent, respectively.

Tom: The advantages of blades aren't compelling until you scale out to dozens or hundreds of nodes. Think of how much time and resources are wasted in a traditional cluster or farm: every system must be installed and configured separately, no machine knows anything about the other nodes, and managing a cluster is about as easy as herding cats. Blades maintain the advantages of clusters while eliminating most of the redundant elements. By reducing square footage, kilowatt-hours, cooling BTUs, and service time, blades recover wasted money. That translates to reduced operating costs or increased headroom for more nodes.
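The buy-in figures above lend themselves to a quick sanity check. A minimal sketch in Python, using only the rough numbers from this article (a ~$5,000 chassis buy-in, blades from ~$1,000 each, and the 24-blade chassis capacity mentioned later); these are illustrative assumptions, not vendor pricing:

```python
# Back-of-the-envelope cost per node for a blade stack, using the
# article's rough figures. Illustrative arithmetic only.
CHASSIS_COST = 5_000      # assumed initial buy-in for the enclosure
BLADE_COST = 1_000        # assumed low-end price per blade
BLADES_PER_CHASSIS = 24   # discrete systems in one chassis (per the article)

def cost_per_node(n_blades: int) -> float:
    """Total outlay divided by node count for a partly or fully loaded chassis."""
    return (CHASSIS_COST + BLADE_COST * n_blades) / n_blades

print(f"1 blade:   ${cost_per_node(1):,.0f} per node")   # chassis overhead dominates
print(f"24 blades: ${cost_per_node(BLADES_PER_CHASSIS):,.0f} per node")
```

Fully loaded, the chassis overhead amortizes to roughly $208 per node, which is why the per-node economics only look good once you actually fill the enclosure.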
Blades' potential for a unified monitoring and management interface could dramatically reduce staffing costs; a blade server not only gives you fewer consoles to watch but also lowers the skill level required for maintenance.

P.J.: But what's innovative? Hot-plug and hot-swap features have been around for a while, but I question how many customers really take advantage of them. I simply can't see anything that isn't a logical outgrowth of consolidating support functions onto single-chip forms or the availability of denser and denser memory modules.

Tom: By your measure, laptops and PDAs have no reason to exist. Electronics vendors don't miniaturize and integrate just because they can. They do it to reduce manufacturing costs, raise yields, improve profits, and most importantly, make customers happy. A blade is a cheap, disposable server with a birdlike appetite for electricity. Want to impress the suits? Show them your fire-breathing, ear-splitting, 6-foot rack of eight Unix servers, or make it 21 low-profile PC servers. Then show them how just one 5-inch-high blade chassis holds 24 discrete systems.

You shouldn't discount serviceability; customers don't: 75 percent of readers planning a blade purchase cite serviceability among their motivating factors. Companies do use hot-pluggable components. Would anyone care about the ubiquitous SCSI or Ethernet if they had to power down the whole bus to add a device? And consider what it takes to replace a node in a high-density rack. While you're figuring out which screws release a toasted 40-pound server from its bracket and untangling the cables emerging from its back panel, I've hot-swapped my burnt blade and watched the first half of Attack of the Clones.

P.J.: I'm almost ready to buy into your argument, Tom. Yet I can't help but remember that most companies are trying to consolidate their servers, not add more.
For another thing, customers are shying away until the space matures; of 381 respondents likely to use server blades, almost two-thirds want to hold off until standards take root. Besides, from my experience, the most expensive part of having a room full of servers isn't the physical real estate but the amount of heat the servers generate. Running extra electrical and data lines is dirt cheap compared to adding or upgrading HVAC. If replacing existing servers with blades can pay for itself from a power-consumption perspective, that's a big plus, but I don't hear any vendors pushing that angle.

Tom: Chipmakers such as Intel and Transmeta are selling the power- and cooling-conservation traits of their blade CPUs. IBM is catching flak for designing blades around hot, power-hungry Xeon processors. But what IT managers like most about blades is the cost: they deliver a lower cost per CPU cycle than any competing architecture, and we're in only the first wave of rollouts. Blade cards that cost $1,000 today will sell for $200 a pop in three years.

P.J.: For everyone's sake, I hope you're right, Tom. But I have this nagging feeling that in a couple of years, we're going to be asking why more customers didn't take the bait of server blades. To me, the answer is simple: Where's the need?

Tom: There will always be a need for systems that are simple, cheap, and easy to scale. I look forward to fast node interconnects, self-monitoring blade cards, smart management backplanes, and blade-tuned software. I am convinced that blades, done right, will turn other large-scale servers into overpriced dinosaurs.