The scalability of hardware architecture is determined by an array of factors, including bus speeds, memory latency, I/O efficiency, and resource contention. The advantages of adding a CPU to a multiprocessor system (scaling up) must be weighed against those of adding a new server (scaling out). In typical cluster or farm configurations, increasing capacity involves both types of scaling. But with blades, it's strictly scaling out. For this reason, some rules of thumb that apply to sizing clusters and farms don't work for blades.

Blades are subject to the same scalability advantages and bottlenecks that affect clusters, but some of those bottlenecks are amplified by the compact nature of server blades. For starters, even a small cluster node has room for multiple hard drives and I/O cards; a pedestal server node accommodates as many as four CPUs, approximately 500GB of disk space, and at least 2GB of RAM. One rack may hold hundreds of blades, but each blade is closer in performance and capacity to a 1998 desktop PC than to a 2002 entry-level server.

Multiprocessor systems can divide one application's threads across CPUs. Blades divide the workload coarsely; each blade must be able to run an OS plus your application. Therefore, the maximum performance for any one atomic task, such as a Web services stock quote request, is determined by the capabilities of the fastest single blade card in your rack.

Blades don't pool memory the way multiprocessor systems do. If your application's memory needs rise, you can't feed an extra gigabyte to the whole blade stack. You have to add memory to every blade card that might run the larger application and make sure that unexpanded blades are excluded from the more demanding task. Mixing CPU types and speeds within a blade chassis, when allowed, complicates blade planning and management, but to a lesser extent than mixing cluster nodes.
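The contrast above can be sketched in a few lines of Python. The numbers here (a four-way SMP box, a 24-blade rack with one CPU and 1GB per blade) are illustrative assumptions, not measurements; the point is that scaling out raises aggregate capacity while the ceiling for any single task stays fixed at one blade's resources.

```python
# Illustrative sketch: scale-up (SMP) vs. scale-out (blades).
# All figures are hypothetical, chosen to match the era discussed.

def smp_capacity(cpus, ram_gb):
    """One chassis: a single application's threads can span all CPUs,
    and all RAM is pooled for any one task."""
    return {"max_single_task_cpus": cpus, "max_single_task_ram_gb": ram_gb}

def blade_capacity(blades, cpus_per_blade, ram_gb_per_blade):
    """A rack of blades: any one atomic task is confined to one blade,
    so per-task limits are per-blade limits; RAM cannot be pooled."""
    return {
        "max_single_task_cpus": cpus_per_blade,
        "max_single_task_ram_gb": ram_gb_per_blade,
        "aggregate_cpus": blades * cpus_per_blade,
    }

smp = smp_capacity(cpus=4, ram_gb=8)
rack = blade_capacity(blades=24, cpus_per_blade=1, ram_gb_per_blade=1)

# The rack wins on aggregate CPUs, but one stock-quote request
# only ever sees a single blade's CPU and memory.
print(smp["max_single_task_cpus"])   # 4
print(rack["max_single_task_cpus"])  # 1
print(rack["aggregate_cpus"])        # 24
```

This is also why the memory point matters: raising `max_single_task_ram_gb` for the rack means touching every blade that might run the larger application, not adding one module to a shared pool.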
On the plus side, a blade server has the potential to expose one monitoring/management access point for every 16 or 24 blades rather than one per node.

Stability and interoperability are important criteria for choosing a blade interconnection technology, but bus performance is critical. The bandwidth of the interconnection bus determines the maximum number of blades you can chain together in one segment. If that same bus, whether Gigabit Ethernet, 10Gb Ethernet, or InfiniBand, also carries user traffic and network storage access, you'll have less headroom for blade-to-blade traffic. Ideally, each blade will have more than one fast bus so that user, storage, and blade-to-blade communications can be kept separate.

— Tom Yager