By Paul Venezia, Senior Contributing Editor

Review: Dell blade servers tip the scales

Reviews
Sep 6, 2012 | 12 mins

Dell's M1000e blade system wows with novel new blades, improved management, modular I/O, and 40G out the back

It seems that Dell has been quite busy, especially where its blade offerings are concerned. Back in 2010, I reviewed the Dell PowerEdge M1000e blade chassis and blade servers, along with competing blade systems from HP and IBM, and I came away with a very good impression overall. At a lower cost, the Dell blades proved as fast and as manageable as the others, but Dell didn't offer as many types of blade servers as the competition.

What a difference a couple of years makes.

Back in 2010, Dell had a few varieties of compute blades, but that was it. There were two-CPU or four-CPU blades, but no higher-density blades, virtualization-centric blades, or storage blades. Now, all of those options exist, and they are delivered in the same 10U M1000e chassis.

From a purely hardware perspective, the Dell PowerEdge M1000e is quite a compelling system. The vastly increased number of individual blade options — including the introduction of novel high-density blades such as the PowerEdge M420, the impressive PS-M4110 storage blade, and the Force10 MXL 10G/40G blade switch — offers more flexibility and scalability than ever.

But Dell has also taken steps to lighten the administrative burden, layering on sleek and functional management tools that add to the M1000e’s charm. Integrating Force10 switch management into the mix is a work in progress, and Dell still must face the task of centralizing the management of multiple chassis. In the meantime, Dell has already succeeded in turning out a very well-rounded blade system.

Little big SAN

Among Dell's debut blade options, the brand-new EqualLogic PS-M4110 storage blade should take center stage. This is a half-height, double-wide blade that houses 14 2.5-inch disks and two redundant controllers, connected to the switching fabric through two internal 10G interfaces, one per controller. Despite its tiny footprint, this is a fully functional EqualLogic iSCSI SAN array with the same capabilities as the full-size PS4100 arrays.

The PS-M4110 storage blade uses the same firmware as the outboard arrays and behaves exactly the same way. This means it can be controlled as part of an existing EqualLogic group, which can comprise up to 16 EqualLogic arrays of varying types, assuming that 6100-series arrays are part of the mix. Otherwise, the 4000-series arrays are limited to two per group.
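To make that grouping rule concrete, here's a minimal sketch in Python of how you might validate a proposed group against those limits. The function and model strings are hypothetical illustrations, not part of Dell's EqualLogic tooling.

```python
# A minimal sketch of the EqualLogic grouping rule described above
# (hypothetical helper, not Dell's group manager logic).

def group_is_valid(models):
    """models: list of array model strings, e.g. ["PS6100X", "PS-M4110"]."""
    if len(models) > 16:
        return False                 # groups top out at 16 member arrays
    if any("6100" in m for m in models):
        return True                  # with 6100-series arrays in the mix, grow to 16
    # without a 6100-series member, 4000-series arrays max out at two per group
    return len(models) <= 2

print(group_is_valid(["PS-M4110", "PS4100E"]))               # True
print(group_is_valid(["PS-M4110", "PS4100E", "PS4100X"]))    # False
print(group_is_valid(["PS6100X"] + ["PS-M4110"] * 4))        # True
```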

The EqualLogic PS-M4110 storage blade opens like a drawer, exposing the 14 hot-swap disks and hot-swap controllers that enter and exit from the top. In the front are a series of LEDs that show the status of each disk and controller at a glance, and when the drawer is open, each disk and controller have status lights on the top as well.

Test Center Scorecard

Dell PowerEdge M1000e Blade System: 9 in each of the six weighted scoring categories (weights of 20%, 20%, 20%, 20%, 10%, and 10%), for an overall score of 9.0: Excellent

Code-named Colossus, this little monster can hold up to 14TB of raw storage with 14 1TB SAS disks, or it can be split into a tiering solution with five SSDs and nine SAS disks. Further, up to four PS-M4110 storage arrays can be housed in a single M1000e chassis, taking up eight slots, but leaving another eight for compute blades.

It’s important to note that unlike some storage blades offered by other blade vendors, the PS-M4110 is not designed to connect directly to an adjacent blade as a DAS solution. It’s a fully functional iSCSI SAN array that connects to the network just like any other iSCSI SAN would.

An abundance of blade options

In addition to the Colossus, Dell has added an array of compute blades to the lineup. The baseline blade would probably be the PowerEdge M520. This is a two-socket, half-height blade designed for general virtualization and business application workloads. It houses two Intel Xeon E5-2400-series CPUs, up to 384GB of RAM with 32GB DIMMs across the 12 DIMM sockets, and four gigabit NICs, plus the option to add up to two mezzanine I/O cards, such as Fibre Channel or 10G Ethernet. Up front are two hot-swap 2.5-inch SAS bays, though the M520 also has dual internal SD cards that can be used to boot embedded hypervisors and remove the need for physical disks.

Like all the other blades, the M520 has an embedded iDRAC remote management card that allows for remote access to the blade’s console and provides myriad management capabilities.

Next up is the M520’s bigger brother, the M620, which is essentially identical in form but adds horsepower with Intel E5-2600-series CPUs, up to 768GB of RAM, and embedded dual 10G Ethernet interfaces. As with the M520, that I/O can be expanded with one or two mezzanine I/O cards, so you could conceivably have six 10G interfaces, or four 10G and two Fibre Channel or InfiniBand interfaces. Suffice it to say, there’s plenty of available I/O.

Kicking things up another notch, we find the M820. This is a full-height, four-socket blade with heavy specs. It runs Intel E5-4600-series CPUs, up to 1.5TB of RAM, and two dual-port 10G interfaces. There are four 2.5-inch hot-swap SAS bays up front, and there's room for four mezzanine I/O cards; you can really pack this blade full of network and storage I/O. The mezzanine cards are interchangeable across the lineup: the same dual-port 10G, Fibre Channel, and InfiniBand cards can be used in all blade models. The M820 is a big-time blade, destined for large, heavily threaded, and RAM-hungry workloads.

One of the more interesting blades is the M610x. This blade is destined for a niche market, as it sports two full-length PCIe expansion ports within the blade, with the card edges exposed at the front. The compute side of the M610x is based on two Intel “Westmere” 5600-series CPUs, up to 192GB of RAM, and two gigabit NICs.

But those PCIe slots set this blade apart. They can support dual PCIe GPUs for VDI deployments, for instance — or any compatible PCIe card, such as RAID controllers and so forth. Since the card edges are accessible from the front of the blade, these blades can be cabled up to external storage arrays. It’s not a common requirement, but if you have a need for blades to house PCIe cards, the M610x is right up your alley.

Also in the mix is the M910, which offers four 8-core or 10-core Intel Xeon CPUs, up to 1TB of RAM across 32 DIMM slots, and two 2.5-inch hot-swap drive bays. As with all the other blades, the I/O options are backed by the same mezzanine cards and include 1G, 10G, and Fibre Channel ports, as well as a dual-port InfiniBand module.

On the AMD side, there’s the M915 blade. This is another full-height blade driving four 16-core AMD Opteron CPUs, up to 512GB of RAM, and two 2.5-inch hot-swap disk bays up front. The I/O capacity of this blade is substantial, as you can drive up to a dozen 10G Ethernet ports. The 512GB max for RAM seems a bit low, however.

Finally, the Dell PowerEdge M420 may be the best and most interesting blade of them all. This is a quarter-height, two-socket blade housed in a full-height sleeve that holds four of these little blades vertically. Each M420 has one or two Intel E5-2400-series CPUs and up to 192GB of RAM, but only six DIMM slots — three per CPU — and no hard drive options. The local storage is handled by either two hot-swap 1.8-inch SSDs or the embedded SD cards for hypervisor installations.

The M420 has two 10G interfaces built in, and it can handle a single mezzanine I/O card, so you could drop four 10G interfaces in this quarter-height blade. Alternatively, you could have two 10G interfaces and two 8Gb Fibre Channel or InfiniBand interfaces. That’s a lot of I/O in a very small package.

Somewhat surprisingly, there are no population restrictions on these blades. You can fit 32 of these little servers in a single chassis. That's 64 CPUs with up to eight cores each, or 512 cores in a single chassis. If you drop the beaucoup bucks to max out the RAM with 32GB DIMMs, you could accompany those cores with more than 6TB of RAM. That's some serious density.
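For readers who want to check the math, here's the back-of-the-envelope calculation, assuming 32 M420 blades, each fully populated with two 8-core CPUs and six 32GB DIMMs:

```python
# Density math for an M1000e chassis packed with M420 blades,
# assuming every blade is maxed out as described above.
blades = 32              # quarter-height M420s per chassis
cpus_per_blade = 2
cores_per_cpu = 8        # top-end E5-2400-series parts
dimms_per_blade = 6
dimm_size_gb = 32

total_cores = blades * cpus_per_blade * cores_per_cpu    # 512 cores
total_ram_gb = blades * dimms_per_blade * dimm_size_gb   # 6,144GB, just over 6TB

print(f"{total_cores} cores, {total_ram_gb}GB of RAM per chassis")
```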

Breaking out of the box

The M1000e chassis I/O capabilities have grown as rich as the blade options, due in no small part to Dell's acquisition of Force10 Networks. To the basic 1G passthrough, Dell PowerConnect, and Cisco I/O switching modules available previously, Dell has added the Force10 MXL I/O switching module, which boasts 32 internal 10G interfaces and two external 40G interfaces, with two FlexIO module slots for further 10G fiber or copper expansion. This is undoubtedly a significant advancement for Dell, not least because of the 40G uplinks those switches provide. Further, up to six of these switches can be stacked, allowing the switching for multiple chassis to be consolidated and centrally managed.

However, the MXL and chassis integration is not yet fully baked. While the switches behave as you’d expect, they represent their internal 10G interfaces generically — such as TenGigabitEthernet 0/1, TenGigabitEthernet 0/2, and so forth — and there’s no simple way to map those ports back to the blades they’re connected to. If you’re looking to configure the 10G port for the second 10G interface in the M620 blade in slot 7, for instance, you will need a chart to figure out which interface that corresponds to on the MXL. When you’re faced with configuring four or six 10G interfaces per blade or a fully loaded chassis composed of 32 M420 blades with 64 10G interfaces, that will get really confusing really quickly.

Tighter integration between the switching modules and the chassis itself is needed to provide those mappings within the switch CLI. Network administrators don’t like to have to refer to spreadsheets to find out which port they need to tweak.
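Until that happens, admins will likely end up scripting the lookup themselves. The sketch below is purely hypothetical: the mapping table is a placeholder to be filled in from Dell's port-mapping chart, not the actual slot-to-port wiring of the MXL.

```python
# Hypothetical helper that translates a blade slot and fabric NIC into an MXL
# interface name. The mapping values are placeholders; real assignments come
# from Dell's port-mapping documentation.

SLOT_TO_MXL_PORT = {
    # (blade_slot, nic_index): internal MXL port number -- fill in from the chart
    (7, 1): 7,
    (7, 2): 23,
}

def mxl_interface(blade_slot, nic_index, stack_unit=0):
    port = SLOT_TO_MXL_PORT[(blade_slot, nic_index)]
    return f"TenGigabitEthernet {stack_unit}/{port}"

print(mxl_interface(7, 2))   # e.g. "TenGigabitEthernet 0/23"
```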

Aside from this, the Dell PowerConnect M8024 module is available with 16 internal 10G ports and up to eight external ports using the FlexIO slots in the module. There are 4Gb and 8Gb Fibre Channel modules, including the Brocade M5424, two InfiniBand modules supporting either QDR (Quad Data Rate) or DDR (Double Data Rate), and more basic 1G switches. There are also passthrough modules for 10G, 1G, and Fibre Channel ports.

Dell has added Switch Independent Partitioning or NIC partitioning functionality, which allows the 10G interfaces on each blade to be carved up into four logical interfaces with various QoS and prioritization rules attached to each logical interface. The OS sees several independent interfaces that are all subsets of the 10G interface, allowing administrators to allocate bandwidth to various services at the NIC level. This is a welcome addition that’s been missing in previous Dell solutions.
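As a rough model of how such a partitioning scheme fits together, here's a minimal sketch of one 10G port carved into four logical NICs with relative bandwidth weights. The partition names and weights are hypothetical, and this is only an illustration of the concept, not Dell's Switch Independent Partitioning interface.

```python
# One 10G port carved into four logical interfaces, each with a relative
# bandwidth weight -- a conceptual model, not Dell's configuration tooling.
from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    weight: int      # relative share of the 10G link, in percent

PORT_SPEED_GBPS = 10
partitions = [
    Partition("management", 10),
    Partition("iscsi",      30),
    Partition("vmotion",    20),
    Partition("vm_traffic", 40),
]

assert sum(p.weight for p in partitions) == 100, "weights must total 100%"
for p in partitions:
    print(f"{p.name}: {p.weight}% (~{PORT_SPEED_GBPS * p.weight / 100:.1f}Gbps)")
```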

Blade management en masse

Beyond the management of the new Force10-based switches, the overall management toolset present in the M1000e is quite extensive. Dell has paid much attention to the needs of higher-density blade chassis management and has taken steps to reduce the repetitive tasks associated with blade infrastructure.

By leveraging the iDRAC remote management processors in each blade and the Dell CMC (Chassis Management Controller) tools, the M1000e makes it simple both to perform tedious tasks such as mass BIOS upgrades and to dig into specific information about each blade. With a single click, you can retrieve a display containing every firmware version across a server, including its installation date; another click brings up specific information on every hardware component, from individual DIMMs to what's on the PCI bus. It's extremely handy.

The Dell Chassis Management Controller puts blade server details and alerts right at your fingertips.

There are also provisions for scheduled hardware inventories and warranty status checks. In addition, the scheduled firmware upgrades can source either a local share or Dell’s FTP service for the firmware files to be distributed to the hosts.

The idea is to make this process as seamless as possible, allowing administrators to schedule firmware updates across multiple disparate servers. The update automates the process of putting a host in maintenance mode (assuming it's a virtualization host), applying the updates, rebooting, and bringing the host back into the cluster. For stand-alone servers, the reboot is still necessary, but that can be automated as well.
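The sequence reads roughly like the loop below. This is a sketch of the workflow as described, with hypothetical stub functions standing in for the CMC and iDRAC operations, not Dell's actual update engine.

```python
# Sketch of the scheduled firmware update workflow described above.
# Every helper is a hypothetical stub for the real CMC/iDRAC operations.

def enter_maintenance_mode(host): print(f"{host}: entering maintenance mode")
def apply_firmware(host, src):    print(f"{host}: applying firmware from {src}")
def reboot(host):                 print(f"{host}: rebooting")
def rejoin_cluster(host):         print(f"{host}: rejoining cluster")

def update_host(host, firmware_source, is_virt_host=True):
    if is_virt_host:
        enter_maintenance_mode(host)        # evacuate workloads first
    apply_firmware(host, firmware_source)   # source: local share or Dell's FTP service
    reboot(host)
    if is_virt_host:
        rejoin_cluster(host)

for blade in ["blade-01", "blade-02"]:      # hypothetical host names
    update_host(blade, "//fileserver/firmware")
```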

Dell has added a form of multichassis management, in that you can configure the CMC to connect to CMC instances on other chassis and jump to those management consoles with a single click. This isn't true multichassis management, however; there are no facilities to directly manage multiple chassis from within the same console. But linking the independent management consoles together is a step in the right direction.

Dell also provides direct VMware integration, via a virtual appliance and a plug-in for the vSphere client. The appliance handles the data storage and distribution tasks, and the plug-in allows admins to work within the vSphere client to manage hardware tasks and check various system status elements.
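For a sense of the kind of hardware status the plug-in surfaces, here's a rough illustration using the generic vSphere API via pyVmomi rather than Dell's appliance or plug-in. The vCenter address and credentials are placeholders.

```python
# Pull basic hardware details and overall status for each ESXi host --
# a generic vSphere API sketch, not Dell's vSphere plug-in.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()      # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        hw = host.summary.hardware
        print(host.name, hw.vendor, hw.model,
              f"{hw.numCpuCores} cores, {hw.memorySize // 2**30}GB RAM,",
              "status:", host.overallStatus)
    view.Destroy()
finally:
    Disconnect(si)
```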

All of this comes via the Dell iDRAC with the base "Express for Blades" license. For years, Dell offered graphical console access under the base iDRAC license while HP required an advanced license; on rack servers, Dell has now followed HP's lead and moved that capability out of the base license. Fortunately, the base license for blades still includes graphical console access.

This story, “Review: Dell blade servers tip the scales,” was originally published at InfoWorld.com. Keep up on the latest developments in computer hardware and the data center at InfoWorld.com. For the latest business technology news, follow InfoWorld.com on Twitter.