Matt Prigge
Contributing Editor

What to look for onsite when choosing a colo facility

analysis
Nov 5, 2012 | 9 mins

Hurricane Sandy provides an excellent reminder that no matter how good a colo's facilities look on paper, careful attention to detail is critical to picking a good one

In my post last week, I covered the basic choices you’ll face when selecting a type of colocation facility to host mission-critical enterprise workloads. Only hours after it was posted to the Web, Hurricane Sandy slammed into the Eastern seaboard, leaving widespread flooding and wind damage in its wake. Particularly hard hit were northern New Jersey and New York City, two of the areas of the country most densely packed with data center and colocation facilities.

Even as I write this, many of the largest data center facilities in Lower Manhattan are without commercial grid power, either due to extensive flooding or the spectacular ConEd substation explosion on the Lower East Side. In one case, a data center literally had to run a manual bucket brigade to lift diesel fuel to roof-mounted generators because the fuel pumps in the sub-basement were submerged in flood water — an act that is nothing short of heroic. However, the fact that some data centers have emerged unscathed while others succumbed — or had to fight tooth and nail to stay online — provides an excellent object lesson in why it’s so important to evaluate a prospective facility’s preparedness for the unexpected.

Although a huge array of factors goes into making a good colocation center, including intangibles such as the quality of the people running the operation and the company’s financial health, the most important qualities of any data center space are power, HVAC, staffing, and fire suppression. Read on for pointers on what to look for in each of these critical areas.

Power considerations

It’s often said that power is the single most important commodity for any data center. That point cannot be overstated. In most cases, power infrastructure is what ultimately separates the various tiers of data centers, because it is typically the most expensive problem for data center operators to solve. To build a Tier 4 data center, you have to either be in a location where the commercial power grid provides true power supply diversity or be able to build your own power-generation capacity by implementing a cogeneration plant.

In both cases, the best data centers sport multiple, diverse building power entrances and internal distribution busways. If onsite step-down transformers are used, they will also be fully redundant — typically to at least a 2N standard because these transformers are generally paired with the commercial power service entrances. However, even the best power grid will only take you so far — a fact underlined by Hurricane Sandy. If the streets are flooded with water or a systemwide blackout occurs due to a power-supply problem, the facility will have to rely on its own power protection and generation capabilities.

First in line to react to a power outage is the uninterruptible power supply. In highly redundant Tier 4 facilities, these UPSes are deployed in 2N+1 configurations: twice the baseline number of UPSes (one full set paired with each of at least two diverse power routes and supplies), plus at least one unit of extra capacity so that a UPS can be removed from service for maintenance without hurting availability. Because even the largest UPSes can keep a multimegawatt facility online only for seconds, the same redundancy rules typically apply to generators.
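As a rough illustration of the arithmetic behind a 2N+1 layout, here is a minimal sketch; the facility load and UPS module capacity used below are hypothetical examples, not figures from any particular facility:

```python
import math

def upses_required(it_load_kw: float, ups_capacity_kw: float,
                   power_paths: int = 2) -> int:
    """Units needed for a 2N+1 layout: a full complement per diverse
    power path (the "2N"), plus one spare so any unit can be pulled
    for maintenance without losing redundancy (the "+1")."""
    per_path = math.ceil(it_load_kw / ups_capacity_kw)  # N units per path
    return power_paths * per_path + 1                   # 2N + 1

# A hypothetical 4 MW facility served by 500 kW UPS modules:
print(upses_required(4000, 500))  # 2 * 8 + 1 = 17 units
```

The same counting logic applies to the generator plant backing those UPSes.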

Maintenance and monitoring of UPSes and generators — including regular load testing and preventative maintenance — are crucially important. Many facilities contract this work to the manufacturer or third parties rather than bringing on the staff to do that work internally.

Also typically contracted out is the equally crucial task of emergency fuel delivery. When data centers are forced to run off generators for days, the amount of fuel onsite and the ability of the fuel supplier to actually get supplies to the site become extremely important, as Hurricane Sandy again demonstrated. You can learn a lot by asking how the data center has arranged these third-party relationships and when they were last exercised.
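To get a feel for why onsite fuel volume matters, here is a back-of-the-envelope runtime sketch; the 0.07 gal/kWh burn rate is an assumed ballpark for a diesel genset running near full load, and the tank and generator sizes are hypothetical:

```python
def runtime_hours(tank_gallons: float, gen_kw: float,
                  gal_per_kwh: float = 0.07) -> float:
    """Hours of generator runtime at a given load.
    gal_per_kwh is an assumed rough diesel consumption figure."""
    burn_rate_gph = gen_kw * gal_per_kwh  # gallons burned per hour
    return tank_gallons / burn_rate_gph

# A hypothetical 10,000-gallon tank feeding a 2 MW generator:
print(round(runtime_hours(10_000, 2_000), 1))  # ~71.4 hours, about 3 days
```

Numbers like these explain why a multiday outage quickly turns into a question of refueling logistics rather than tank size.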

Beyond the protection and emergency generation of power, internal power distribution is also very important. Most high-quality data centers feed each cabinet or rack of customer equipment with two diversely routed power feeds. Each feed should source from a separate UPS attached to a separate generator and building power feed — providing two fully redundant power pathways that have absolutely no cross-dependencies (including the actual equipment as well as the path the wiring takes to get to your equipment).

HVAC considerations

Assuming all these power-delivery qualities are in place, the next element you have to concern yourself with is the facility’s power density. As compute and network equipment has gotten smaller and denser (blade chassis are popular in colocated data centers because they take up little room), the effective power density of a full rack has climbed accordingly. In the old days, you’d be lucky to fit 5kW to 10kW of equipment into a single cabinet, simply because it took up so much space. Today, a rack full of heavily used blades can easily chew through up to four times that wattage in the same amount of space.

Although this would appear to be a power supply problem (it is, to some extent), the larger problem is keeping the equipment and surrounding data center space cool. Because nearly all the energy consumed by electrical equipment is converted into heat, a data center with inadequate or unreliable air conditioning can overheat to the point where equipment is damaged or shuts itself down in a matter of minutes. Thus, the quality and redundancy of the cooling systems becomes just as important as that of the power systems.
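Since nearly all consumed power ends up as heat, a quick unit conversion shows the scale of cooling involved; the rack count and density below are hypothetical:

```python
# Rough cooling-load estimate: nearly all IT power becomes heat.
KW_TO_BTU_HR = 3412      # 1 kW of electrical load ~ 3,412 BTU/hr of heat
BTU_HR_PER_TON = 12_000  # 1 ton of cooling = 12,000 BTU/hr

def cooling_tons(rack_kw: float, racks: int) -> float:
    """Minimum cooling capacity required, before any redundancy margin."""
    heat_btu_hr = rack_kw * racks * KW_TO_BTU_HR
    return heat_btu_hr / BTU_HR_PER_TON

# A hypothetical row of 20 dense blade racks at 20 kW each:
print(round(cooling_tons(20, 20), 1))  # ~113.7 tons of cooling
```

A single row of dense racks can demand as much cooling as an entire legacy computer room, which is why cooling capacity and redundancy deserve the same scrutiny as the power plant.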

Facilities with lower power densities typically use computer-room air conditioners (CRAC units) coupled with ducting or traditional raised-floor delivery methods. One of the few ways to adequately support higher densities is through in-row air conditioners (IRCs) that cool either the intake air or the exhaust air of the equipment in a specific cabinet. This method is both efficient and easy to make redundant, because the loss of a single IRC unit can be easily absorbed by the remaining ones.

In both cases, these in-room air-conditioning elements depend on cold-water chillers or evaporators that are typically located on the roof (along with the generators, often placed there to protect against flooding). Just as path and component redundancy is important for power systems, redundancy is important for these HVAC systems and how they’re powered: no two redundant AC units should share the same pathway for reaching their outdoor-mounted components, nor should they share the same power feeds.

Staffing, security, and fire suppression considerations

Although having a well-constructed power and cooling infrastructure is critical to any data center, the staffing of the data center is also a key consideration. Many facilities, especially carrier facilities, may not be staffed 24/7, or at all. At such facilities, it will take substantially longer for the provider to react to problems. This delay can have dire consequences in situations that require quick action to prevent a problem from snowballing.

From a security standpoint, most data centers use several layers of physical security, including some combination of proximity cards, door PINs, biometrics, and keys or combination locks on individual customer cabinets and cages. Together with video surveillance systems, these measures are typically enough to ensure that tenants can access only the building areas they’re authorized to enter. However, nothing can really replace a 24/7 guard who actively monitors the comings and goings of tenants and the video surveillance feeds.

From a fire-suppression standpoint, many facilities are forced by a combination of scale and fire regulations to use water-based extinguishing methods. Smaller data centers in areas where regulations don’t force the use of water may be able to get away with a chemical-based suppression method (such as Inergen or FM200), which typically won’t damage equipment when it is released.

Those that can’t use chemical suppression may opt for what’s called a pre-action, dry-pipe system. In these systems, the sprinkler system won’t be pressurized with water until very sensitive heat and smoke detection systems detect that a fire may be present. Then, the sprinkler system won’t actually release water until a sprinkler head reaches a certain temperature (typically around 165 degrees Fahrenheit). The goal in such systems is for data center staff to isolate and directly fight whatever fire might exist (say, in a single cabinet) with fire extinguishers to prevent an actual water discharge — making 24/7 staffing all the more important.
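The two-stage interlock described above can be sketched as a simple condition; this is an illustrative simplification of pre-action dry-pipe logic, not a model of any specific suppression controller:

```python
def water_released(detector_tripped: bool, head_temp_f: float,
                   activation_temp_f: float = 165.0) -> bool:
    """Stage 1: a smoke/heat detector trips, charging the pipes with water.
    Stage 2: water discharges only when a sprinkler head also reaches
    its activation temperature. Both stages must fire to release water."""
    pipes_charged = detector_tripped                # stage 1
    head_fused = head_temp_f >= activation_temp_f   # stage 2
    return pipes_charged and head_fused

print(water_released(True, 120))   # False: pipes charged, heads still intact
print(water_released(True, 170))   # True: both stages satisfied
```

The window between stage 1 and stage 2 is exactly when onsite staff can knock down a small fire with extinguishers before water ever touches the equipment.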

Putting it all together

When choosing a colocation center to house your most important Internet-attached assets, the most important part of your decision process is to observe the weaknesses of an imperfect data center and decide whether you’re willing to live with them. For example, if your applications are already built with some level of redundancy, you may find that operating across two geographically separated Tier 2 facilities offers you better overall reliability than operating in a single Tier 4 facility.

Do not be afraid to ask hard questions when you’re touring a prospective facility. Ask how the staff would react to various nightmare scenarios — no matter how outlandish they may seem. Their answers to these questions will show you how seriously they take outside-the-box risks and how well they’ve planned for them.

As the superstorm Sandy experience has so dramatically shown, anyone not taking those risks seriously is asking for trouble.

This article, “What to look for onsite when choosing a colo facility,” originally appeared at InfoWorld.com. Read more of Matt Prigge’s Information Overload blog at InfoWorld.com.