Six ways to make your data center highly efficient

analysis
Oct 15, 2012

Organizations are raising the bar for energy efficiency with free cooling, DC power, modular data centers, and better collaboration

Data center operators don’t bandy the word “green” around as much these days as they did a couple of years ago. That doesn’t mean companies have abandoned projects aimed at boosting energy efficiency while making better use of their IT resources, both to slash operating costs and to reduce their environmental impact.

Research firm 451 Group has put together “Highly Energy-Efficient Data Centers in Practice,” a comprehensive report that examines six green practices embraced by a swath of organizations to drive down data center operating costs. Some of these practices, such as approaching data center planning holistically and finding alternatives to traditional cooling, may be familiar to readers who’ve followed green IT efforts over the years; others, such as embracing DC power and prefabricated data centers, may have simply gone unnoticed or been dismissed as impractical.

Data center operators would be well served to consider or reconsider these trends as they plan improvements to their facilities. Not all of these practices are suited for every organization, naturally, but you never know until you look.

Practice No. 1: Take an integrated and holistic approach to efficiency

Data center operators need to tackle efficiency holistically, via a combination of integrated technologies and approaches, rather than dealing with projects in isolation. This isn’t an especially groundbreaking concept: The Green Grid has been pushing the “think holistically” mantra since its inception in 2007. One of the best ways to do this is to promote collaboration among departments, particularly between facilities and IT, to ensure that all players are on the same page when it comes time to make data center improvements. Woe to the IT guy, for example, who orders a new rack of high-end servers for a mission-critical project only to learn once the boxes arrive that the data center doesn’t have sufficient power or cooling to support the hardware.

The collaboration doesn’t stop at an organization’s own walls, either. Companies are becoming increasingly open with one another about their once best-kept secrets for reducing energy waste. One of the best-known examples is Facebook, with its Open Compute Project, through which the company is open-sourcing the blueprints to its highly efficient servers, racks, and cooling equipment.

Practice No. 2: Be smarter about cooling

Data center operators are gradually coming to understand that their facilities need not feel like a meat locker to be properly chilled. Understandably, no IT admin wants to risk equipment failure (and thus his or her job) from overheating. However, if you’re spending 50 cents on cooling and power for every dollar you spend on IT — the traditional ratio for the average data center — you’re almost certainly throwing away money.
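Assuming electricity costs scale with power draw, that 50-cents-per-dollar ratio maps onto PUE (power usage effectiveness), the metric most operators use to track facility efficiency. Here’s a back-of-the-envelope sketch of the math, using purely illustrative load figures:

```python
# Back-of-the-envelope PUE (power usage effectiveness) calculation.
# PUE = total facility power / IT equipment power; 1.0 is the ideal.
it_load_kw = 1000.0   # illustrative IT load
overhead_kw = 500.0   # cooling, power distribution, lighting, etc.

pue = (it_load_kw + overhead_kw) / it_load_kw
print(f"PUE: {pue:.2f}")  # 1.50 -- the "50 cents per dollar" data center

# Best-in-class facilities report PUEs closer to 1.1, i.e. roughly
# 10 cents of overhead per IT dollar instead of 50.
```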

There are several ways to reduce cooling costs, some of which involve technologies organizations have been loath to embrace for fear of damaging valuable hardware. Among them is liquid cooling, which is perceived as creating maintenance issues and has historically been restricted to a limited set of applications. One form of liquid cooling comes from Green Revolution Cooling, which employs a low-cost dielectric fluid that is non-contaminating and reportedly has 1,200 times the heat-retention capacity of air.

Speaking of air, free cooling, which uses outside air to chill machines, has enjoyed broader adoption than liquid cooling. The concept is straightforward: Set up your data center somewhere with consistently mild outdoor temperatures, then use an air-side economizer to draw outside air into the facility and cool the equipment. The economizer then pushes the hot air outdoors. Companies are increasingly amenable to free cooling. Microsoft, for example, is adding 112,000 square feet to its existing 303,000-square-foot data center in Dublin — and the expansion will rely entirely on air-side economizers.
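The decision logic an economizer follows is simple at heart. Below is a minimal sketch of that logic; the thresholds are invented for illustration, and real building-management systems also factor in humidity and air quality:

```python
def cooling_mode(outdoor_temp_c, setpoint_c=24.0, deadband_c=2.0):
    """Pick a cooling strategy from outdoor air temperature.

    Thresholds are illustrative; real systems also weigh humidity,
    air quality, and partial (mixed-mode) economization.
    """
    if outdoor_temp_c <= setpoint_c - deadband_c:
        return "free cooling"        # economizer only: outside air does the work
    elif outdoor_temp_c <= setpoint_c:
        return "mixed mode"          # outside air plus some mechanical chilling
    else:
        return "mechanical cooling"  # too warm outside: chillers carry the load

for temp_c in (10.0, 23.0, 30.0):
    print(f"{temp_c:>5.1f} C -> {cooling_mode(temp_c)}")
```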

Practice No. 3: Generate your own power

An impressive number of organizations, including Microsoft, Google, Facebook, Yahoo, and Bell Canada, are putting a dent in their energy bills by generating renewable energy on site. Solar panels are a popular choice and are becoming even more so as solar-panel prices steadily drop, according to 451 Group. Solar’s not the only option out there; companies like Fujitsu have turned to hydrogen as an alternative, and others are exploring the use of cow manure to generate low-cost power. Going the on-site power-generation route requires an up-front investment, but under the right circumstances, the ROI can come quickly.
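How quickly the return arrives comes down to simple payback arithmetic. Here’s a sketch for a hypothetical solar installation, with every figure invented for illustration:

```python
# Simple-payback sketch for on-site solar; all figures are hypothetical.
install_cost = 2_000_000           # up-front investment, in dollars
annual_generation_kwh = 1_500_000  # energy produced on site each year
utility_rate = 0.12                # dollars per kWh of grid power avoided

annual_savings = annual_generation_kwh * utility_rate
payback_years = install_cost / annual_savings
print(f"Annual savings: ${annual_savings:,.0f}")     # $180,000
print(f"Simple payback: {payback_years:.1f} years")  # 11.1 years

# Incentives, rising utility rates, and panel degradation all shift
# this number; the point is that the math is straightforward to run.
```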

Practice No. 4: Save watts with DC power

In a typical data center environment, power conversions abound along the path from the outside utility pad to the servers, and with each conversion, some power is lost. The power arrives at the utility pad at 16,000 VAC (volts alternating current), then is stepped down to 440 VAC, then 220 VAC, then 110 VAC before it reaches the UPSes feeding each server rack. Each UPS converts the incoming AC power to DC, then back to AC, and distributes that AC power to its servers, where power supplies convert it back to DC yet again. As much as 50 to 70 percent of the electricity that comes into the data center is wasted in this long and winding conversion process. A DC-based power-distribution system eliminates most of these costly conversions, reducing energy bills in the process.
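Because the losses compound, the overall efficiency of the path is simply the product of each stage’s efficiency. The sketch below makes that concrete; every per-stage figure is an assumption for illustration, not a measured value:

```python
# Cumulative efficiency through a chain of power conversions.
# All per-stage efficiencies below are illustrative assumptions.
ac_chain = {
    "transformer 16kV -> 440V": 0.98,
    "transformer 440V -> 220V": 0.97,
    "transformer 220V -> 110V": 0.97,
    "UPS double conversion (AC->DC->AC)": 0.80,  # older gear can be worse
    "server PSU (AC->DC)": 0.70,
}

dc_chain = {
    "rectifier to DC bus": 0.96,
    "DC UPS (no double conversion)": 0.95,
    "server DC->DC regulation": 0.92,
}

def chain_efficiency(stages):
    eff = 1.0
    for stage_eff in stages.values():
        eff *= stage_eff
    return eff

print(f"AC chain delivers {chain_efficiency(ac_chain):.0%} of incoming power")
print(f"DC chain delivers {chain_efficiency(dc_chain):.0%} of incoming power")
```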

Organizations have been hesitant to go the DC route for a couple of reasons. Among them, some operators may not realize that hardware vendors have started offering systems that support a DC-based environment. There are also cost concerns, as a high-voltage DC power system requires substantial installation work and up-front investment. However, saving 50 percent or more on power over many years represents a big return, not to mention the space you can reclaim with less power-conversion equipment on the data center floor.

Practice No. 5: Use your IT equipment more efficiently

Virtualization has helped countless companies significantly consolidate hardware and reduce server counts. On top of that, organizations are deploying DCIM (data center infrastructure management) software, according to 451 Group. These tools graphically display a complete inventory of a data center’s physical and logical assets, showing rack and floor locations as well as rack heat loads. Using the software, a facility manager can model any move, add, or change by creating sophisticated what-if scenarios before implementing changes that could dramatically affect performance.
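At heart, the what-if modeling such tools perform amounts to checking a proposed change against each rack’s power and cooling envelope. Here’s a toy sketch of that idea; the rack names, loads, and limits are all invented:

```python
# Toy what-if check in the spirit of DCIM tools; all values invented.
racks = {
    "rack-A1": {"load_kw": 8.0,  "cap_kw": 12.0},
    "rack-B3": {"load_kw": 11.5, "cap_kw": 12.0},
}

def what_if_add(rack_id, server_kw):
    """Report whether adding a server of the given draw fits the rack."""
    rack = racks[rack_id]
    projected = rack["load_kw"] + server_kw
    # Nearly every watt drawn becomes heat the cooling plant must remove,
    # so a single capacity figure stands in for both checks here.
    verdict = "OK" if projected <= rack["cap_kw"] else "NO"
    return f"{rack_id}: {verdict} -- projected {projected:.1f} kW of {rack['cap_kw']:.1f} kW"

print(what_if_add("rack-A1", 2.0))  # fits comfortably
print(what_if_add("rack-B3", 2.0))  # would overload the rack
```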

On the hardware side, organizations like Google and Facebook use their own specially designed servers in their data centers rather than going with off-the-shelf offerings. They both have embraced techniques like eliminating superfluous components, and Facebook has gone so far as to place dual processors side by side on a motherboard, rather than one behind the other, because it improves airflow. In the storage world, solid-state systems — which don’t have watt-draining spinning disks — have the potential to reduce energy waste, according to 451 Group: “The emergence of solid-state storage devices may be a game changer, although market adoption is still nascent.”

Practice No. 6: Go the modular route

Designing and building a data center has many pitfalls: There’s time spent — and wasted — as representatives from departments throughout the organization gather around the table and butt heads over the design of the new facility. There’s also the wasteful practice of building more data center than you actually need, be it in terms of size, density, or redundancy. You’re stuck paying the bills to build and power that extra infrastructure without getting any return (until the time comes that you need it).

Data center operators are increasingly embracing a modular approach to building and expanding their facilities. This model entails deploying portable pods, or modules, comprising standardized components, including IT hardware, power, and cooling equipment. Thanks to their portable, cookie-cutter design, organizations can add new modules as their computing needs grow. These modules also bring an element of flexibility in that they can be set up wherever there’s free space and access to power, such as a rooftop.