James Niccolai
Deputy News Editor

Novel way to cool datacenters passes first test

News
Sep 11, 2009 | 5 mins

Lawrence Berkeley National Laboratory links IT equipment to a building's cooling system to improve energy efficiency -- a possible boon for datacenter operators struggling with the escalating costs of cooling

A team of engineers led by Lawrence Berkeley National Laboratory has successfully tested a novel system that they say could greatly improve the efficiency of datacenter cooling.

It’s an important area for datacenter operators, who are struggling with the escalating costs of cooling increasingly powerful server equipment. Some facilities have been unable to add new equipment because they have reached the limit of their power and cooling capacity.


By some estimates, the energy used to cool IT systems accounts for nearly half the cost of running a datacenter. The amount of energy consumed by datacenters in the United States doubled between 2000 and 2006, and could double again by 2011 if practices aren’t improved, according to the U.S. Department of Energy.

Server equipment in datacenters needs to be kept within a certain temperature range. Hardware can fail if it is too warm, but overcooling wastes energy. Still, most datacenters err on the side of caution and cool their equipment more than they need to.

The Lawrence Berkeley engineers, working with Intel, Hewlett-Packard, IBM, and Emerson Network Power, have been experimenting with a way to deliver just the right amount of cooling to computing equipment.

They fed temperature readings from sensors that are built into most modern servers directly into the datacenter building controls, allowing the air conditioning system to keep the facility at just the right temperature to cool the servers.
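The control idea can be sketched in a few lines. This is an illustrative example only, not the Berkeley Lab team's actual implementation: the function names, the temperature ceiling, and the 0.5-degree adjustment step are all assumptions made for the sketch.

```python
# Illustrative control loop: poll server inlet temperatures and nudge the
# CRAH supply-air setpoint so the hottest server stays just below a target
# ceiling, rather than overcooling the whole room.

TARGET_MAX_C = 27.0   # assumed upper limit for server inlet air
DEADBAND_C = 1.0      # tolerance band to avoid constant setpoint churn

def new_crah_setpoint(server_inlet_temps_c, current_setpoint_c):
    """Return an adjusted CRAH supply-air setpoint based on server sensors."""
    hottest = max(server_inlet_temps_c)
    if hottest > TARGET_MAX_C:
        # Servers too warm: supply cooler air.
        return current_setpoint_c - 0.5
    if hottest < TARGET_MAX_C - DEADBAND_C:
        # Comfortable margin: raise the setpoint and save cooling energy.
        return current_setpoint_c + 0.5
    return current_setpoint_c  # within the deadband, leave it alone

# Example: three servers reporting inlet temps, CRAH currently supplying 18 C.
# The hottest server (25.5 C) has a safe margin, so the room can run warmer.
print(new_crah_setpoint([24.0, 25.5, 23.8], 18.0))  # -> 18.5
```

The point of the deadband is the same one the project makes about precision: without a tolerance band, the air handlers would chase every small fluctuation in the sensor readings.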

It’s a simple idea but something that hadn’t been achieved before. IT and facilities management systems have historically been managed separately. Computer Room Air Handlers, or CRAH units — basically, large air conditioners — are most often controlled using temperature sensors located on or near the CRAH air inlets.

That’s the way 76 percent of datacenters do it, according to an end-user study cited in a white paper about the experiment. Eleven percent of datacenters place the sensors in the cold aisles between the server racks, which is better but still not ideal.

Linking the IT equipment directly to the cooling systems represents “the most fruitful area in improving datacenter efficiency over the next several years,” according to the white paper.

The project has been a success, according to Bill Tschudi, a program manager at Lawrence Berkeley. “The main goal we had was to show that you could do this, that you could use the sensors in the IT equipment to control the building systems, and we achieved that,” he said.

The amount of energy saved will vary depending on how efficient a datacenter is to begin with, he said. He predicted that most datacenters would see a return on their investment within a year.

Most datacenters today are overcooled, according to the end-user study. It found that 90 percent of respondents keep their datacenter at least 5 degrees Celsius below the upper limit recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), which publishes datacenter temperature guidelines. Even a few degrees of unneeded cooling can be expensive.

“There’s this idea that the best datacenter is a cool datacenter, but what we’ve found is that it’s safe to run them a little bit warmer,” said Allyson Klein, a manager with Intel’s Server Platform Group.

Linking the IT and building control systems sounds simple but posed some technical challenges. IT management systems speak a different language from building control systems, so the engineers had to develop software to convert the IT information into a protocol that can be understood by the CRAH units.

The software was custom-written for the project, but commercial vendors are developing products to do that work, Klein said.
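The translation step described above amounts to reshaping an IT-side sensor record into the kind of point update a building control system expects. The sketch below is a hedged illustration: the field names and the BACnet-style output format are assumptions for the example, not the project's actual protocol bridge.

```python
# Illustrative translation layer: a server-management sensor reading
# (IT-side record) reformatted as a building-controls point update.
# Field names and output shape are assumptions, not the real bridge.

def to_building_point(server_reading):
    """Convert an IT-management sensor record into a building-controls point."""
    return {
        "object": "analog-input",
        "name": f"{server_reading['host']}.inlet_temp",
        "value": server_reading["inlet_temp_c"],
        "units": "degrees-celsius",
    }

reading = {"host": "rack04-srv12", "inlet_temp_c": 24.5}
print(to_building_point(reading)["value"])  # -> 24.5
```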

The project also used variable-speed fans in the CRAH units, which allow the cooling supply to be regulated more precisely. But Klein said datacenters could see benefits even without those fans, just from having more precise data about server temperatures.
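Variable-speed fans pay off disproportionately because, by the fan affinity laws, fan power scales roughly with the cube of speed. A quick back-of-the-envelope calculation (illustrative, not project data):

```python
# Fan affinity laws: power draw scales roughly with the cube of fan speed,
# so even a modest speed reduction yields a large energy saving.

def relative_fan_power(speed_fraction):
    """Approximate fan power relative to full speed (affinity laws)."""
    return speed_fraction ** 3

# Running a CRAH fan at 80% speed needs only about half the power.
print(round(relative_fan_power(0.8), 2))  # -> 0.51
```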

The project is being wrapped up now, and the engineers will report their findings in a session at the Intel Developer Forum this month and at the Data Center Energy Efficiency Summit in October. NetApp has conducted a similar project and will also present its findings at the summit.

Part of the technique’s appeal is that the up-front costs are relatively low. “We’re using industry-standard technologies, so there’s no special sauce that would prevent customers from employing this,” Klein said. The temperature data could be fed directly into the building control systems, or sent via management consoles from IBM, HP, and others, she said.

Most new servers include the front-panel temperature sensors employed in the experiment, and the EPA plans to add the sensors to its list of requirements for Energy Star servers, she said.

Other types of instrumentation data are likely to be used in the future.

“If you think about it, this is just a baby step to get started,” Tschudi said. “You could use this same idea to integrate more of the datacenter, so that instead of thinking of it in terms of IT equipment and infrastructure equipment, you could think of it as a single entity that’s seamlessly controlling itself.”