Industrial Light & Magic, which creates digital film special effects, has been replacing its servers with the hottest new IBM BladeCenters — literally, the hottest.
For every new rack ILM brings in, it cuts power use in the data center by 140 kW, an 84 per cent drop in energy use for the systems being replaced.
But power density in the new racks is much higher: Each consumes 28 kW of electricity, versus 24 kW for the previous generation. Every watt of power consumed is transformed into heat that must be removed from each rack — and from the data center.
The new racks are equipped with 84 server blades, each with two quad-core processors and 32GB of RAM. They are powerful enough to displace seven racks of older BladeCenter servers that the San Francisco company purchased about three years ago for its image-processing farm.
To cool each 42U rack, ILM’s air conditioning system must remove more heat than would be produced by nine household ovens running at the highest temperature setting. This is the power density of the new infrastructure that ILM is slowly building out across its raised floor.
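The arithmetic behind those figures is straightforward. Here is a minimal back-of-the-envelope check in Python, assuming a household oven draws roughly 3 kW at its highest setting (a figure the reporting doesn't specify):

```python
# Back-of-the-envelope check of ILM's consolidation numbers.
OLD_RACK_KW = 24       # previous-generation BladeCenter rack
NEW_RACK_KW = 28       # new BladeCenter rack
RACKS_REPLACED = 7     # old racks displaced by each new rack
OVEN_KW = 3.0          # assumed draw of a household oven at full blast

old_total_kw = OLD_RACK_KW * RACKS_REPLACED        # 168 kW
savings_kw = old_total_kw - NEW_RACK_KW            # 140 kW
savings_pct = 100 * savings_kw / old_total_kw      # close to the 84 per cent figure cited above

print(f"Power saved per new rack: {savings_kw} kW ({savings_pct:.1f}%)")
print(f"Ovens' worth of heat per new rack: {NEW_RACK_KW / OVEN_KW:.1f}")
```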
These days, most new data centers are designed to support an average density of 100 to 200 watts per square foot, and the typical cabinet draws about 4 kW, says Peter Gross, vice-president and general manager of HP Critical Facilities Services. A data center designed for 200 W per square foot can support an average rack density of about 5 kW. With carefully engineered airflow optimizations, a room air conditioning system can support some racks at up to 25 kW, he says.
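Gross's rule of thumb maps floor-level density to rack-level density once you assume how much total floor area each cabinet accounts for, aisles and clearances included. A rough sketch, where the footprint-per-rack figure is an assumption rather than something Gross specifies:

```python
# Rough conversion from design density (W per sq ft) to average rack load.
WATTS_PER_SQFT = 200   # design density of the room
SQFT_PER_RACK = 25     # assumed floor area per cabinet, including aisles and clearances

avg_rack_kw = WATTS_PER_SQFT * SQFT_PER_RACK / 1000
print(f"Average supportable rack load: {avg_rack_kw:.1f} kW")  # ~5 kW, matching Gross's figure
```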
At 28 kW per rack, ILM is at the upper limit of what can be cooled with today’s computer room air conditioning systems, says Roger Schmidt, IBM fellow and chief engineer for data center efficiency. “You’re hitting the extreme at 30 kW. It would be a struggle to go a whole lot further,” he says.
The question is, what happens next? “In the future are watts going up so high that clients can’t put that box anywhere in their data centers and cope with the power and cooling? We’re wrestling with that now,” Schmidt says. The future of high-density computing beyond 30 kW will have to rely on water-based cooling, he says. But data center economics may make it cheaper for many organizations to spread out servers rather than concentrate them in racks with ever-higher energy densities, other experts say.
Kevin Clark, director of information technologies at ILM, likes the gains in processing power and energy efficiency he has achieved with the new BladeCenters, which have followed industry trends to deliver more bang for the buck. But Clark wonders whether doubling compute density again, as he has in the past, is sustainable.
The case for, and against, running data centers hotter
Raising the operating temperature of servers and other data center gear doesn't always save on cooling costs. Most IT manufacturers ramp up fan speeds as inlet temperatures exceed about 77 degrees F in order to hold processor and other component temperatures constant, says IBM fellow Roger Schmidt. Above that point, fan speeds in most servers sold today rise sharply, and processors also suffer higher leakage currents.
Power consumption rises with the cube of fan speed, so a 10 per cent increase in speed means roughly a 33 per cent increase in fan power. At temperatures above 81 F, data center managers may think they're saving energy when in fact the servers are increasing their power draw faster than the rest of the data center infrastructure is saving it.
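The cube relationship Schmidt describes is the standard fan affinity law. A quick illustration, where the 10 per cent figure comes from the text and the baseline wattage is arbitrary:

```python
# Fan affinity law: fan power scales with the cube of rotational speed.
def fan_power(base_watts, speed_ratio):
    """Return fan power after a speed change, relative to a baseline."""
    return base_watts * speed_ratio ** 3

base = 100.0                    # arbitrary baseline fan power in watts
faster = fan_power(base, 1.10)  # 10 per cent speed increase
print(f"Power increase: {100 * (faster / base - 1):.0f}%")  # ~33%
```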
Bottom line: You would still save energy overall if you raised the temperature to 81, but going higher presents challenges to systems and component designers. Could equipment be designed to operate at higher temperatures? Possibly, Schmidt says. “Manufacturers will have to come together as a group to determine whether we should recommend a higher limit that will, in fact, save energy at the data center level.”
Tom Bradicich, an IBM vice president for architecture and technology for the company’s x86 servers, says that with all of the different equipment in a data center, getting the facility optimized for 81 degrees is difficult. Even getting the components in the boxes IBM builds to meet the current spec can be a challenge. “We’re working in a world where we integrate a lot of third-party components. At the end of the day, IBM doesn’t make the microprocessor and other components.”
Dyan Larson, director of data center technology initiatives at Intel, thinks the day when everything in a data center can run safely at 81 degrees is still a long way off. “There’s a reliability concern people have when it comes to running data centers at higher temperatures. Until the industry says, ‘We’re going to warranty these things for higher temperatures,’ we’re not going to get there.”
As compute density per square foot increases, overall electromechanical costs tend to stay about the same, HP’s Gross says. But because each square foot of compute space now draws more power, the amount of electrical and mechanical floor space needed to support each square foot of high-density compute space goes up.
IBM’s Schmidt says the cost per watt, not the cost per square foot, remains the biggest construction cost for new data centers. “Do you hit a power wall down the road where you can’t keep going up this steep slope? The total cost of ownership is the bottom line here,” he says. Those costs have for the first time pushed some large data center construction projects past the US$1 billion mark. “The C suites that hear these numbers get scared to death because the cost is exorbitant,” he says.
Ever-higher energy densities are “not sustainable from an energy use or cost perspective,” says Rakesh Kumar, an analyst at Gartner Inc. Fortunately, most enterprises still have a way to go before they see average per-rack loads in the same range as ILM’s. Some 40 per cent of Gartner’s enterprise customers are pushing beyond the 8 to 10 kW per rack range, and some are as high as 12 to 15 kW per rack. However, those numbers continue to creep up.
In response, some enterprise data centers, and managed services providers like Terremark Inc., are starting to monitor power use and factor it into what they charge for data center space. “We’re moving toward a power model for larger customers,” says Ben Stewart, senior vice president of engineering at Terremark. “You tell us how much power, and we’ll tell you how much space we’ll give you.”
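In a power-based model like the one Stewart describes, the space a customer gets falls out of the power they commit to and the density the facility is engineered for. A hypothetical sketch; the density and commitment numbers below are illustrative, not Terremark's:

```python
# Hypothetical power-based space allocation: power commitment drives floor space.
def space_for_power(committed_kw, design_watts_per_sqft=150):
    """Square feet of floor space a facility can offer for a given power commitment."""
    return committed_kw * 1000 / design_watts_per_sqft

print(f"{space_for_power(100):.0f} sq ft for a 100 kW commitment")  # ~667 sq ft at 150 W/sq ft
```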
Containment: The last frontier
IBM’s Schmidt thinks further power-density increases are possible, but the methods by which data centers cool those racks will need to change.
Other data centers are already experimenting with containment — high-density zones on the floor where doors seal off the ends of either the hot or cold aisles. Barriers may also be placed along the top of each row of cabinets to prevent hot and cold air from mixing near the ceiling. In other cases, cold air may be routed directly into the bottom of each cabinet, pushed up to the top and funneled into the return-air space in the ceiling plenum, creating a closed-loop system that doesn’t mix with room air at all.
Using such techniques, HP’s Gross estimates that data centers can support up to about 25 kW per rack using a computer room air conditioning system. “It requires careful segregation of cold and hot, eliminating mixing, optimizing the airflow. These are becoming routine engineering exercises,” he says.
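Behind that roughly 25 kW ceiling is the basic sensible-heat relationship for air: the cooling delivered depends on airflow volume and the temperature rise across the rack. A sketch using the common rule of thumb BTU/hr ≈ 1.08 × CFM × ΔT (°F), with the 20-degree delta being an assumption:

```python
# Airflow needed to remove 25 kW from one rack, using the sensible-heat
# rule of thumb: BTU/hr ~= 1.08 x CFM x delta-T (deg F).
RACK_KW = 25
DELTA_T_F = 20           # assumed temperature rise from cold aisle to hot aisle
WATTS_TO_BTU_HR = 3.412

btu_per_hr = RACK_KW * 1000 * WATTS_TO_BTU_HR
cfm_needed = btu_per_hr / (1.08 * DELTA_T_F)
print(f"Airflow needed: {cfm_needed:,.0f} CFM")  # roughly 3,900 CFM for a single rack
```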
Liquid makes its entrance
Redesigning data centers to modern standards has helped ease power and cooling problems, but the newest blade servers are already pushing racks past 25 kW. IT has spent the past five years tightening up racks, cleaning out raised-floor spaces and optimizing airflow, and the low-hanging fruit in energy efficiency gains is gone. If densities continue to rise, containment will be the last gasp for computer-room air cooling.
Some data centers have already begun to move to liquid cooling to address high-density “hot spots.” The most common technique, called closely coupled cooling, involves piping chilled liquid, usually water or glycol, into the middle of the raised-floor space to supply air-to-water heat exchangers within a row or rack. Kumar estimates that 20 per cent of Gartner’s corporate clients use this type of liquid cooling for at least some high-density racks.
These closely coupled cooling devices may be installed in a cabinet in the middle of a row of server racks, as data center vendor APC Corp. does with its InRow Chilled Water units, or they can attach directly onto each cabinet, as IBM does with its Rear Door Heat eXchanger.
Closely coupled cooling may work well for addressing a few hot spots, but it is a supplemental solution and doesn’t scale well in a distributed computing environment, says Gross. IBM’s Rear Door Heat eXchanger, which can remove up to 50,000 BTUs per hour, or about 15 kW, could absorb roughly half of the waste heat from ILM’s 28 kW racks. But Clark would still need to rely on room air conditioners to remove the rest.
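The split is simple subtraction: the rear-door exchanger's rated capacity comes off the top, and the room's air conditioning has to handle whatever remains. A small check of those numbers:

```python
# How much of a 28 kW rack a rear-door heat exchanger can absorb.
BTU_HR_PER_KW = 3412
door_capacity_kw = 50_000 / BTU_HR_PER_KW   # ~14.7 kW, the "about 15 kW" figure
rack_kw = 28

remainder_kw = rack_kw - door_capacity_kw
print(f"Door removes {door_capacity_kw:.1f} kW; room air must handle {remainder_kw:.1f} kW")
```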
Closely coupled cooling also requires building out a new infrastructure. “Water is expensive and adds weight and complexity,” Gross says. It’s one thing to run water to a few mainframes. But the network of plumbing required to supply chilled water to hundreds of cabinets across a raised floor is something most data center managers would rather avoid.
Direct liquid cooling with cold plates reduces processor leakage problems by keeping the silicon cooler, which lets servers run faster even as rack power climbs. In a test of a System p 575 supercomputer, Schmidt says, IBM used direct liquid cooling to improve performance by one-third while keeping an 85 kW cabinet cool. Approximately 70 per cent of the system was water-cooled.
But few data center managers can envision moving most of their server workloads onto expensive, specialized supercomputers or mainframes.
IBM’s Bradicich says incremental improvements such as low-power chips or variable-speed fans aren’t going to solve the problem alone. Architectural improvements to the fundamental x86 server platform will be needed.
But HP’s Gross says things may be moving in the other direction. “Data centers are going bigger in footprint, and people are attempting to distribute them,” he says. “Why would anyone spend the kind of money needed to achieve these super-high densities?” he asks, particularly when they may require special cooling.
The best way to take the momentum away from ever-increasing power density is to change the chargeback method for data center use, says Christian Belady of Microsoft. Microsoft changed its cost allocation strategy and started billing users based on power consumption as a portion of the data center’s total power footprint, rather than basing charges on floor space and rack utilization. After that, he says, power consumption per rack started to dip. Users’ focus shifted from getting the most processing power into the smallest possible space to getting the most performance per watt. That may or may not lead to higher-density choices; it depends on the overall energy efficiency of the proposed solutions.
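A minimal sketch of the kind of power-based cost allocation Belady describes; all the loads and dollar figures below are invented for illustration:

```python
# Hypothetical power-based chargeback: each user pays in proportion to the
# share of the facility's total power footprint their gear consumes.
def chargeback(user_kw, total_facility_kw, monthly_facility_cost):
    return monthly_facility_cost * (user_kw / total_facility_kw)

# Illustrative numbers only: 80 kW of load in a 2 MW facility.
bill = chargeback(user_kw=80, total_facility_kw=2000, monthly_facility_cost=500_000)
print(f"${bill:,.0f} per month")
```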
Belady, who previously worked on server designs as a distinguished engineer at HP, argues that IT equipment should be designed to work reliably at higher operating temperatures. Current equipment is designed to operate at a maximum temperature of 81 degrees. That’s up from 2004, when the official specification, set by the ASHRAE Technical Committee 9.9, was 72 degrees.
But Belady says running data center gear even hotter than 81 degrees could result in enormous efficiency gains.
“Once you start going to higher temperatures, you open up new opportunities to use outside air and you can eliminate a lot of the chillers … but you can’t go as dense,” he says. In some parts of the country, data centers already turn off their chillers in the winter and use economizers, which rely on outside air and air-to-air or air-to-water heat exchangers, to provide free cooling.
If IT equipment could operate at 95 degrees, most data centers in the U.S. could be cooled with air-side economizers almost year-round, he argues. And, he adds, “if I could operate at 120 degrees … I could run anywhere in the world with no air conditioning requirements. That would completely change the game if we thought of it this way.” Unfortunately, there are a few roadblocks to getting there.
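The logic Belady sketches is essentially a thermostat comparison: if the equipment tolerates an inlet temperature comfortably above the outside air temperature, the chillers can stay off. A toy illustration of that decision, not a model of any real control system; the 5-degree margin and the 88-degree afternoon are assumptions:

```python
# Toy economizer decision: use outside air whenever it is cool enough
# to meet the allowed server inlet temperature.
def can_use_outside_air(outside_temp_f, max_inlet_temp_f, approach_f=5):
    """approach_f is an assumed margin for fan and heat-exchanger losses."""
    return outside_temp_f + approach_f <= max_inlet_temp_f

for limit in (81, 95, 120):
    print(limit, can_use_outside_air(outside_temp_f=88, max_inlet_temp_f=limit))
# On an 88 F afternoon, only the 95- and 120-degree limits allow free cooling.
```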
The ideal strategy, he says, is to develop systems that optimize each rack for a specific power density and manage workloads to ensure that each cabinet hits that number all the time. In this way, both power and cooling resources would be used efficiently, with no waste from under- or overutilization. “If you don’t utilize your infrastructure, that’s actually a bigger problem from a sustainability standpoint than overutilization,” he says.
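Optimizing each rack for a specific power density and steering workloads to hit that number is, at bottom, a packing problem: place loads so every cabinet sits near its target draw without exceeding it. A greedy first-fit sketch, with made-up workload figures:

```python
# Greedy placement of workloads so each rack stays at or below a target power draw.
def place(workloads_kw, target_kw_per_rack):
    racks = []
    for w in sorted(workloads_kw, reverse=True):
        # Put the workload in the first rack with room, else open a new rack.
        for rack in racks:
            if sum(rack) + w <= target_kw_per_rack:
                rack.append(w)
                break
        else:
            racks.append([w])
    return racks

workloads = [6.0, 4.5, 3.2, 7.1, 2.4, 5.5, 1.8]  # illustrative per-workload draws in kW
for i, rack in enumerate(place(workloads, target_kw_per_rack=12), 1):
    print(f"Rack {i}: {sum(rack):.1f} kW of a 12 kW target")
```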
What’s next
Belady sees a bifurcation coming in the market. High-performance computing will go to water-based cooling while the rest of the enterprise data center — and Internet-based data centers like Microsoft’s — will stay with air but move into locations where space and power costs are cheaper so they can scale out.
Paul Prince, CTO of the enterprise product group at Dell, doesn’t think most data centers will hit the power-density wall anytime soon. The average power density per rack is still manageable with room air, and he says hot aisle/cold aisle designs and containment systems that create “super-aggressive cooling zones” will help data centers keep up. Yes, densities will continue their gradual upward arc. But, he says, it will be incremental. “I don’t see it falling off a cliff.”
At ILM, Clark sees the move to water, in the form of closely coupled cooling, as inevitable. Clark admits that he, and most of his peers, are uncomfortable with the idea of bringing water into the data center. But he thinks that high-performance data centers like his will have to adapt. “We’re going to get pushed out of our comfort zone,” he says. “But we’re going to get over that pretty quickly.”