While there is a lot of talk lately about building green data centres, and many hardware vendors are touting the efficiency of their products, the primary concern is still just ensuring you have a reliable source of adequate power.
Even though each core on a multicore processor uses less power than it would if it were on its own motherboard, a rack filled with quad-core blades consumes more power than a rack of single-core blades, according to Intel Corp.
“It used to be you would have one power cord coming into the cabinet, then there were dual power cords,” says Bob Sullivan, senior consultant at The Uptime Institute in Santa Fe, N.M. “Now with over 10 kilowatts being dissipated in a cabinet, it is not unusual to have four power cords, two A’s and two B’s.”
With electricity consumption rising, data centres are running out of power before they run out of raised floor space. A Gartner Inc. survey last year found that, by 2008, half of all data centres will lack sufficient power for expansion.
“Power is becoming more of a concern,” says Dan Agronow, chief technology officer at The Weather Channel Interactive in Atlanta. “We could put way more servers physically in a cabinet than we have power for those servers.”
The real cost, however, lies not just in the power being used but in the infrastructure equipment behind it: generators, UPSs, PDUs, cabling and cooling systems. In a Tier 4 data centre, which offers the highest level of redundancy and reliability, some $22,000 is spent on power and cooling infrastructure for every kilowatt used for processing.
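For a sense of scale, here is a minimal back-of-the-envelope calculation using the $22,000-per-kilowatt figure above. The one-megawatt critical load is a hypothetical example, not a figure from any of the companies quoted here.

```python
# Rough estimate of Tier 4 infrastructure cost, using the article's
# figure of roughly $22,000 of power and cooling infrastructure per
# kilowatt of processing load. The 1,000 kW load below is a
# hypothetical example, not a figure from the article.

COST_PER_KW = 22_000   # USD of infrastructure per kW of IT load (article figure)
it_load_kw = 1_000     # hypothetical critical IT load: 1 MW

infrastructure_cost = it_load_kw * COST_PER_KW
print(f"Estimated Tier 4 power/cooling infrastructure: ${infrastructure_cost:,}")
# -> Estimated Tier 4 power/cooling infrastructure: $22,000,000
```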
Cutting costs and ensuring there is enough power requires a close look at each component individually, and then at how each component affects the data centre as a whole. Steve Yellen, vice-president of product marketing strategies at Aperture Technologies Inc., a data centre software firm in Stamford, Conn., says that managers need to consider four separate elements that contribute to overall data centre efficiency: the chip, the server, the rack and the data centre as a whole. Savings in any one of these components yield savings in each of the areas above it.
“The big message is that people have to get away from thinking about pieces of the system,” Stanford University’s Koomey says. “When you start thinking about the whole system, then spending that US$20 extra on a more-efficient power supply will save you money in the aggregate.”
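Koomey's whole-system point can be illustrated with a rough sketch of how a single watt saved at the chip compounds upstream. The power-supply, distribution and cooling figures below are assumptions chosen for illustration, not measurements from the article.

```python
# Illustrative sketch of how a one-watt saving at the chip compounds
# through the rest of the system. The efficiency and overhead figures
# are assumed for illustration only; they are not from the article.

def facility_watts_saved(chip_watts_saved,
                         psu_efficiency=0.70,           # assumed server power-supply efficiency
                         distribution_efficiency=0.90,  # assumed UPS/PDU efficiency
                         cooling_overhead=0.7):         # assumed cooling watts per IT watt
    # Each watt not drawn by the chip also avoids power-supply and
    # distribution losses, plus the cooling needed to remove that heat.
    server_input_saved = chip_watts_saved / psu_efficiency
    facility_input_saved = server_input_saved / distribution_efficiency
    cooling_saved = facility_input_saved * cooling_overhead
    return facility_input_saved + cooling_saved

print(round(facility_watts_saved(1.0), 2))
# -> 2.7  (roughly 2.7 W saved at the facility for every 1 W saved at the chip)
```

Under these assumed numbers, a one-watt saving at the processor shows up as nearly three watts at the utility meter, which is why small per-component efficiencies add up in the aggregate.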
Going modular
There are strategies for cutting power in each of the areas Yellen outlines. For example, multicore processors with lower clock speeds reduce power at the processor level. And server virtualization, better fans and high-efficiency power supplies — such as those certified by the 80 Plus program — cut power consumption at the server level.
Five years ago, the average power supply was operating at 60 to 70 per cent efficiency, says Kent Dunn, partnerships director at PC power-management firm Verdiem Corp. in Seattle, Wash., and program manager for 80 Plus. He says that each 80 Plus power supply will save data centre operators about 130 to 140 kilowatt-hours of power per year.
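Across a large server fleet, those per-supply savings add up. The sketch below multiplies the article's 130-to-140-kWh figure by a fleet size and electricity rate that are purely hypothetical assumptions.

```python
# Rough estimate of aggregate savings from 80 Plus power supplies,
# using the article's figure of about 130 to 140 kWh saved per supply
# per year. The fleet size and electricity rate are hypothetical.

KWH_SAVED_PER_SUPPLY = 135   # midpoint of the article's 130-140 kWh/year figure
num_power_supplies = 2_000   # hypothetical fleet of power supplies
price_per_kwh = 0.10         # hypothetical electricity rate, USD/kWh

annual_kwh_saved = KWH_SAVED_PER_SUPPLY * num_power_supplies
annual_dollars_saved = annual_kwh_saved * price_per_kwh
print(f"{annual_kwh_saved:,} kWh and ${annual_dollars_saved:,.0f} saved per year")
# -> 270,000 kWh and $27,000 saved per year
```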
Rack-mounted cooling and power supplies such as Liebert’s XD CoolFrame and American Power Conversion Corp.’s InfraStruXure cut waste at the rack level. And at the data centre level, there are more efficient ways of distributing air flow, using outside air or liquid cooling, and doing computational fluid dynamics modelling of the data centre for optimum placement of servers and air ducts.
“We’ve deployed a strategy within our facility that has hot and cold aisles, so the cold air is where it needs to be and we are not wasting it,” says Fred Duball, director of the service management organization for the Virginia state government’s IT agency, which just opened a 192,000-square-foot data centre in July and will be ramping up the facility over the next year or so. “We are also using automation to control components and keep lights off in areas that don’t need lights on.”