In late 2006, Ryerson University simply ran out of data centre.
Constantly adding new initiatives – and servers to support them – the Toronto school filled its two data centres to capacity. But there was more to it than that.
“In terms of space, and in respect to cooling the data centre, we were at capacity,” says Eran Frank, manager of the school’s technical support group. “It was almost impossible to cool down such a tight space.” There was also the issue of powering the more than 100 Intel servers and providing a bigger UPS.
Senior IT management began negotiating with the school’s vice-president for more space, but Frank pitched another idea. He’d been following virtualization technology since 2004. By 2006, when VMware [NYSE: VMW] released VMware Infrastructure 3, he felt it was mature enough for him to risk his reputation on.
“Instead of getting bigger, we got smaller,” Frank says.
Now the university runs some 20 blades in two chassis – identical environments in each data centre, replicated in real time. The school can run 10 or 11 virtual servers on each blade, or about 150 virtual servers in a single rack. It pulled between 70 and 80 Intel servers out of its data centres.
Blades jam a lot of computing power into a small footprint – and generate much more heat in that smaller space. But, counters Frank, it’s certainly not as much as 100 Intel servers. And in terms of power draw, there’s no comparison between two chassis with four power supplies each and 100 servers with two PSUs each.
In fact, according to the virtualization readiness assessment done on Ryerson’s 100-server data centre environment in 2006, the school would slash power and cooling costs by 80 per cent, saving $246,000.
Josh Leslie, director of alliances with VMware, says the company hasn’t done any studies comparing projected and actual savings. But, he says, the formula is obvious: fewer servers equals lower power consumption.
“We walk in pre-virtualization and typically their server utilization is five per cent,” Leslie says. In a shop with 100 servers, even going from five to 10 per cent utilization means taking 50 machines off the power grid. “Whether they throw the 50 servers out or not is kind of immaterial,” Leslie says.
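Leslie’s back-of-the-envelope reasoning can be sketched as a few lines of arithmetic. This is an illustrative calculation using the round numbers quoted in the article (100 servers, five per cent utilization), not a VMware tool or formula:

```python
# Illustrative consolidation arithmetic, using the round numbers
# quoted in the article (100 servers at ~5 per cent utilization).
def servers_after_consolidation(server_count: int,
                                utilization_before: float,
                                utilization_after: float) -> int:
    """Servers still needed once average utilization is raised,
    assuming the total workload stays constant."""
    total_work = server_count * utilization_before
    return round(total_work / utilization_after)

# Doubling average utilization from 5% to 10% halves the fleet:
remaining = servers_after_consolidation(100, 0.05, 0.10)
print(remaining)        # 50 servers still needed
print(100 - remaining)  # 50 machines off the power grid
```

The same function shows why the gains compound: raising those 100 servers to 50 per cent utilization – still modest for a virtualized host – would leave only 10 machines running.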
And often, Leslie says, they’ll find servers with zero per cent utilization. “For larger customers, it’s quite common,” he says. He’s heard figures as high as 30 per cent for the number of enterprise servers that are not being used at all.
“Data centres get pretty unruly,” he says. “They’re racking ‘em and stacking ‘em faster than they can keep track of them.”
Though changing the whole infrastructure of the university’s data centres was an expensive proposition – Frank won’t share exactly how expensive – “it wasn’t hard for me to sell the idea,” he says.
“Maybe if we were a bank, we’d have to take a more conservative approach,” he says. “The main idea of education is about new things … about innovation.”
The bottom-line proposition couldn’t have hurt, either. Many of those 80 servers pulled out of the data centres were at end-of-life anyway. In a virtualized environment, there’s no need to replace 150 Intel boxes every three years, Frank says.
In the physical server environment, every initiative – and there are some every month – required at least a Web front-end and a SQL back-end server. The school’s Citrix farm, which serves applications to students across the Internet, once sprawled across 11 servers. Frank says he could now host that on a single blade, though he spreads the virtual servers around for higher availability. The physical environment was using less than two per cent of its CPU capacity and only about 20 per cent of its storage capacity – leaving 40 TB idle – but the boxes were still piling up.
“We didn’t add a single (physical) server in the last year and a half,” Frank says.