Microsoft Corp.’s plan to fill its mammoth Chicago data centre with servers housed in 40-foot shipping containers has experts wondering whether the strategy will succeed. Under the plan, each container in the still-under-construction facility will hold several thousand servers.
Computerworld queried several outside experts – including the president of a data centre construction firm, a data centre engineer-turned-CIO, an operations executive for a data centre operator and a “green” data centre consultant – to get their assessments of the strategy. While they were individually impressed with some parts of Microsoft’s plan, they also expressed skepticism that the idea will work in the long term.
Here are some of their objections, along with the responses of Mike Manos, Microsoft’s senior director of data centre services. Manos talked with Computerworld in an interview after the Data Centre World show at which Microsoft’s plan was announced.
1. Russian-doll-like nesting (servers, on racks, inside shipping containers) may work out to less of the Lego-style modularity that proponents claim, and more mere…moreness.
Server-filled containers are “nothing more than a bucket of power with a certain amount of CPU capacity,” quipped Manos.
His point is that assembling several thousand servers inside a container at an off-site factory makes them nearly plug-and-play once the container arrives at the data centre. Because the setup work shifts to the server vendor or systems integrator and arrives already wrapped inside a 40-foot metal box, containers are far easier and faster to deploy than individual server racks, which have to be moved into place one at a time.
But people like Peter Baker, vice president for information systems and IT at Emcor Facilities Services, argue that in other ways, containers still “add complexity.”
“This is simply building infrastructure on top of infrastructure,” he said.
One example, said Baker – who worked for many years as an electrical engineer building power systems for data centres before shifting over to IT management – is power management. Each container, he said, will need to come with some sort of UPS (uninterruptible power supply) that does three things: 1) converts the incoming high-voltage supply into lower, usable DC voltages; 2) cleans up the power so that spikes don’t damage the servers; 3) provides backup power in case of an outage.
The problem is that each UPS, in the process of “conditioning” the power, also creates “harmonics” that bounce back up the supply line and can “crap up power for everyone else,” Baker said.
Harmonics are a well-known issue and have been managed in other contexts, so Baker isn’t saying the problem is unsolvable. But, he argues, the extra infrastructure needed to mitigate the harmonics generated by 220 UPSs – the number of containers Microsoft thinks it can fit inside the Chicago data centre – could easily negate the potential ROI of using containers.
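To make the scale of Baker’s concern concrete, here is a minimal sketch in Python. The 220-container figure comes from the article; the assumption of one UPS per container – and the sketch itself – is illustrative only, not Microsoft’s design.

```python
# Illustrative sketch of Baker's scale argument (not Microsoft's figures):
# each container needs a UPS that rectifies, conditions and backs up power,
# and every UPS is also a source of harmonics the facility must mitigate.

UPS_JOBS = (
    "convert the incoming high-voltage feed to lower, usable DC voltages",
    "clean up the power so spikes don't damage the servers",
    "provide backup power during an outage",
)

CONTAINERS = 220        # containers Microsoft thinks it can fit in Chicago
UPS_PER_CONTAINER = 1   # assumption for illustration

harmonic_sources = CONTAINERS * UPS_PER_CONTAINER
print(f"{harmonic_sources} UPSs, each doing {len(UPS_JOBS)} jobs,")
print(f"leave {harmonic_sources} separate harmonic sources to filter or cancel")
```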
Manos’ rebuttal: “The harmonics challenges have long been solved [by Microsoft’s] very smart electrical and mechanical folks,” he said, though he declined to go into specifics. Manos also “challenged the assumption” that Microsoft’s solutions are bulky and not cost-effective: “You can be certain that we have explored ROI and costs on this size of investment.” And he chided critics for speculation that leans too heavily on the “traditional way of thinking about data centres,” again without going into detail.
2. Containers are not as plug-and-play as they seem.
Servers normally get shipped from factory to customer in big cardboard boxes, protected by copious Styrofoam. Setting them up on vibration-prone racks before they travel cross-country by truck is a recipe for broken servers, argues Mark Svenkeson, president of Hypertect Inc., a Roseville, Minn., builder of data centres. At the very least, “verifying the functionality of these systems when they arrive is going to be a huge issue.”
But damaged servers haven’t been a problem since Microsoft began deploying containers at its data centres a year ago, Manos claimed.
“Out of tens of deployments, the most servers we’ve had come DOA is two,” he said. Manos also downplayed the labor of testing and verifying each server. “We can know pretty quick if the boxes are up and running with a minimum of people,” he said.
He also pointed out that Microsoft plans to make its suppliers liable for any transit-related damage.
So let’s say Microsoft really has solved the problem of transporting server-filled containers. Part of what makes the containers so plug-and-play is that each will, more or less, connect to the “wall” through a single hookup for power, cooling, networking and so forth.
But, Svenkeson pointed out, that also means that an accident such as a kicked cord or severed cable would result in the failure of several thousand servers, not several dozen. It’s like those server rooms that go dark because somebody flicks the uncovered emergency “off” switch out of curiosity or spite.
“If you’re plugging all of the communications and power into a container at one point, then you’ve just identified two single points of failure in the system,” Svenkeson said.
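To illustrate Svenkeson’s point about blast radius, here is a rough back-of-envelope sketch. The per-rack and per-container server counts are assumptions for illustration, not figures from Microsoft or Svenkeson.

```python
# Back-of-envelope comparison (assumed counts): how many servers go dark
# when a single shared power or network feed is accidentally cut.

SERVERS_PER_RACK = 40          # assumed for a conventional rack
SERVERS_PER_CONTAINER = 2500   # assumed, per "several thousand" servers per container

def blast_radius(servers_per_unit: int, failed_units: int = 1) -> int:
    """Servers lost when the single feed to `failed_units` units fails."""
    return servers_per_unit * failed_units

print(f"Kicked cord at a rack:      {blast_radius(SERVERS_PER_RACK)} servers down")
print(f"Kicked cord at a container: {blast_radius(SERVERS_PER_CONTAINER)} servers down")
```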
While Manos conceded the general point, he also argued that a lot “depends on how you architect the infrastructure inside the container.”
Outside the container, Microsoft is distributing services across data centres worldwide – similar to Google’s infrastructure – to make them redundant in case of failure. In other words, users accessing a hosted Microsoft application such as Hotmail, Dynamics CRM or Windows Live may connect to any of the company’s data centres.
That means that “even if I lose a whole data centre, I’ve still got nine others,” Manos said. “So I’ll just be at 90 per cent serving capacity, not down hard.”
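Manos’ arithmetic, sketched out below: the ten-site figure is his example, and an even traffic split across sites is assumed for simplicity.

```python
# Manos' example: ten data centres worldwide, one lost, nine still serving.
# Assumes load is spread evenly across sites.

def remaining_capacity(total_sites: int, failed_sites: int) -> float:
    """Fraction of serving capacity left after `failed_sites` go dark."""
    return (total_sites - failed_sites) / total_sites

print(f"{remaining_capacity(10, 1):.0%} serving capacity with one site down")
```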
Microsoft is so confident the plan will work that it’s installing only enough diesel generator capacity in Chicago to back up some, not all, of its servers.
Few data centres dare to make that choice, even though North American utility power averages 99.98 per cent uptime, said Jeff Biggs, senior vice president of operations and engineering for data centre operator Peak 10 Inc.
“That works out to be about 17 seconds a day,” said Biggs, who oversees 12 data centres in southeastern states. “The problem is that you don’t get to pick those 17 seconds.”
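Biggs’ figure checks out as simple arithmetic: 99.98 per cent uptime leaves 0.02 per cent of each day – roughly 17 seconds – as potential downtime, as the short sketch below shows.

```python
# Biggs' arithmetic: 99.98 per cent utility uptime still allows roughly
# 17 seconds of downtime in an average day.

SECONDS_PER_DAY = 24 * 60 * 60   # 86,400
UPTIME = 0.9998

downtime_seconds = SECONDS_PER_DAY * (1 - UPTIME)
print(f"About {downtime_seconds:.0f} seconds of potential downtime per day")  # ~17
```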
3. Containers leave you less, not more, agile.
Once containers are up and running, Microsoft’s system administrators may never go inside them again, even to do a simple hardware fix. Microsoft’s research shows that 20 per cent