A pair of utility computing vendors have stress-tested what they claim is the world’s largest virtual data centre, comprising 443 CPUs, 920GB of RAM and 47TB of storage.
Layered Technologies and 3Tera were showing off the benchmark results of the virtual data centre at an industry event called HostingCon in Chicago on Tuesday. According to the vendors, Layered Technologies built the virtual data centre on 116 commodity servers arranged in a grid computing environment running 3Tera’s AppLogic operating system.
Using a test called UnixBench, the vendors aim to show that the virtual data centre can scale from 10 to more than 100 servers without significant performance loss. This should help demonstrate to reluctant IT managers that the utility computing model works, executives said.
“If it’s just one four-node grid on the backbone or one large grid, you’ll see the same level of performance,” said Jeremy Suo-Anttila, Layered Technologies’ CTO. “There’s not the kind of bottlenecks or hotspots you’d see if you were to build this on 128 nodes using a SAN or other virtualization system. The grid architecture was built so it’s using local disk storage. You get linear performance.”
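To make the linearity claim concrete, here is a minimal sketch of how one might check scaling efficiency from aggregate UnixBench index scores. The node counts and scores below are invented placeholders, not the vendors’ published figures.

```python
# Illustrative only: checks how close aggregate UnixBench scores stay to
# linear scaling as a grid grows. The scores are made-up placeholders,
# not the Layered Technologies/3Tera results.

# Hypothetical aggregate UnixBench index scores, keyed by node count.
scores = {10: 1000.0, 32: 3150.0, 64: 6200.0, 128: 12300.0}

baseline_nodes = min(scores)
per_node_baseline = scores[baseline_nodes] / baseline_nodes

for nodes in sorted(scores):
    # Perfect linear scaling keeps the per-node score constant.
    efficiency = (scores[nodes] / nodes) / per_node_baseline
    print(f"{nodes:>4} nodes: {efficiency:.1%} of linear scaling")
```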
Although grid and utility computing approaches have been around for several years, they haven’t seen the adoption rates that some predicted. Market researcher IDC Canada, for example, pegs the Canadian market at $200 million to $300 million. Suo-Anttila suggested that some enterprise firms are short-sighted about how they plan their data centre builds.
“(This test) is to show you don’t have to go and invest in a data centre full of hard steel technologies – the firewalls, the switches, the load balancers,” he said. “If you’re comparing a virtual data centre environment using XenServer or VMware with shared storage, there’s a hesitation by some people to adopt it in the market. Typically that’s where they see the failure, with shared media . . . It’s just putting too much load on one point.”
On a grid, however, virtual appliances can be managed so they don’t have those points of failure, he added.
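As a rough illustration of that restart-on-failure idea, the toy scheduler below moves appliances off a dead node onto surviving ones. This is an invented sketch of the concept, not 3Tera AppLogic’s actual mechanism or API; every name in it is hypothetical.

```python
# Toy illustration of restarting virtual appliances when a grid node fails.
# Not 3Tera AppLogic's real API; all names here are invented.

class Grid:
    def __init__(self, nodes):
        # Map each node name to the set of appliances it hosts.
        self.placement = {node: set() for node in nodes}

    def start(self, appliance):
        # Place the appliance on the least-loaded healthy node. Because
        # appliance state lives on local disk replicated across the grid
        # (rather than on a shared SAN), any node can take it over.
        node = min(self.placement, key=lambda n: len(self.placement[n]))
        self.placement[node].add(appliance)
        return node

    def fail_node(self, dead):
        # Evacuate the failed node: restart each of its appliances elsewhere,
        # so the dead hardware can be repaired at leisure.
        orphans = self.placement.pop(dead)
        return {app: self.start(app) for app in orphans}

grid = Grid([f"node{i:03d}" for i in range(4)])
for app in ("web", "db", "cache", "balancer"):
    grid.start(app)
print(grid.fail_node("node000"))  # surviving nodes absorb the appliances
```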
Layered Technologies started working on the virtual data centre with 3Tera around this time last year, beginning with a 48-node grid, Suo-Anttila said. The grid grew substantially after HP debuted a new ProCurve networking switch that allowed the build to reach 128 nodes.
“If you have an application that could blow up overnight due to popularity, this can scale to those needs without panic,” he said.
Bert Armijo, 3Tera’s vice-president of product management, said the virtual data centre could run a popular user-generated content site like Digg.com two or three times over. That’s obviously more than what most enterprises need, “but we just wanted to try and express to people just how much horsepower is available,” he said. “Instead of trying to build out infrastructure, here’s the most CPUs we could put together on short notice.”
Armijo said he was able to run the benchmark tests himself in less than a day without any help from engineers or system admins. He’s now giving online demos of the virtual data centre.
Suo-Anttila admitted, however, that utility computing uptake has been slow and that some IT managers refuse to give up their hardware.
“If they want to invest and go that way, I’ll sell them servers all day long,” he said. “Eventually, though, they’re going to realize, ‘I’m spending a lot of time and money dealing with this hardware.’ A hardware node dies on a grid, (appliances) restart on the next node. You can get someone to fix that hardware node at your leisure,” Suo-Anttila said.