Virtualization: How to avoid server overload

As virtualization stretches deeper into the enterprise to include mission-critical and resource-intensive applications, IT executives are learning that double-digit physical-to-virtual server ratios are things of the past.

Virtualization vendors may still be touting the potential of putting 20, 50, or even 100 VMs (virtual machines) on a single physical machine, but IT managers and industry experts say those ratios are dangerous in production environments, causing performance problems or, worse, outages.

“In test and development environments, companies could put upwards of 50 virtual machines on a single physical host. But when it comes to mission-critical and resource-intensive applications, that number tends to plummet to less than 15,” says Andi Mann, vice president of research at Enterprise Management Associates (EMA) in Boulder, Colo.


In fact, EMA conducted a study in January 2009 of 153 organizations with more than 500 end users and found that on average they were achieving 6:1 consolidation rates for applications such as ERP, CRM, e-mail and database.

The variance between the reality and the expectations, whether it’s due to vendor hype or internal ROI issues, could spell trouble for IT teams. That’s because the consolidation rate affects just about every aspect of a virtualization project: budget, capacity and executive buy-in. “If you go into these virtualization projects with a false expectation, you’re going to get in trouble,” Mann says.

Indeed, overestimating P-to-V ratios can mean buying more server hardware, and paying for more power, cooling and rack space, than the project budgeted for. Worse yet, users could be impacted by poorly performing applications. “If a company thinks they’re only going to need 10 servers at the end of a virtualization project and they actually need 15, it could have a significant impact on the overall cost of the consolidation and put them in the hole financially. Not a good thing, especially in this economy,” says Charles King, president and principal analyst at consultancy Pund-IT in Hayward, Calif.
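To see how quickly a missed ratio estimate compounds, consider a rough back-of-the-envelope calculation, sketched below in Python. The workload count and per-host cost are invented for illustration, not figures from King or EMA.

    import math

    def hosts_needed(total_vms: int, vms_per_host: int) -> int:
        """Physical hosts required to run total_vms at a given consolidation ratio."""
        return math.ceil(total_vms / vms_per_host)

    # Hypothetical project: 150 workloads to consolidate, $7,000 per host
    # once power, cooling and rack space are folded in.
    planned = hosts_needed(150, 15)   # expected 15:1 ratio -> 10 hosts
    actual = hosts_needed(150, 10)    # achieved 10:1 ratio -> 15 hosts
    cost_per_host = 7_000

    print(f"planned hosts: {planned}, actual hosts: {actual}")
    print(f"budget shortfall: ${(actual - planned) * cost_per_host:,}")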

Key apps will fight for server space

So, why the disconnect between virtualization expectations and reality? King says up to this point, many companies have focused on virtualizing low-end, low-use, low I/O applications such as test, development, log, file and print servers. “When it comes to edge-of-network, non-mission-critical applications that don’t require high availability, you can stack dozens on a single machine,” he says.

Bob Gill, managing director of server research at consultancy TheInfoPro, agrees. “Early on, people were virtualizing systems that had a less than 5 per cent utilization rate. These were also the applications that, if they went down for an hour, no one got upset,” he says.

That’s not the case when applying virtualization to mission-critical, resource-intensive applications; virtualization vendors have been slow to explain this reality to customers, some say.

Once you get into applications with higher utilization rates, greater security risks and increased performance and availability demands, consolidation ratios drop off considerably. “These applications will compete for bandwidth, memory, CPU and storage,” King says. Even on machines with two quad-core processors, highly transactional applications that have been virtualized will experience network bottlenecks and performance hits as they vie for the same server’s pool of resources.

Start with a capacity analysis

To combat the problem, IT teams have to rework their thinking and dial back everyone’s expectations. The best place to start: a capacity analysis, says Kris Jmaeff, information security systems specialist with Interior Health, one of five health authorities in British Columbia, Canada.

Four years ago, the data centre at Interior Health was growing at a rapid rate. There was a lot of demand to virtualize the 500-server production environment to support a host of services, including DNS, Active Directory, Web servers, FTP and many production application and database servers.

Before starting down that path, Jmaeff first used VMware tools to conduct an in-depth capacity analysis that monitored server hardware utilization. (Similar tools are also available from CiRBA, Hewlett-Packard, Microsoft, PlateSpin and Vizioncore, among others.) Rather than looking at his environment in a piecemeal fashion by each piece of hardware, he instead considered everything as a pool of resources. “Capacity planning should . . . focus on the resources that a server can contribute to the virtual pool,” Jmaeff says.
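In code, the pooled view Jmaeff describes might look something like the following sketch. The host specs, VM profile and headroom factor are invented for illustration; in practice the inputs would come from measured utilization data such as the capacity-analysis tools above collect.

    # Hypothetical inventory: each host contributes CPU (GHz) and RAM (GB) to a shared pool.
    hosts = [
        {"name": "esx01", "cpu_ghz": 2.6 * 8, "ram_gb": 96},
        {"name": "esx02", "cpu_ghz": 2.6 * 8, "ram_gb": 96},
        {"name": "esx03", "cpu_ghz": 3.0 * 12, "ram_gb": 128},
    ]

    # Average measured demand per candidate VM (peak values would be safer in practice).
    vm_profile = {"cpu_ghz": 1.2, "ram_gb": 4}

    # Keep headroom so a host failure or a usage spike can be absorbed.
    HEADROOM = 0.75

    pool_cpu = sum(h["cpu_ghz"] for h in hosts) * HEADROOM
    pool_ram = sum(h["ram_gb"] for h in hosts) * HEADROOM

    fit_by_cpu = int(pool_cpu // vm_profile["cpu_ghz"])
    fit_by_ram = int(pool_ram // vm_profile["ram_gb"])

    # The scarcer resource decides how many VMs the pool can really hold.
    print(f"VMs the pool can support: {min(fit_by_cpu, fit_by_ram)}")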

Already, the team has been able to consolidate 250 servers, 50 per cent of the server farm, onto 12 physical hosts. While Jmaeff’s overall data centre average is 20:1, hosts that hold more demanding applications either require much lower ratios or demand that he balance out resource-intensive applications.

Jmaeff uses a combination of VMware vCenter and IBM Director to monitor each VM for “telltale signs” of ratio imbalances such as spikes in RAM and CPU usage or performance degradation. “We’ve definitely had to bump applications around and adjust our conversion rates according to server resource demand to create a more balanced workload,” he says. If necessary, it’s easy to clone servers and quickly spread the application load, he adds.
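The kind of check Jmaeff’s team runs could be approximated with a few lines like these. The metric samples and alert thresholds are made up; his shop pulls the real numbers from vCenter and IBM Director rather than from a script like this.

    # Hypothetical per-VM samples; in practice these would come from the monitoring tools' APIs.
    samples = {
        "mail01": {"cpu_pct": 92, "ram_pct": 88},
        "web03": {"cpu_pct": 35, "ram_pct": 41},
        "db02": {"cpu_pct": 78, "ram_pct": 95},
    }

    CPU_LIMIT = 85   # assumed alert thresholds
    RAM_LIMIT = 90

    def flag_hot_vms(samples):
        """Return VMs whose sustained CPU or RAM usage suggests the host ratio is too aggressive."""
        return [
            name for name, m in samples.items()
            if m["cpu_pct"] > CPU_LIMIT or m["ram_pct"] > RAM_LIMIT
        ]

    print("candidates to move or rebalance:", flag_hot_vms(samples))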

“Because we did our homework with ratios of virtual servers by examining the load on CPU and memory and evaluated physical server workloads, we’ve been pleasantly surprised with our ratios,” he says.

Continuous monitoring is key

At Network Data Centre Host, a Web service provider in San Clemente, Calif., the IT team quickly learned that when it comes to virtualizing mission-critical applications, you have to consider more than just RAM. “We originally thought, based on available RAM, we could have 40 small customers share a physical server. But we found that with heavier-used applications, it’s not the RAM, it’s the I/O,” says CTO Shaun Retain.

The 40:1 ratio had to be pulled back to 20:1 at most, he says. To help with that effort, the team has written a control panel that lets customers log in and see how their virtual machine is handling reads, writes, disk space usage and other performance-affecting activity. In addition, NDC Host uses homegrown monitoring tools to ensure that ratios aren’t blown by a spike in a single VM’s traffic.
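NDC Host’s lesson reduces to sizing against every resource and letting the scarcest one set the ratio. A minimal sketch, with invented host and per-customer figures, shows why a RAM-only estimate of 40:1 can collapse to 20:1 once disk I/O is counted.

    # Hypothetical host capacity and per-customer VM demand.
    host = {"ram_gb": 160, "iops": 8_000}
    small_customer_vm = {"ram_gb": 4, "iops": 400}

    fit_by_ram = host["ram_gb"] // small_customer_vm["ram_gb"]   # 40 VMs if RAM were the only limit
    fit_by_iops = host["iops"] // small_customer_vm["iops"]      # 20 VMs once disk I/O is counted

    # The binding constraint is the smallest of the two.
    print(f"safe VMs per host: {min(fit_by_ram, fit_by_iops)}")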

Pund-IT’s King says companies should also conduct rigorous testing on their virtualized mission-critical applications before and after deployment. “You have to make sure that in terms of memory and network bandwidth, each application is stable at all times. For instance, if you know an application is harder hit during certain times of the year, you’ll want to account for that in establishing your ratios,” he says.

Testing will also help IT teams determine which virtual workloads will co-exist best on a physical host. “You have to make sure that a physical server isn’t running multiple VMs with the same workload. Otherwise, if they’re all Web servers, they will be contending for the same resources at the same time and that will hinder your consolidation ratio,” says Nelson Ruest, co-author of “Virtualization: A Beginner’s Guide” and founder of the Resolutions Enterprise consultancy in Victoria, British Columbia. Instead, IT staffers should make sure that workloads are heterogeneous and well-balanced based on peak usage times and resource demands.
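Ruest’s advice amounts to an anti-affinity rule: don’t stack identical workloads on the same host. The toy placement routine below illustrates the idea; the VM tags and host names are hypothetical, and a production scheduler would weigh far more factors than workload type.

    from collections import defaultdict

    # Hypothetical VMs tagged by workload type; the goal is to avoid stacking
    # identical workloads (e.g. all Web servers) on the same physical host.
    vms = [
        ("web1", "web"), ("web2", "web"), ("db1", "database"),
        ("mail1", "mail"), ("web3", "web"), ("db2", "database"),
    ]
    hosts = ["host-a", "host-b", "host-c"]

    placement = defaultdict(list)

    def place(vm, workload):
        # Prefer a host that does not already run this workload type; otherwise pick the emptiest.
        candidates = [h for h in hosts if workload not in (w for _, w in placement[h])]
        target = min(candidates or hosts, key=lambda h: len(placement[h]))
        placement[target].append((vm, workload))

    for vm, workload in vms:
        place(vm, workload)

    for host, assigned in placement.items():
        print(host, assigned)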

More virtualization-management tips

Ruest also warns IT teams not to forget the spare resources that host servers need so they can not only support their own VMs, but accept the workload from a failing host. “If you’re running all your servers at 80 per cent, you won’t be able to support that necessary redundancy,” he says.
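That 80 per cent warning is essentially an N+1 headroom calculation. A small sketch, with an illustrative host count and utilization ceiling, shows how to work out the highest safe average load.

    def safe_utilization(n_hosts: int, reserve_hosts: int = 1, ceiling: float = 0.80) -> float:
        """Highest average utilization that still lets the surviving hosts absorb
        the load of `reserve_hosts` failed hosts without exceeding the ceiling."""
        survivors = n_hosts - reserve_hosts
        return ceiling * survivors / n_hosts

    # With 12 hosts and one allowed failure, keep average utilization under about 73%.
    print(f"{safe_utilization(12):.0%}")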

Most organizations will find they need to dedicate at least a month to the capacity planning and testing phases to determine the appropriate P-to-V server ratios for their environment, Ruest says.

Finally, EMA’s Mann advises IT teams to seek out peers with similar application environments at large annual meetings like VMware’s VMworld conference or Citrix’s Synergy, or through local user groups. “Most attendees are more than willing to share information about their environment and experiences,” he says. Rather than relying on vendor benchmarks, get real-world examples of what has worked and what hasn’t at organizations with your same profile. “You’ll have a better chance at setting realistic expectations.”

Gittlen is a freelance technology writer in the greater Boston area who can be reached at sgittlen@verizon.net.
