When BMO Financial Group sought a location for its new disaster recovery (DR) centre, the firm knew it wanted to put the facility a certain distance from its existing tech buildings. Industry observers say distance between data centres is one aspect financial corporations should consider when designing for operational resilience.
BMO, one of Canada’s largest financial institutions with 34,000 employees and $256 billion in assets, decided to put the new edifice in Barrie, Ont., in part because the municipality, some 90 kilometres from Toronto, was close enough to another BMO data building to facilitate failover functions should one of the computing centres go offline, yet far enough away that a disaster at one place probably wouldn’t hit the other.
“They are spread apart, but within the required distance you’d want them to be to accomplish the data transfers that you need to do, and far enough apart…for contingency standards,” said Karen Metrakos, executive managing director, BMO technology and solutions.
By putting its new data centre in Barrie, BMO is following one best practice, but according to industry insiders and observers, there are many other elements companies should keep in mind when designing for resilience.
According to Ian Miles, president of Toronto Hydro Telecom Inc., a data service provider that counts financial firms among its customers, distance between data centres is a prime concern. He said companies should keep computer buildings 25 to 100 kilometres apart, although improving network technology may eventually let enterprises place their facilities farther apart than they could before.
“Applications that are residing on servers that support native Ethernet for connectivity can run quite long distances, as long as the latency is reasonable,” he said. “The bulk of new applications being developed are IP- or Ethernet-based. Distance will become less and less of a limiting factor.”
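Miles’s point about latency can be put in rough numbers. The sketch below is a back-of-the-envelope illustration, not anything BMO or Toronto Hydro Telecom described; it assumes signals traverse optical fibre at about 200,000 kilometres per second, roughly two-thirds the speed of light in a vacuum.

```python
# Back-of-the-envelope propagation delay between two data centres.
# Assumption: light travels through optical fibre at roughly
# 200,000 km/s (about two-thirds its speed in a vacuum).
FIBRE_SPEED_KM_PER_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay, in milliseconds, over fibre."""
    one_way_seconds = distance_km / FIBRE_SPEED_KM_PER_S
    return 2 * one_way_seconds * 1000

for km in (25, 100, 500):
    print(f"{km:>4} km apart: ~{round_trip_ms(km):.2f} ms round trip")

# Prints:
#   25 km apart: ~0.25 ms round trip
#  100 km apart: ~1.00 ms round trip
#  500 km apart: ~5.00 ms round trip
```

Synchronous replication pays that round trip on every write the remote site must acknowledge, which is why latency, rather than distance itself, is the real constraint, and why the 25-to-100-kilometre guideline could stretch as latency budgets loosen for IP- and Ethernet-based applications.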
Miles said resilience is as much an organizational task as it is a technology endeavour. In his view, the firms best prepared to weather problems treat disaster recovery as a C-suite issue rather than an IT problem. Companies should make resilience part of the corporate culture, he suggested, if they want to come through blackouts and other disruptions with business operations intact.
“If you’re talking about financial companies, I’d say the majority do think that way now,” Miles said. “It’s so critical to their business and the risk of any kind of failure is so high.”
Miles said the decreasing cost of data transport technologies might cause some companies to revise their disaster recovery strategies. “If [companies] haven’t looked at network solutions in the last couple of years, they’re often quite surprised by how quickly the cost has come down, and how much the bandwidth has gone up,” he said. “Once their eyes are open to that, they might go back and reconsider their plan. Whereas they might have been thinking there were one or two really core applications that they had to back up, with the expanded bandwidth they can afford to back up all six or seven of their critical applications.”
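The arithmetic behind Miles’s bandwidth observation is straightforward. The following sketch uses hypothetical figures (the 500 GB data set, the link speeds, and the 80 per cent efficiency factor are illustrative assumptions, not numbers from the article) to show how much a faster link changes what fits in a nightly backup window.

```python
# Rough estimate of backup time at different link speeds.
# All figures are hypothetical; 0.8 approximates protocol overhead.
EFFICIENCY = 0.8

def transfer_hours(data_gb: float, link_mbps: float) -> float:
    """Hours needed to move data_gb gigabytes over a link_mbps link."""
    bits_to_move = data_gb * 1e9 * 8
    effective_bps = link_mbps * 1e6 * EFFICIENCY
    return bits_to_move / effective_bps / 3600

DATA_GB = 500  # assumed combined size of the critical applications

for mbps in (100, 1000, 10000):
    print(f"{mbps:>5} Mbps link: {transfer_hours(DATA_GB, mbps):5.1f} hours")

# Prints (approximately):
#   100 Mbps link:  13.9 hours
#  1000 Mbps link:   1.4 hours
# 10000 Mbps link:   0.1 hours
# At 100 Mbps only one or two applications fit in an overnight
# window; at 1 Gbps all six or seven fit comfortably.
```

On the slower link, only a fraction of the data can move overnight; a tenfold increase in bandwidth turns the same job into an easy nightly run, which is exactly the kind of reconsideration Miles describes.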
Thomas Coughlin is president of Coughlin Associates, a data storage advisory firm in Atascadero, Calif. He said it’s important to mind the bits and bytes, as each connection technology has its pros and cons.
For example, “delivery of large blocks of data is somewhat problematic on traditional Ethernet connections,” Coughlin pointed out. “What’s really going to open that up is 10-Gig E (10-gigabit Ethernet). Then you’ll have a fat enough pipe to handle some of this traffic.
“On the other hand,” Coughlin added, “the threat of iSCSI and Gig E, as well as SATA drives, has led to some lowering of the cost of fibre channel and traditional storage area networking architectures. These things are becoming more affordable.”
Coughlin Associates and Peripheral Concepts Inc., another IT consulting firm, conducted a survey of disaster recovery practices earlier this year. The researchers found that although more than 50 per cent of companies surveyed said their business would be at risk if they could not recover data within eight hours, 17 per cent still had no disaster recovery facilities.
“It must be the economic situation,” Coughlin said. “They recognize that they have a vulnerability, but they don’t have the capital to invest.” He added that falling storage and data transport prices could ease that constraint in the future.
Technology, price, distance between data centres — all aspects of disaster recovery that BMO has considered or should keep in mind as it maps out the Barrie facility. But according to Metrakos, there’s more to designing a tech edifice than cold hard numbers. She said BMO also took into account some of life’s more qualitative aspects when deciding where to build.
“We were really looking for something that would meet our technical requirements, financially was sound, strategically was sound — [that] from an employee demographics perspective would be a good place to put a data centre, not only for our existing employees but also in terms of being able to hire in the future,” Metrakos said.
She also said BMO considered nearly 20 locations before picking Barrie. “We…looked at quality of life in the communities, the standard of living, the expected growth of the community. At the end of it, Barrie came out ahead.”