Symantec Corp. has released the results of its fifth annual Global IT Disaster Recovery survey.
According to the report, 93 per cent of organizations have had to execute their disaster recovery plans, and the average cost of implementing DR plans for each downtime incident is US$287,000. The median cost in Canada is US$496,500. The average budget for disaster recovery initiatives worldwide is US$50 million.
Responses within Canada reflected the worldwide results, but percentages differed noticeably in terms of virtualization backup practices. Only 10 per cent of Canadian respondents do not back up data on virtualized systems, compared to 36 per cent worldwide.
“The more stringent requirements in general were in North America,” said Dan Lamorena, senior manager of high availability and disaster recovery solutions at Symantec. Overall recovery times are faster and the cost of downtime is higher in Canada and the U.S. when compared to other countries surveyed, he noted.
The average time it takes to “achieve skeleton operations after an outage” is three hours. To be fully “up and running after an outage,” the average is four hours, states the report.
The trends reflected in Symantec’s report generally mirror those of Info-Tech Research Group Ltd.’s mid-sized enterprise customer base, according to Darin Stahl, lead analyst at the London, Ont.-based firm.
Executive-level involvement in DR plans is rising. In 2007, 55 per cent of respondents reported DR committees involved the CIO, CTO or IT director; this dropped to 33 per cent in 2008. The number rose to 67 per cent in 2009, according to the report. Symantec attributes the rise to DR “becoming a competitive differentiator” and other factors including the size of DR budgets and the impact on customers.
The increased level of executive involvement is a significant issue, Stahl noted. When executives are not actively involved in DR planning and business impact analysis (BIA), the IT group will often build an over-engineered plan, he said.
“You get this sort of notion from the business that everything’s critical … because they’re not going to assume that something is not critical. They’re not going to second-guess that maybe off-the-cuff comment from the executive,” said Stahl.
Info-Tech sees DR costs trend downward when executives get involved in the BIA and see how those costs line up, he pointed out. “The more structured that conversation takes place, the more a detailed methodology is followed, the likelihood that they’re going to achieve an optimal state of alignment and costs,” he said.
Recovery time objectives fell from five hours in 2008 to four hours in 2009. “In 2009, 75 per cent of tests were successful, more than doubling the 30 per cent of tests that met RTO objectives in 2008. While this rate also parallels executive involvement, they may or may not be correlated,” states the report.
One in four DR tests fail. This figure marks an improvement, however, when compared to previous years. In 2007, 50 per cent of DR tests failed. The number dropped to 30 per cent in 2008 and 25 per cent in 2009, according to the report. “Only 15 per cent say that tests have never failed,” states Symantec. “Although this is good news, one test failure in four is still alarmingly high.”
But the number doesn’t alarm Stahl. “Tests are meant to fail … it’s not alarming unless I’m getting to the point where customers are actually trying to recover and failing. That means they’re not testing, doing that remediation cycle through their DR,” he said.
“DR is a living thing. The infrastructure is continually changing and morphing and it would be unreasonable to expect enterprises to be 100 per cent on the test year after year. If they are, that means they’re probably not doing anything else in the infrastructure of the business,” he said.
Reasons cited for test failures included staff errors (47 per cent), technology failure (40 per cent), inappropriate processes (37 per cent) and out-of-date plans (35 per cent), states the report. Insufficient technology, which ranked third on the list of reasons for test failure in 2008, dropped to fifth place this year, notes Symantec.
While 96 per cent of IT organizations have tested their DR plans at least once, roughly 35 per cent of organizations test only once a year or less often, according to the report. “This is 12 per cent lower (and an improvement) from the 47 per cent that reported minimal testing in 2008. However, Symantec and most IT experts believe that every organization should be testing more frequently than once a year,” states the report.
While full end-to-end tests used to be the norm, according to Stahl, the trend is shifting to unit tests. “What happens now is they target tests (to) applications or services where they’ve made significant changes because they just can’t sustain a full test. It’s too big, it’s too much, it’s too complex,” he said.
Organizations aren’t performing more tests because of a lack of people’s time (48 per cent), disruption to employees (44 per cent), budget (44 per cent) and disruption to customers (40 per cent), states the report.
Rob Ayoub, global program director of network security at Frost & Sullivan Ltd., found the impact of testing on customers and revenue a “very good finding” and one of the most interesting results of the survey. “That’s one of the things at the heart of disaster recovery that doesn’t get talked about a lot,” he said.
“Everyone says ‘test your plans, test your plans’ … but how do you test your plans on a real live working business without impacting your service levels?” said Ayoub. “I’m not sure anyone has a really great answer for that.”
The study focused on organizations with existing plans and didn’t ask organizations without DR plans why they lack them, Ayoub pointed out. “I think testing is definitely a lot of it … there are a lot of pieces that discourage organizations,” he said.
More than one-quarter (27 per cent) of respondents do not test their virtual servers as part of their DR plans, and more than one-third (36 per cent) do not perform regular backups of data on virtualized systems, states the report.
The lack of storage management tools (53 per cent), lack of backup storage capacity (52 per cent) and lack of automated recovery tools (50 per cent) were reported as the top challenges in “protecting mission-critical data and applications in virtual environments.”
One of the most significant points raised in the survey, according to Lamorena, is the set of issues around virtualization. “As people are becoming more familiar with the technology and they are moving more mission-critical applications to these environments, they are encountering some of the challenges and are starting to look at what solutions are really going to help deal with this more complex virtual environment,” he said.
Based on the survey findings, Symantec recommends that organizations curb the costs of downtime by implementing automation tools that minimize human involvement, reduce the impact of testing on clients and revenue through non-disruptive testing methods, and include those responsible for virtualization in disaster recovery planning.