Keeping data secure, tracking who uses it and managing it in a way that meets backup windows and keeps information available to customers — especially after an interruption in service or a disaster — are among the top issues for IT executives, according to users who took part in a panel discussion yesterday at Storage Networking World.
One of the major pain points raised by users at the conference, which ends Thursday, is managing hundreds of terabytes to petabytes of data in a way that protects it from outside attacks and keeps it from being compromised or lost during transport.
Ralph Barber, CIO at law firm Holland & Knight LLP in Tampa, Fla., said Hurricane Wilma knocked out several of the firm's branch offices this week. The firm, which often handles electronic discovery in connection with regulatory requirements or litigation, has about 450 servers and two storage-area networks that support about 3,000 users.
Barber replicates data between his two data centers — one in Tampa, the other in Denver — for disaster recovery. But the digital tape he also uses to transport information between offices did not help restore systems quickly enough after Wilma hit the state on Monday.
“Our challenges have been putting together a suite of services that will allow for disaster recovery and business continuity,” he said. “This morning [Wednesday], the Miami office and the West Palm Beach office [are] down. Fort Lauderdale just came back up about 10 minutes ago.”
Barber said that better real-time, online data replication tools would help him set up emergency facilities during a disaster.
“We’re really trying to mitigate [business continuity issues] through backup and replication. With the Miami office down, I have lawyers who can’t service clients and can’t make money for the firm,” Barber said. “What really is attractive to me is to be able to flash over data to a restore and move it to a data center, and then move it to a local office for efficiency.”
Barber said his company also uses United Parcel Service Inc. to move backup tapes between some 30 branch offices. Some of the tapes are encrypted, but others are not. “That’s a risk,” he said.
One goal Barber is working toward is moving data over his company’s existing WAN in an encrypted form. That, he said, would cut transportation costs, save man-hours and reduce the risk of losing tapes that are often moved between offices for case management.
“We’ve received tapes back that are chopped … and eaten by the dog [so to speak],” Barber said, adding that not managing tapes properly can cost a company millions of dollars in fines.
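Barber did not describe a specific implementation, but the general idea of encrypting backup data before it crosses the WAN can be sketched briefly. The Python snippet below is a minimal illustration under assumed details; the file names and the choice of the open-source cryptography package's Fernet recipe are assumptions, not anything Holland & Knight has said it uses.

```python
# Illustrative sketch only: encrypt a backup archive before it is sent over
# the WAN, so the data is protected in transit without shipping physical tapes.
# Requires the "cryptography" package, whose Fernet class provides a simple
# authenticated symmetric-encryption recipe.
from cryptography.fernet import Fernet

def encrypt_backup(src_path: str, dst_path: str, key: bytes) -> None:
    """Encrypt the backup file at src_path and write the result to dst_path."""
    fernet = Fernet(key)
    with open(src_path, "rb") as src:
        # For very large backups, data would be encrypted in chunks rather
        # than read into memory at once; this is kept simple for illustration.
        ciphertext = fernet.encrypt(src.read())
    with open(dst_path, "wb") as dst:
        dst.write(ciphertext)

if __name__ == "__main__":
    # In practice the key would come from a key-management system shared by
    # both data centers; generating it inline here is only for illustration.
    key = Fernet.generate_key()
    encrypt_backup("case_files_backup.tar", "case_files_backup.tar.enc", key)
    # The encrypted file can then be moved over the existing WAN with whatever
    # transfer or replication tooling is already in place.
```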
Ken Black, global storage architect at Yahoo Inc. in Sunnyvale, Calif., said he is looking at different methods of encryption in light of numerous high-profile cases of data loss, and because strict federal guidelines require him to focus on security. “We have a group called ‘The Paranoids.’ They’re our security people, and they look for holes everywhere. And what’s irritating is that we’re finding them everywhere,” Black said.
Black’s company has dozens of data centers and anywhere from four to seven petabytes of data to manage. With so much data, storage administrators are struggling to stay on top of backups.
“We’re trying to find something that helps us meet our backup windows,” Black said. “That’s one of the biggest hurdles right now. It’s one we’re researching.”
Like many users at the conference, Black is testing disk-to-disk backup technologies, such as virtual tape libraries that emulate physical tape libraries to application servers but save data to disk before moving it off-line to tape for archiving.
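Black did not name specific products, and virtual tape libraries are typically delivered as appliances, but the staging pattern behind disk-to-disk-to-tape backup is simple enough to sketch. The paths, the one-week migration threshold and the helper names below are assumptions for illustration only.

```python
# Illustrative sketch of a disk-to-disk-to-tape flow: back up to fast disk
# first so the backup window stays short, then migrate older copies to the
# tape target later. A second directory stands in for the tape mount here.
import shutil
import time
from pathlib import Path

DISK_STAGE = Path("/backups/disk_stage")   # fast disk or VTL staging area (assumed path)
TAPE_ARCHIVE = Path("/backups/tape")       # stand-in for the tape target (assumed path)
AGE_LIMIT_SECONDS = 7 * 24 * 3600          # migrate copies older than one week (assumed)

def stage_backup(source_file: Path) -> Path:
    """Copy a backup image to the disk staging area (the fast, short step)."""
    DISK_STAGE.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(source_file, DISK_STAGE))

def migrate_old_backups() -> None:
    """Move staged copies past the age limit off to the archive target."""
    if not DISK_STAGE.exists():
        return
    TAPE_ARCHIVE.mkdir(parents=True, exist_ok=True)
    now = time.time()
    for staged in DISK_STAGE.iterdir():
        if now - staged.stat().st_mtime > AGE_LIMIT_SECONDS:
            shutil.move(str(staged), TAPE_ARCHIVE / staged.name)
```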
Cliff Dutton, chief technology officer at Ibis Consulting Inc. in Providence, R.I., said he is also concerned with the ability to track data, especially in a crisis. Ibis manages 200TB of network-attached storage as part of its electronic data discovery business.
He currently does not replicate data to an off-site facility because data restoration must be “almost instantaneous.”
“If something is down for even a few minutes, it’s a horrible problem for us. We’re under deadline regulatory requirements from the SEC or a judge,” Dutton said. “We can’t have a second tier [of data] replication with a slow restore. It would have to be a process that we could process that data in live, real time.”
Dutton said that because of the high cost of WAN bandwidth, it’s simply too expensive to keep data live at an off-site facility that is restorable in real time.
Storage Networking World is sponsored by Computerworld and the Storage Networking Industry Association.