From business-critical, decision-support information to endless amounts of possibly useful customer data, companies today are storing more information than many of them can handle. But data storage without a solid network architecture around it is increasingly like a filing cabinet that doesn’t have any labels inside and is often stuck shut.
“Companies that treat data like a strategic asset — that know how to manage and analyse data to gain new insights into their business — will be in the best position to capitalize on the new business models, the new market opportunities, and the new ways to attract and develop customers,” said Bill Russell, executive vice-president and COO of Hewlett-Packard Co.’s Enterprise Computing division, during a presentation in New York City in May.
“Storage capacity requirements are growing at a rate of over 50 per cent per year…and, of course, you don’t solve this storage problem by throwing more capacity at it. It requires a fundamental shift in how you design and implement your storage infrastructure,” Russell said.
By far, the most popular concept emerging in storage management is the storage area network (SAN), which brings all of the storage devices together on one network to better distribute server access to those devices.
Commonly touted benefits of SANs include: greater efficiency since servers can utilize various storage devices instead of all waiting for one; better overall network performance as storage and back-up activities are moved off the LAN and onto the SAN subnetwork; and disk mirroring for redundant protection.
Vendors confidently state that among the advantages of a SAN is a lower cost of ownership, but, as with any technology, that calculation depends on many factors such as cost of the hardware, security of the design, and the big kicker in storage management: interoperability.
In terms of secure design, it’s important to ensure that the effort to grant servers access to multiple storage devices doesn’t allow some servers to make a wild grab at any storage device they can see.
Michael Casey, a research director with consultancy Gartner Group Inc. in San Jose, Calif., said this is a problem particularly with Windows NT.
“NT, when it boots up, tries to take over all of the storage it can see,” Casey said. “Many of the Unix variants aren’t as bad…you have to explicitly tell them these (storage devices) exist before the server tries to take control of them.”
Furthermore, as Bruce Gordon, director of strategic planning with CLARiiON, a division of Data General Corp., in Southboro, Mass., explained, allowing every server to openly see every device presents a security threat.
“If one of those machines is hooked up to the Internet,” Gordon said, “a malicious person could get in there and reconfigure the storage software to start writing over another machine’s storage.”
Casey said the disks need to be masked so only the servers with permission to access particular devices can see those devices. This can be done through topology, through software, or by virtue of Fibre Channel host adapters all having unique World Wide Names, similar to an IP address, he said.
“You can use that unique host adapter name to map the host adapter to a specific set of disk drive volumes that the subsystem presents, so that host adapter and the host that it’s in can only see the disk drives that are assigned to it. That way, you can share a pool of 100 drives among multiple servers by assigning particular logical volumes to particular hosts, even though the hosts are sharing the same connection,” Casey said.
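In code terms, the masking Casey describes amounts to a lookup table keyed on each host adapter's World Wide Name. The sketch below is purely illustrative — the WWNs, volume names and function are invented for the example, not drawn from any particular array's software — but it shows the idea: the subsystem answers each host adapter only from its own entry.

```python
# Hypothetical sketch of WWN-based LUN masking; all names are invented.

# Masking table: each host adapter's World Wide Name maps to the logical
# volumes the subsystem will present to that host.
masking_table = {
    "50:06:0e:80:00:c3:a1:01": ["lun_0", "lun_1"],   # database server
    "50:06:0e:80:00:c3:a1:02": ["lun_2"],            # mail server
}

def visible_volumes(initiator_wwn):
    """Return only the volumes assigned to this host adapter.

    Hosts sharing the same Fibre Channel connection see different slices
    of the same drive pool, because the subsystem answers each initiator
    from its own entry in the masking table.
    """
    return masking_table.get(initiator_wwn, [])

# A host whose WWN is not in the table sees nothing -- which is what keeps
# a server from grabbing every disk it can reach when it boots.
print(visible_volumes("50:06:0e:80:00:c3:a1:01"))   # ['lun_0', 'lun_1']
print(visible_volumes("50:06:0e:80:00:c3:a1:ff"))   # []
```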
Gordon and Casey both said this access management could be done in a switch, but agreed that such an approach is slow and expensive.
“[This] requires that the switch crack the packets open and see where they’re going, which isn’t normally something a switch has to do. Either it slows it down or the switch has to be a more powerful and more expensive switch to handle the same amount of traffic. So it’s the wrong place architecturally to do it,” Casey said.
Both agreed that the right place to do this management is in software, with an array-based topology. Gordon explained this arrangement allows the switch to manage the ports but doesn’t require the switch to look at packets and make decisions, as that is all handled by the software.
Another area of vendor buzz in the SANs arena is server-free back-up. Contrary to the name, server-free back-up does involve a server, but instead of the data moving into the server and then back out to the back-up device, the server merely controls the data movement directly from the storage device to the back-up device, thus minimizing the load on the server.
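The distinction is easier to see side by side. In a conventional back-up, every block is read into the server and written back out; in a server-free arrangement the server only issues copy directives and the blocks move device to device. The sketch below is a simplified illustration with invented classes and methods, not a real back-up API.

```python
# Illustrative contrast between conventional and server-free back-up.
# DiskArray, TapeLibrary and their methods are invented stubs.

class DiskArray:
    def read_blocks(self, start, length):
        # In a conventional back-up, the data crosses the server here.
        return b"\x00" * length

    def copy_blocks_to(self, target, start, length):
        # Analogue of a third-party (extended) copy: the array ships the
        # blocks straight to the back-up device, bypassing the server.
        target.write_blocks(b"\x00" * length)

class TapeLibrary:
    def write_blocks(self, data):
        pass

def conventional_backup(disk, tape, extents):
    """Every block flows through the server: disk -> server -> tape."""
    for start, length in extents:
        data = disk.read_blocks(start, length)
        tape.write_blocks(data)

def server_free_backup(disk, tape, extents):
    """The server only directs the copy; blocks move disk -> tape."""
    for start, length in extents:
        disk.copy_blocks_to(tape, start, length)

disk, tape = DiskArray(), TapeLibrary()
extents = [(0, 4096), (4096, 4096)]
conventional_backup(disk, tape, extents)   # server shoulders the data
server_free_backup(disk, tape, extents)    # server only coordinates
```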
In this often confusing realm of topological hype, vendors will tout one system as superior without clearly explaining that each storage method has benefits that depend on a customer’s specific storage needs.
For example, many vendors claim that network-attached storage (NAS) is in competition with SANs, but Gartner Group’s Casey said the two storage methods are located in different parts of the network and don’t even operate on the same concept. He said NAS involves a LAN-attached file server responding to file requests over a network file-access protocol.
“SANs are a back-end network…that tie the servers together with storage. They’re not moving files, they’re moving SCSI blocks,” Casey said.
So a NAS system could have clients attached on one end and still be connected to a SAN or other back-up system on the other end, just like any other server, he said.
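Casey’s distinction comes down to what travels on the wire: a NAS device answers file-level requests over the LAN, while a SAN carries block-level SCSI requests on a back-end network. The sketch below uses invented, simplified request shapes purely to make that contrast concrete.

```python
# Invented, simplified request shapes to contrast NAS and SAN traffic.

# NAS: a file server on the LAN answers file-level requests; clients
# name files, not disk blocks.
nas_request = {
    "protocol": "file",
    "operation": "read",
    "path": "/projects/q3/report.doc",
}

# SAN: servers on the back-end network exchange SCSI block requests
# with storage; what moves are numbered blocks on a logical volume.
san_request = {
    "protocol": "scsi-block",
    "operation": "read",
    "logical_volume": 7,
    "start_block": 20480,
    "block_count": 64,
}

# A NAS file server can sit on both networks: it answers nas_request-style
# traffic from LAN clients, and satisfies it by issuing san_request-style
# traffic to the SAN or back-up system behind it.
```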