There’s still a 10- to 12-year development road map ahead for spinning disk drives, but the technology’s inherent limitations will eventually force a shift to something new, Yoshida says.
The spinning disk is “probably the only high technology that’s still very mechanical,” Yoshida says when asked why cost per capacity is improving slowly. “It has a motor that spins and it has an arm that has to be moved to the right track. There is a limit to how fast we can spin that disk.”
Enterprises have begun deploying small numbers of flash-based solid-state disks to serve applications whose performance needs aren’t met by mechanical hard drives. SSDs can be anywhere from 10 to 17 times faster than hard drives, while also costing 10 to 20 times more, Yoshida says.
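To put those figures in perspective, a back-of-the-envelope calculation (using illustrative numbers taken from the middle of the ranges Yoshida cites, not actual product prices or benchmarks) shows why SSDs can still make sense for performance-critical workloads despite the steep per-gigabyte premium:

```python
# Back-of-the-envelope comparison using the ranges quoted above.
# All figures are illustrative, not actual product prices or benchmarks.
hdd_price_per_gb = 1.0      # normalized baseline
ssd_price_per_gb = 15.0     # roughly 10 to 20 times the hard drive price
ssd_speedup = 13.0          # roughly 10 to 17 times faster

capacity_premium = ssd_price_per_gb / hdd_price_per_gb    # cost per gigabyte
performance_premium = capacity_premium / ssd_speedup      # cost per unit of speed

print(f"SSD costs about {capacity_premium:.0f}x more per gigabyte,")
print(f"but only about {performance_premium:.1f}x more per unit of performance.")
```

In other words, the speed advantage offsets most of the price gap when the metric is performance rather than raw capacity, which is why flash shows up first in front of the most demanding applications.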
While the price can be expected to drop, Yoshida says it may not happen as fast as some people expect. Flash is not produced in the same volumes as spinning disks, so the price gap between SSDs and hard drives will be hard to close, he says.
However, it is possible that flash will someday overtake hard disk drives as the primary data storage technology. “Eventually, the thought is that some sort of solid state disk will replace [spinning disk drives],” Yoshida says.
But the flash technology used in enterprise storage products today can wear out after about 100,000 writes, Yoshida notes. Vendors are trying to make flash more durable, but ultimately that may not be enough to make it a suitable replacement for hard disks, Yoshida suggests. Other non-volatile storage technologies, such as phase-change RAM (PCRAM), are in development and might ultimately prove more durable, he says.
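A rough sketch of what that endurance figure implies, assuming an idealized controller that wear-levels writes evenly across the whole drive (the capacity, workload and write-amplification numbers below are hypothetical):

```python
# Rough lifetime estimate for a flash drive rated at ~100,000 writes per cell.
# Assumes perfect wear leveling; all workload figures below are hypothetical.
endurance_cycles = 100_000       # program/erase cycles each cell can tolerate
capacity_gb = 146                # drive capacity
host_writes_gb_per_day = 500     # sustained write workload from servers
write_amplification = 10         # extra internal writes (garbage collection, etc.)

total_writable_gb = endurance_cycles * capacity_gb
effective_writes_per_day = host_writes_gb_per_day * write_amplification
lifetime_years = total_writable_gb / effective_writes_per_day / 365

print(f"Estimated lifetime: about {lifetime_years:.0f} years")
```

The point of the exercise is that the raw endurance number only tells part of the story: how evenly the controller spreads writes, and how many extra internal writes it generates, determine whether a drive lasts years or wears out early.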
Because hard disk drives are in such wide use and will be for the foreseeable future, vendors are pursuing several strategies to make them more efficient. One is striping, which spreads data across a wider pool of storage devices so that no single drive becomes a bottleneck. Thin provisioning, meanwhile, presents servers with more capacity than has physically been allocated, so space can be provisioned quickly without immediately adding more disk.
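A minimal sketch of those two ideas together (a simplification for illustration, not any vendor’s actual implementation): logical chunks are assigned to devices round-robin, and physical space is consumed only when a chunk is first written.

```python
# Toy volume combining striping with thin provisioning.
class ThinStripedVolume:
    def __init__(self, num_devices, chunk_size):
        self.num_devices = num_devices
        self.chunk_size = chunk_size
        self.chunk_map = {}                 # logical chunk -> (device, physical slot)
        self.next_slot = [0] * num_devices  # next free slot on each device

    def locate(self, logical_byte):
        chunk = logical_byte // self.chunk_size
        if chunk not in self.chunk_map:
            # Striping: round-robin placement spreads I/O across all devices.
            device = chunk % self.num_devices
            # Thin provisioning: physical space is claimed only on first write,
            # not when the volume is created, so capacity can be over-allocated.
            self.chunk_map[chunk] = (device, self.next_slot[device])
            self.next_slot[device] += 1
        return self.chunk_map[chunk]

vol = ThinStripedVolume(num_devices=8, chunk_size=1 << 20)   # eight disks, 1MB chunks
print(vol.locate(5 * (1 << 20)))    # chunk 5  -> device 5, slot 0
print(vol.locate(13 * (1 << 20)))   # chunk 13 -> device 5, slot 1
```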
Another important technology is a global cache, which allows multiple storage processors access to the same cache, Yoshida says. Such technology is available in Hitachi’s USP V, the EMC DMX and IBM DS8000, he says.
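In outline, the benefit is that any processor can satisfy a read from a block another processor has already staged. A minimal sketch of that behavior (a deliberate simplification, not the actual USP V, DMX or DS8000 design):

```python
import threading

class GlobalCache:
    """One cache shared by every storage processor in the system."""
    def __init__(self):
        self._lock = threading.Lock()
        self._blocks = {}                    # block address -> cached data

    def read(self, address, backend_read):
        with self._lock:
            if address in self._blocks:      # hit, no matter which processor staged it
                return self._blocks[address]
        data = backend_read(address)         # miss: fetch from the back-end disks
        with self._lock:
            self._blocks[address] = data
        return data

cache = GlobalCache()
fetch_from_disk = lambda addr: f"data@{addr}"   # stand-in for a disk read
print(cache.read(42, fetch_from_disk))   # processor A stages block 42
print(cache.read(42, fetch_from_disk))   # processor B hits the same cached copy
```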
More advances will be needed, because several trends are putting more stress on storage systems. Demand for raw capacity rises constantly, and increasingly powerful servers are demanding ever more I/O performance on top of it.
“If you look at these new [Intel and AMD] processors they are like mainframes,” Yoshida says. “They have power we didn’t dream about five years or so ago.”
Beyond the raw power of individual processors, servers now ship with many cores and host multiple virtual machines, multiplying the I/O streams that storage must serve and potentially slowing access to data.
“You have these different technology cycles that overlap each other,” Yoshida says. “Sometimes the processors are ahead and sometimes the storage is ahead. Right now the trend is with the servers, and the way we use the servers, with multicore and hypervisors and virtual machines, is going to drown the storage systems.”
Adapting storage systems to new standards such as Fibre Channel over Ethernet will also be a challenge, Yoshida says.
“The bandwidth to the storage has doubled,” he says. “This year we’re going to see 8-Gig Fibre Channel. On the horizon is 10-Gig Fibre Channel over Ethernet, going to 40-Gig, going to 100-Gig. The pipes to the storage are scaling up, and all these things are going to require a storage system that will scale up.”
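For a rough sense of scale, the link speeds he lists translate into the following approximate payload rates (a simple conversion that ignores encoding and protocol overhead, so real-world throughput is somewhat lower):

```python
# Convert nominal link speeds to approximate megabytes per second.
# Ignores encoding and protocol overhead, so usable throughput is lower.
link_speeds_gbit = {
    "8-Gig Fibre Channel": 8,
    "10-Gig FCoE": 10,
    "40-Gig Ethernet": 40,
    "100-Gig Ethernet": 100,
}
for name, gbit in link_speeds_gbit.items():
    print(f"{name}: roughly {gbit * 1000 // 8} MB/s")
```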