Down economy or no, the growing appetite for enterprise data storage won’t be sated anytime soon, if ever. The rise of data-heavy multimedia files, new customer touchpoints, evolving reporting and compliance standards, and other trends are contributing to near-exponential growth in the amount of data created and stored in the digital universe.
In 2006, according to research firm IDC, 161 billion gigabytes of data was created – three million times the information in all the books ever written. That will grow to 988 billion gigabytes (988 exabytes) by 2010. And while 70 per cent of that data will be created by individuals, organizations will be responsible for storing 85 per cent of it.
Against that backdrop, it’s not hard to see why these five hot – and cool – storage technologies are significant in the market today.
1. Solid State Drives
DRAM- and Flash-based drives are finding their way onto the market – EMC and Samsung are among the companies that have announced products – and the fact that they draw considerably less power, both to run and to cool, resonates with enterprises wrestling with high power costs and the optics of green technology use.
“The long and short of it is, to use current parlance, they’re very green and very fast,” says Mark Peters, analyst with Enterprise Strategy Group. How much faster than their spinning hard drive brethren? “It’s orders of magnitude,” Peters says. “While you’re talking milliseconds for disk, you’re talking microseconds for solid state.” With mechanical seek time removed, especially for random I/O, solid state is far superior, with access speeds faster than 200 Mbps. The drives are also quieter.
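To put the milliseconds-versus-microseconds gap in concrete terms, a quick back-of-the-envelope comparison shows how access time alone translates into random I/O capacity. The latency figures below are illustrative assumptions, not vendor specifications:

```python
# Rough comparison of the random-I/O ceiling implied by access time alone.
# Latency values are illustrative assumptions, not measured figures.

hdd_access_time_s = 0.005      # ~5 ms seek plus rotational latency for a spinning disk
ssd_access_time_s = 0.000100   # ~100 microseconds for an early-generation SSD

def max_random_iops(access_time_s: float) -> float:
    """Theoretical ceiling on random I/O operations per second at queue depth 1."""
    return 1.0 / access_time_s

hdd_iops = max_random_iops(hdd_access_time_s)
ssd_iops = max_random_iops(ssd_access_time_s)

print(f"Disk:  ~{hdd_iops:,.0f} random IOPS")
print(f"SSD:   ~{ssd_iops:,.0f} random IOPS")
print(f"Ratio: ~{ssd_iops / hdd_iops:,.0f}x")   # roughly the 'orders of magnitude' Peters describes
```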
On the downside, SSDs are more expensive – way more expensive – than hard disks. “You can easily be talking 10 to 30 times the price,” Peters says. But that number comes with a caveat – if you define the price in cost-versus-I/O terms, rather than cost per gigabyte, the multiples get smaller. For a small percentage of data centre needs, like memory for heavily transactional processes, it might make sense to view it in terms of I/O, he says.
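One way to see why the multiple shrinks is to price the drives per I/O operation rather than per gigabyte. The prices, capacities and IOPS figures in this sketch are hypothetical placeholders chosen only to illustrate the arithmetic, not actual street prices:

```python
# Hypothetical pricing sketch: cost per gigabyte vs. cost per IOPS.
# All numbers are illustrative assumptions, not quoted prices.

drives = {
    #            price ($), capacity (GB), random IOPS
    "15K HDD": (500,   300,    180),
    "SSD":     (5000,  146,  10000),
}

for name, (price, capacity_gb, iops) in drives.items():
    cost_per_gb = price / capacity_gb
    cost_per_iops = price / iops
    print(f"{name}: ${cost_per_gb:7.2f} per GB   ${cost_per_iops:7.2f} per IOPS")
```

Measured per gigabyte, the hypothetical SSD here costs roughly 20 times as much as the disk; measured per I/O operation, it is actually cheaper – which is Peters’ point about heavily transactional workloads.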
Mean time between failure numbers are similar – about one to two million hours – but critics point out the limit to the number of times an individual block can be written to as an Achilles’ heel. (Blocks can be written to about 10,000 times; there’s no limit to the number of times they can be read.) Peters says there are workarounds, like those incorporated in EMC’s Symm flash. And tools like thin provisioning can help use the space more efficiently. “We’re a very ingenious race,” Peters says. “We’ll find ways around that.”
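The write-endurance concern lends itself to a rough lifetime estimate: multiply capacity by the per-block write limit and divide by the daily write volume. The drive size, workload and perfect-wear-levelling assumption below are illustrative, not drawn from any vendor’s data sheet:

```python
# Back-of-the-envelope endurance estimate for a flash SSD.
# Assumes ideal wear levelling spreads writes evenly across all blocks;
# capacity and workload figures are illustrative assumptions.

capacity_gb = 146                 # hypothetical drive size
write_cycles_per_block = 10_000   # the per-block write limit cited above
daily_writes_gb = 500             # hypothetical sustained write workload

total_writable_gb = capacity_gb * write_cycles_per_block
lifetime_days = total_writable_gb / daily_writes_gb

print(f"Total writable data: ~{total_writable_gb / 1_000:,.0f} TB")
print(f"Estimated lifetime:  ~{lifetime_days / 365:,.1f} years at {daily_writes_gb} GB/day")
```

Under those assumptions the drive could absorb several years of heavy writing before hitting the block limit, which is why wear levelling and space-efficiency tools such as thin provisioning blunt the Achilles’ heel in practice.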