Fibre Channel, the high-speed data transport protocol for storage area networks (SAN), is under increasing pressure as data centers move toward Ethernet for all data network traffic and SAS for hardware interconnects.
By no means is Fibre Channel down and out. In fact, recent figures indicate it’s still showing low single-digit, year-over-year growth. The protocol is currently used in $50 billion worth of equipment around the world, according to research firm Gartner.
Because corporate data centers are slow to change out technology, the Fibre Channel networking market will likely continue to show sluggish growth for the next five to 10 years. After that, Ethernet looks to be the protocol of the future.
“The counter winds against Ethernet are that there’s a lot of politics and a lot of religion around Fibre Channel,” said Forrester analyst Andrew Reichmann. “[But] Ethernet can do most everything Fibre Channel can do. Ethernet is cheaper, more ubiquitous.”
And it allows IT managers to find the best fit for specific application workloads, he said. “As those decisions move more toward a workload-centric approach, the one that makes the most sense is Ethernet. For example, it makes more sense to put your [virtual machine] infrastructure on iSCSI or NFS [network file system] because there’s very little difference in the performance you get compared to Fibre Channel.”
Slowing the move to Ethernet — for now — are the usual IT turf battles. Storage networks and hardware are purchased by the storage team, which controls that portion of the overall IT budget. Moving to an all-Ethernet infrastructure means giving that budget away to the networking group, according to Reichmann.
On top of that, some storage administrators simply don’t believe Ethernet is robust enough for data storage traffic. They’ve always used Fibre Channel and see it as the fastest, most reliable way to move data between servers and back-end storage.
“All those factors make it hard to move away from Fibre Channel,” Reichmann said.
Market research firm IDC predicts Fibre Channel will remain at the core of many data centers (supporting mission-critical mainframe and Unix-based applications), while most future IT asset deployments will leverage 10GbE (and later 40GbE) for the underlying storage interconnect. This transition will eventually lead to market revenue losses for Fibre Channel host bus adapter (HBA) and switch products.
As the Fibre Channel market shrinks, IDC predicts “rapid and sustained revenue growth” for 10GbE storage interconnect hardware such as converged network adapters (CNA), 10GbE network interface cards (NIC) and switches. (A CNA is simply a network interface card that provides access to both SANs and standard LANs by supporting multiple protocols such as Fibre Channel, iSCSI, Fibre Channel over Ethernet (FCoE) and straight Ethernet.)
SAS and Fibre Channel drives
Although Fibre Channel switch revenues have remained relatively flat over the past two years, according to Gartner, Fibre Channel disk drive sales have plummeted. Vendors are expected to stop shipping them within five years.
“We’re forecasting SAS will replace Fibre Channel because it provides more flexibility and it lowers engineering costs,” said Gartner analyst Stan Zaffos.
High-performance applications such as relational databases will be supported by SANs made up of 5% solid-state drives and 95% SAS drives, according to Forrester’s Reichmann. SAS (serial-attached SCSI) drives are dual-ported for resilience and just as fast as their Fibre Channel counterparts.
Unlike Fibre Channel, SAS shares a common backplane with cheap, high-capacity serial ATA (SATA) drives, so the two drive types are interchangeable and can be mixed within drive trays. That compatibility also allows for simpler data migration in a tiered storage infrastructure.
IP storage is a buyer’s market
Gartner recently released figures showing that over the past two years, shipments of Fibre Channel HBAs and switches have remained relatively flat, while 10GbE unit shipments have soared. According to Gartner, shipments of 10GbE NICs rose from 259,000 in 2009 to more than 1.4 million last year. And it’s a buyer’s market, with prices falling through the floor due to fierce competition among seven principal vendors, including Intel, Broadcom, QLogic and Emulex.
“Prices for 10GbE hardware are going into the Dumpster. The market has to stabilize around three vendors before we see something from the revenue side,” said Sergis Mushell, an analyst at Gartner.
According to Mushell, single-port 10GbE NICs sell for $43 to $60; a year ago they went for $100. Dual-ported 10GbE NICs now go for about $300. And CNA cards sell for between $700 and $1,000.
In comparison, a 4Gbps Fibre Channel HBA sells for $337, while an 8Gbps HBA ranges between $1,000 and about $1,900 on sites such as Pricegrabber.com.
In the first quarter of 2010, Fibre Channel switch revenue totaled $1.59 billion; a year later it hit $1.66 billion; and in the third quarter of 2011, it was $1.58 billion. (Those figures include both 4Gbps and 8Gbps modular and fixed switches.)
Sales of Fibre Channel HBAs — the adapter cards required in servers and storage arrays alike — have also struggled. In the first quarter of 2010, HBA revenue totaled $781 million. While it rose to $855 million in the first quarter of 2011, it dropped back to $811 million by the third quarter of the year.
According to IDC, as the economic recession abated in 2010, IT shops began server upgrades that had been deferred, with an increased use of server and storage virtualization. To manage those virtualized infrastructures, IT managers sought out a set of standard elements: x86 processors for computing, PCI for system buses, Ethernet for networking and SAS for hard drive and SSD interfaces.
“The goal is no longer to deploy and manage each element individually, but to build the optimal (e.g., densest, greenest, simplest) data center,” IDC wrote in its report “Worldwide Storage Networking Infrastructure for 2010 Through 2014.”
The underlying idea behind converged IT infrastructure is that companies want to deploy and manage IT assets in predefined “chunks” (e.g., a rack, an aisle or an entire data center) rather than as distinct products (e.g., servers, storage or network switches), according to IDC. Thanks to technologies like server and storage virtualization, these “chunks can then be allocated to support specific application sets. They can also be used much more efficiently,” IDC said.
Mazda makes a move
For example, Mazda’s North American Operations virtualized its application servers, cutting its server count from 300 physical machines to 33 VMware ESX host servers with 522 virtual machines (VM). The move reduced Mazda’s 2009-2010 IT budget by 30%, in large part by virtualizing nearly all of its applications, including IBM WebSphere, SAP, IBM UDB and SQL Server. But the virtualization project also caused storage network I/O bottlenecks because of all the added VMs.
“The backup times just kept growing, from six hours to eight hours all the way to 16 hours,” said Barry Blakeley, Mazda’s enterprise infrastructure architect. “In a workday, you can’t have a 16-hour backup window.”
So Mazda moved its 85TB of storage from NetApp arrays to Dell Compellent iSCSI storage arrays attached through 10GbE networks. Mazda chose a virtual backup product from Veeam Software, following Blakeley’s mantra for the project: “Keep it simple, stupid.”
“Once you deploy things correctly, you can get all the performance you need over iSCSI and you don’t need Fibre Channel,” he said.
Blakeley said the Veeam backup software, combined with a 10GbE network, helped open up his storage network bandwidth, dropping his backup windows to six hours and increasing backup performance to about 6Gbps. “The restore times were really quick too,” he added.
Fibre Channel over Ethernet
One networking protocol that’s gotten a big push — largely from Cisco — in recent years is Fibre Channel over Ethernet (FCoE). While Cisco doesn’t break out sales figures for FCoE-enabled switches, the FCoE protocol was used in a little less than 10% of all SAN deployments last year, according to Stuart Miniman, an analyst at the Wikibon Project, a Web 2.0 community for IT professionals. Those figures, Miniman said, represent a tremendous success for FCoE.
“Most deployments of FCoE are in blade server environments; customers don’t need to think about the technology, it just works the way current SANs do,” he stated in a recent blog post.
Miniman previously worked in the EMC CTO’s office, where he was “an evangelist” for FCoE.
In contrast, Gartner’s Mushell said his research firm is not predicting robust growth for FCoE.
Zaffos echoed that view. “Does it improve data availability? No. Does it improve performance? No. Does it simplify the infrastructure? Potentially. Does it simplify management? Perhaps,” he said. “But it’s not changing how LUNs are created. It doesn’t change how they’re zoned or being allocated.”
Unlike iSCSI, FCoE still requires organizations to employ a Fibre Channel administrator to handle storage provisioning.
“When you look at simplifying an infrastructure, many users follow the [keep it simple, stupid] method and choose to keep separate LAN and SAN infrastructures,” he said. “If you’re keeping two separate environments… the simplification of the infrastructure [by using FCoE] may be ethereal.”
Miniman argues that FCoE is a great way for an enterprise storage team to begin the shift to a more Ethernet-centric environment while maintaining the data loss resiliency of Fibre Channel.
Miniman points out that organizations using FCoE tend to have infrastructures with more than 200 servers, and therefore have both the budget for a full-time Fibre Channel admin and a need for the robust nature of the protocol. “If there’s less than 200 servers, they tend to use iSCSI,” he said.
FCoE encapsulates Fibre Channel frames in Ethernet frames, allowing for server I/O consolidation. In an FCoE environment, converged network adapters (CNAs) replace both NICs and HBAs. An FCoE-enabled switch then provides connectivity to both an existing LAN and a back-end SAN.
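To make that encapsulation concrete, here is a minimal sketch in Python of how a Fibre Channel frame is wrapped for the wire, assuming a simplified layout: the Ethernet header carries the FCoE Ethertype (0x8906), an FCoE header supplies the version bits and a start-of-frame (SOF) delimiter, and a short trailer carries the end-of-frame (EOF) delimiter. The 802.1Q tag, padding rules, Ethernet FCS and the exact SOF/EOF code points are glossed over here.

import struct

FCOE_ETHERTYPE = 0x8906  # Ethertype assigned to FCoE

def build_fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes,
                     sof: int = 0x2E, eof: int = 0x41) -> bytes:
    """Wrap an already-built Fibre Channel frame (FC header + payload + CRC)
    for transport on Ethernet. Simplified sketch: 802.1Q tag, padding rules
    and the Ethernet FCS are omitted; SOF/EOF values are placeholders."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(13) + bytes([sof])   # version + reserved bits, then SOF
    fcoe_trailer = bytes([eof]) + bytes(3)   # EOF, then reserved bytes
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

# A full-size FC frame (24-byte header plus up to 2,112 bytes of payload and a CRC)
# pushes the resulting Ethernet frame well past the standard 1,500-byte MTU,
# which is why FCoE links are run with larger "baby jumbo" frames.
frame = build_fcoe_frame(bytes(6), bytes(6), fc_frame=bytes(36))
print(len(frame))  # 14 (Ethernet) + 14 (FCoE) + 36 (FC) + 4 (trailer) = 68 bytes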
Bob Fine, product marketing director for Dell Compellent, argues that iSCSI can also run over the same lossless Ethernet that FCoE depends on, and asks, “So what’s the advantage to FCoE?”
Yet a little more than half of all Dell Compellent SAN ports remain Fibre Channel.
“I’d also say most of our customers are using multiple protocols. Very few are only using one,” he said. “That’s a nice thing about giving customers a technology choice. They can choose what technology is good for them.”
Sitting among Storage Networking World (SNW) attendees last month, Rod Patrick, lead IT systems engineer at Atmos Energy, was one of three people to raise his hand when a speaker asked the audience whether anyone was using FCoE for server-to-storage connectivity.
That was an improvement over last year, when Patrick was the only one to raise his hand.
“Even to this day I think we were early on in the game for FCoE,” he said. “It wasn’t totally without incidents or pain… but we’ve been very glad to be on the leading edge.”
Atmos Energy consolidates its network
Atmos Energy, one of the largest natural gas distributors in the U.S., built a brand new data center two and a half years ago. That allowed the company to start from scratch in consolidating its network infrastructure.
“It was mainly about cost and simplicity of the design,” said Patrick, who was hired about six months after the new data center was built. “You’re saving all sorts of gear as far as top-of-rack. So instead of having to run Fibre everywhere and Ethernet, you obviously just run Ethernet. It’s a shared path.”
Atmos had been using a combination of 2Gbps, 4Gbps and 8Gbps Fibre Channel for its SAN. Including three primary storage arrays, several midrange arrays and its archives, the company stores about 1 petabyte of data.
The company runs about 1,000 servers, 60% of which are virtualized. All of its virtual machines run over FCoE, as do about 100 physical machines that support higher-end databases.
Atmos deploys VDI
About a year and a half ago, Atmos deployed a virtual desktop infrastructure (VDI) that includes about 500 terminals in two call centers. Its VDI, based on VMware View, also runs over FCoE.
To help with VDI-related boot storms in the morning, Patrick and his team installed about 10TB of solid-state storage on the primary, high-end storage arrays to boost performance.
The company uses CNAs on its blade servers, which allow it to run both typical LAN data traffic over Ethernet and storage traffic using FCoE. But the company has yet to run FCoE all the way to its back-end storage.
The Fibre Channel storage arrays are connected to Cisco MDS switches, which are Fibre Channel only. Those MDS switches connect to Cisco Nexus 5000 switches, which connect to blade servers using FCoE.
“We are looking at FCoE direct-connect options to eliminate the [Cisco] MDS switches eventually, but that is a few years away probably,” Patrick said.
Patrick also said he wouldn’t be opposed to deploying iSCSI or NFS (network file system) as his IP-over-Ethernet storage protocol in the future, but he has run into performance problems with those protocols on high-end storage in the past.
For his virtual machine file system data, Patrick said he wanted to use a “more proven standard.”
“I’m kind of old school when it comes to Fibre Channel,” he said.
Forrester’s Reichmann has no doubt how the tension between Fibre Channel and Ethernet will get resolved. Despite its adherents, Fibre Channel is on a long, slow slide toward obscurity.
“In the long run, Ethernet is going to win,” he said. “How long it’s going to take to get there is unclear.”