Data Center Bridging and the Enablement of Fibre Channel over Ethernet

Posted by Mark Teter, Chief Technology Officer
September 5, 2011

In our last couple of blog posts, we’ve discussed the shortfalls of Ethernet as a data center networking backbone and introduced data center bridging to expand the use of Ethernet. However, converged enterprise data center networks must address the needs of networked storage.

In general, server-to-storage applications are intolerant of extended delays, technically referred to as nondeterministic I/O latencies. Many server workloads expect a response to data requests from networked storage devices within 1 to 10 milliseconds, and many servers are configured to assume that an I/O operation has been lost after a lengthy delay, such as sixty seconds without a response, which often causes the application to cease execution. Similarly, these same applications are intolerant of dropped packets: the retransmissions they trigger cause I/O queue backups that produce the same delayed response times and the same application terminations.
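As a concrete illustration, here is a minimal Python sketch that reads the per-command SCSI timeout on a Linux host. It assumes the sd driver's sysfs attribute at /sys/block/<device>/device/timeout, and the device name used is purely illustrative; once that many seconds pass without a response, the kernel gives up on the outstanding I/O.

    # Minimal sketch: inspect the SCSI command timeout that determines when the
    # kernel abandons an outstanding I/O. Assumes a Linux host with the sd driver;
    # the device name "sda" is illustrative.
    from pathlib import Path

    def scsi_timeout_seconds(device: str = "sda") -> int:
        """Return the per-command timeout, in seconds, for a SCSI block device."""
        timeout_file = Path(f"/sys/block/{device}/device/timeout")
        return int(timeout_file.read_text().strip())

    if __name__ == "__main__":
        dev = "sda"  # illustrative device name; adjust for your host
        print(f"{dev}: I/O abandoned after {scsi_timeout_seconds(dev)} s without a response")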

For this reason, Fibre Channel (FC) is the prevailing networking protocol for server-to-storage connectivity today. While FC can be deployed in point-to-point mode, as in direct attached storage (DAS), the majority of FC deployments support storage area networks (SANs) that enable resource sharing across a larger and more disparate set of server and storage assets than the direct-connect model typically allows.

FC deployments today run over a dedicated network using dedicated, single-use networking components: Host Bus Adapters (HBAs) that provide server-to-storage connectivity, specialized FC switches and directors, and special cabling. These require specialized training, experience, and tools to set up and manage. FC today offers speeds up to 8 Gigabits per second (Gbps), enabling it to deliver low-latency response at relatively high performance levels, which is sufficient for now to meet the needs of most of the I/O-intensive workloads found in today's data center.

Looking ahead, however, FC may have trouble matching Ethernet performance. While 16Gbps FC is coming, Ethernet already provides 10Gbps performance and will be moving to 40Gbps shortly.

Perhaps more significant are the economics of FC today. FC components are more expensive, and the lack of QoS capability forces over-provisioning of FC network resources to handle peak consumption periods, a wasteful and inefficient practice that afflicts traditional Ethernet as well.
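To put rough numbers on it, here is a back-of-envelope Python sketch; the traffic figures and server count are hypothetical and serve only to illustrate why sizing every link for its own peak leaves most of the purchased capacity idle.

    # Back-of-envelope illustration (all numbers are hypothetical): without QoS,
    # each link must be sized for its own worst-case burst, so most of the
    # provisioned capacity sits unused during normal operation.
    average_gbps = 2.0   # assumed steady-state storage traffic per server
    peak_gbps = 7.5      # assumed worst-case burst per server
    servers = 20

    provisioned = servers * peak_gbps        # capacity bought to cover peaks
    typical_use = servers * average_gbps     # capacity actually used most of the time
    print(f"Provisioned: {provisioned:.0f} Gbps, typically used: {typical_use:.0f} Gbps "
          f"({typical_use / provisioned:.0%} utilization)")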

By comparison, a converged network reduces cost by combining NIC and HBA functions into a single adapter, cutting the number of host adapters required, and by using one cabling infrastructure to carry both storage and IP traffic, eliminating redundant infrastructure.

Ethernet also supports storage, mainly through iSCSI, the server-to-storage protocol designed to transport SCSI block storage commands over Ethernet using TCP/IP. iSCSI was designed to take advantage of all the manageability, ease-of-use, high-availability, and guaranteed-delivery mechanisms provided by TCP/IP while providing a seamless path from 1Gbps Ethernet to 10Gbps Ethernet and beyond. Here at Advanced Systems Group, we’ve seen an uptick in 10Gbps iSCSI deployment over the past 18 months.
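To make the idea concrete, here is a conceptual Python sketch, not the actual iSCSI PDU layout, showing the core notion: a SCSI command descriptor block carried as ordinary bytes over a TCP connection, so any Ethernet/IP network can deliver it. The target address and the simple length-prefix framing are illustrative.

    # Conceptual sketch only -- not the real iSCSI PDU format. It shows the core
    # idea: a SCSI command (CDB) is carried as bytes inside a plain TCP stream.
    import socket
    import struct

    def send_scsi_read10(sock: socket.socket, lba: int, blocks: int) -> None:
        # Build a 10-byte SCSI READ(10) CDB: opcode 0x28, flags, 4-byte LBA,
        # group number, 2-byte transfer length, control byte.
        cdb = struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)
        # Prefix with a length field so the receiver can frame it (illustrative
        # framing, not the iSCSI Basic Header Segment).
        sock.sendall(struct.pack(">I", len(cdb)) + cdb)

    # Usage (target address is hypothetical; 3260 is the standard iSCSI port):
    # with socket.create_connection(("192.0.2.10", 3260)) as s:
    #     send_scsi_read10(s, lba=0, blocks=8)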

By using Ethernet as its underlying transport layer, iSCSI addresses two key issues for data center managers: 1) the cost and 2) the complexity of deploying FC networks. The first issue, cost, is addressed by the fact that iSCSI can run over existing Ethernet networks using software initiators, which consume some server cycles and memory but run atop existing Ethernet NICs or LOM (LAN on Motherboard) chips instead of requiring dedicated and relatively expensive Host Bus Adapters to provide connectivity.

The second issue, complexity, is addressed by the sheer ubiquity of Ethernet network management competency and toolsets today. Most organizations have no trouble finding sufficiently skilled people to handle their basic Ethernet needs.

iSCSI adoption is growing and has found considerable acceptance in small and midsize enterprises, as well as in cost-sensitive applications within larger enterprises. Data center managers with time-sensitive applications, such as OLTP or Monte Carlo portfolio analysis, have not jumped on the iSCSI bandwagon because of uncertainty about latency: iSCSI's dependency on the Ethernet layer can mean dropped packets and extended I/O latencies in lossy environments. However, as 10Gbps Ethernet makes converged networking more feasible, there is broad recognition of the need for finer-grained QoS mechanisms that can address I/O latency and lossiness.

Those hopes ride mainly on Data Center Bridging. Today, iSCSI leverages TCP/IP to mitigate the lossy uncertainty of Ethernet by providing better delivery mechanisms, supplemented by network simplification (appropriate provisioning, minimal router hops, VLANs, etc.). The introduction of Data Center Bridging will let iSCSI take full advantage of the lossless behavior and deterministic latency of Data Center Bridging-enabled Ethernet networks. In addition, the finer-grained QoS capabilities provided by Data Center Bridging will better position iSCSI, along with other Ethernet-based storage protocols, for the movement toward converged networking environments.
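To illustrate what those finer-grained controls look like, here is a minimal Python sketch of the DCB concepts just described: traffic is mapped to 802.1p priorities, Priority Flow Control (PFC) makes only the storage priority lossless, and Enhanced Transmission Selection (ETS) guarantees it a bandwidth share. The class names, priority numbers, and percentages are illustrative, not taken from any particular switch configuration.

    # Illustrative model of DCB traffic classes: PFC gives lossless behavior to
    # the storage priority only, and ETS guarantees each class a bandwidth share.
    from dataclasses import dataclass

    @dataclass
    class TrafficClass:
        name: str
        priority: int        # 802.1p priority, 0-7
        pfc_enabled: bool    # lossless via Priority Flow Control?
        ets_bandwidth: int   # guaranteed share in percent (Enhanced Transmission Selection)

    classes = [
        TrafficClass("lan",     priority=0, pfc_enabled=False, ets_bandwidth=50),
        TrafficClass("storage", priority=3, pfc_enabled=True,  ets_bandwidth=40),
        TrafficClass("mgmt",    priority=7, pfc_enabled=False, ets_bandwidth=10),
    ]

    assert sum(c.ets_bandwidth for c in classes) == 100  # ETS shares must cover the link
    for c in classes:
        print(f"{c.name}: priority {c.priority}, "
              f"{'lossless' if c.pfc_enabled else 'lossy'}, {c.ets_bandwidth}% guaranteed")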

Data Center Bridging enhancements to Ethernet networks also enable Fibre Channel over Ethernet (FCoE), a new protocol specification developed in the INCITS T11 committee. FCoE allows FC traffic to be carried natively over Data Center Bridging-enabled 10Gbps Ethernet networks. FCoE specifies the mechanism for encapsulating FC frames within Ethernet frames and is targeted at data center SANs. As a result, it preserves the capital investment and operational expertise of existing Fibre Channel SAN deployments while allowing them to coexist with a converged infrastructure. The FCoE specification was ratified in June 2009.
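To show what "encapsulating FC frames within Ethernet frames" amounts to, here is a deliberately simplified Python sketch. It uses the FCoE EtherType (0x8906) but abbreviates the real FC-BB-5 header fields, and the MAC addresses, delimiter values, and dummy FC frame are illustrative.

    # Simplified sketch of FCoE encapsulation: an unmodified FC frame rides as
    # the payload of an Ethernet frame carrying the FCoE EtherType. Field sizes
    # of the real FC-BB-5 FCoE header are abbreviated here.
    import struct

    FCOE_ETHERTYPE = 0x8906

    def encapsulate_fc_frame(fc_frame: bytes, dst_mac: bytes, src_mac: bytes) -> bytes:
        eth_header = dst_mac + src_mac + struct.pack(">H", FCOE_ETHERTYPE)
        sof, eof = b"\x2e", b"\x41"   # start/end-of-frame delimiters (illustrative values)
        return eth_header + sof + fc_frame + eof

    # Usage: wrap a dummy FC frame for transmission on a DCB-enabled 10Gbps link.
    dummy_fc_frame = bytes(36)        # placeholder for a real FC header + payload
    frame = encapsulate_fc_frame(dummy_fc_frame,
                                 dst_mac=bytes.fromhex("0efc00010203"),
                                 src_mac=bytes.fromhex("020000000001"))
    print(f"Ethernet frame length: {len(frame)} bytes")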

The intrinsic improvements that Data Center Bridging brings to Ethernet for storage deployments (lossless behavior, QoS, low latency) enable data center managers to finally plan for a single converged Ethernet-based network for all data center communications, including user-to-server, server-to-server, and server-to-storage traffic. By deploying a converged network, data center managers will gain the broad benefits previously described:

  • Cost reduction through the use of common components between endpoints (servers, storage, etc.) and switches
  • Ability to leverage a common management platform, personnel, knowledge and training across the data center
  • A platform with a future, offering a performance upgrade path that today looks to reach 100Gbps
  • Reduced costs resulting from less need for dedicated single-use components such as FC-only HBAs and switches

We’ll next discuss scaling virtual server environments and application mobility, so check back soon, and let us know what you think!

About Mark Teter
Before he retired from ASG in 2013, Mark Teter was Chief Technology Officer (CTO) and the author of 'Paradigm Shift: Seven Keys of Highly Successful Linux and Open Source Adoptions.' As CTO, Mark regularly advised IT organizations, vendors, and government agencies, and he frequently conducted seminars and training programs.

Filed Under: Networking
