Data Center Bridging – Overcoming Ethernet Limitations

Posted by Dustin Smith, Chief Technologist
January 27, 2015

In a recent blog, we discussed the challenges with Ethernet as a data center networking protocol. Although Ethernet is the dominant networking technology in the data center today, there are limitations with the current Ethernet standard. Overcoming these limitations is the key to enabling Ethernet as the foundation for efficient converged data center networks and for delivering robust QoS. 

Ethernet relies on upper-layer protocols such as TCP to manage end-to-end data delivery and integrity. When the amount of data entering the network exceeds its capacity, the network becomes oversubscribed and switches can drop packets, resulting in lost data that must be recovered by those upper layers.

Fibre Channel (FC), by comparison, uses buffer-to-buffer credits to guarantee that frames are never dropped due to congestion in the network, making the fabric lossless. Standard Ethernet, as an IEEE 802-based network, offers no such link-level guarantee; it has traditionally depended on higher-level protocols such as TCP/IP, which compensate for loss with end-to-end congestion avoidance, flow control, and retransmission.
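
The credit mechanism is simple to picture: the sender is granted one credit per receive buffer, spends a credit for each frame it transmits, and waits when it runs out until the receiver hands a credit back. The toy model below captures that idea; it is a conceptual sketch only, not the actual FC primitive signalling.

```python
class CreditLink:
    """Toy buffer-to-buffer credit scheme in the spirit of Fibre Channel:
    the sender starts with as many credits as the receiver has buffers,
    spends one per frame, and regains it only when the receiver signals
    that a buffer has been freed (R_RDY). With no credits, the sender
    waits, so the receiver never has to drop a frame.
    """
    def __init__(self, receiver_buffers: int):
        self.credits = receiver_buffers

    def can_send(self) -> bool:
        return self.credits > 0

    def send_frame(self) -> None:
        if not self.can_send():
            raise RuntimeError("no credit: sender must wait, not transmit")
        self.credits -= 1          # one receive buffer is now committed

    def receive_ready(self) -> None:
        self.credits += 1          # receiver freed a buffer (R_RDY)

link = CreditLink(receiver_buffers=4)
while link.can_send():
    link.send_frame()              # after four frames the sender stalls instead of overrunning the receiver
```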

To deliver traffic differentiation and QoS with Ethernet, the existing IEEE 802.1p/Q standards classify traffic flows with a 3-bit priority field in the VLAN tag. Network equipment (bridges and routers) uses this classification to place different classes of traffic into different queues, and the standard specifies strict priority scheduling of those queues. This allows higher-priority traffic to be serviced before lower-priority queues, achieving lower latency and drop probability for priority traffic. However, it also creates fairness problems: higher-priority queues can consume all available bandwidth and effectively starve the lower-priority queues. Although the standard permits other scheduling algorithms, their behavior isn't specified, so implementations differ and interoperability suffers.
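
To make the tagging concrete, the 3-bit priority travels in the Priority Code Point (PCP) field of the 802.1Q tag, alongside a 1-bit drop-eligible indicator and a 12-bit VLAN ID. The sketch below (Python, with illustrative values) shows how the four-byte tag is assembled; it is a minimal illustration, not production frame-building code.

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that identifies an 802.1Q VLAN tag

def build_vlan_tag(pcp: int, vlan_id: int, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: TPID followed by the Tag Control Information.

    TCI layout: PCP (3 bits) | DEI (1 bit) | VLAN ID (12 bits).
    The 3-bit PCP carries the 802.1p priority (0-7) that bridges use
    to place the frame into one of up to eight traffic-class queues.
    """
    if not 0 <= pcp <= 7:
        raise ValueError("PCP is a 3-bit field: 0-7")
    if not 0 <= vlan_id <= 0x0FFF:
        raise ValueError("VLAN ID is a 12-bit field: 0-4095")
    tci = (pcp << 13) | ((dei & 0x1) << 12) | vlan_id
    return struct.pack("!HH", TPID_8021Q, tci)

# Example: traffic marked with priority 3 on VLAN 100
tag = build_vlan_tag(pcp=3, vlan_id=100)
print(tag.hex())  # '81006064'
```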

Enter Data Center Bridging, an architectural extension designed to improve and expand the role of Ethernet in the data center. Data center bridging allows organizations to logically manage networks end-to-end with QoS through four capabilities:

  1. Congestion Notification (CN)—provides end-to-end congestion management
  2. Priority-based Flow Control (PFC)—provides a link-level, per-priority flow-control mechanism (sketched just after this list)
  3. Enhanced Transmission Selection (ETS)—provides a common management framework for assigning bandwidth to traffic classes
  4. Discovery and capability exchange protocol (DCBX)—conveys the capabilities and configuration of the above features to ensure a consistent configuration across the network
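
Of these, PFC (IEEE 802.1Qbb) is the piece that makes an individual priority lossless: it reuses the MAC Control PAUSE machinery (EtherType 0x8808) with a per-priority enable vector, so a congested receiver can pause just the storage priority while LAN traffic keeps flowing. Below is a minimal sketch of how the PFC control parameters are laid out; the helper name and example values are illustrative.

```python
import struct

PFC_OPCODE = 0x0101           # MAC Control opcode for Priority-based Flow Control
MAC_CONTROL_ETHERTYPE = 0x8808

def build_pfc_payload(pause_quanta: dict[int, int]) -> bytes:
    """Build the PFC opcode and parameters (IEEE 802.1Qbb).

    pause_quanta maps priority (0-7) -> pause time in quanta
    (one quantum = 512 bit times). A bit set in the priority-enable
    vector tells the peer to stop sending that priority only; other
    priorities keep flowing, which is what makes the link lossless
    per priority rather than paused outright.
    """
    enable_vector = 0
    times = [0] * 8
    for prio, quanta in pause_quanta.items():
        enable_vector |= 1 << prio
        times[prio] = quanta
    return struct.pack("!HH8H", PFC_OPCODE, enable_vector, *times)

# Pause priority 3 (e.g., a storage class) for the maximum time, leave the others alone
payload = build_pfc_payload({3: 0xFFFF})
```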

Data Center Bridging networks can be characterized by a limited bandwidth-delay product and a limited hop count. When multiple traffic types are transmitted over a single link, each type must be assured the bandwidth that has been allocated to it, while at the same time being restrained, when necessary, from exceeding its allocation.

When multiple traffic types—LAN, SAN, IPC—are consolidated onto a single converged link, there is no inherent prioritization among them; however, each traffic type needs to keep its existing usage model, in which a single interface supports multiple traffic classes. Each type also needs to retain its bandwidth allocation for a given virtual interface (VI), independent of traffic on other VIs. Data Center Bridging physical links provide multiple virtual interfaces for different traffic types.
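
This per-class bandwidth guarantee is what ETS provides: each traffic class is assigned a share of the link, may borrow bandwidth that other classes leave idle, and is pushed back toward its own share when those classes need it again. The toy allocation below illustrates that behavior; the class names, shares, and demands are assumptions for the example, not values from the standard.

```python
def allocate_bandwidth(link_gbps: float,
                       shares: dict[str, float],
                       demand: dict[str, float]) -> dict[str, float]:
    """Toy ETS-style allocation: each class is guaranteed its configured
    share of the link, and bandwidth left idle by one class is
    redistributed to classes that still have unmet demand.
    """
    # Guaranteed minimum: the configured share, capped at actual demand
    granted = {c: min(demand[c], link_gbps * shares[c]) for c in shares}
    leftover = link_gbps - sum(granted.values())
    while leftover > 1e-9:
        hungry = [c for c in shares if demand[c] - granted[c] > 1e-9]
        if not hungry:
            break
        weight = sum(shares[c] for c in hungry)
        new_leftover = 0.0
        for c in hungry:
            offer = leftover * shares[c] / weight
            take = min(offer, demand[c] - granted[c])
            granted[c] += take
            new_leftover += offer - take
        leftover = new_leftover
    return granted

# Illustrative classes and shares (assumed): LAN 50%, SAN 40%, IPC 10% of a 10 Gb/s link
shares = {"LAN": 0.5, "SAN": 0.4, "IPC": 0.1}
demand = {"LAN": 8.0, "SAN": 2.0, "IPC": 1.0}   # offered load in Gb/s
print(allocate_bandwidth(10.0, shares, demand))
# SAN and IPC get what they ask for; LAN absorbs the slack they leave behind
```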

The new features and capabilities of Data Center Bridging will need to operate within and across multiple network domains with varying configurations. Achieving interoperability across these environments requires that link partners exchange information about their capabilities and configuration, and then select and accept feature configurations with one another. This is accomplished through the four capabilities (CN, PFC, ETS, and DCBX) noted above.
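
In practice that exchange is the DCBX protocol, carried in LLDP TLVs. Conceptually, each port advertises its configuration plus a "willing" flag, and a willing port adopts the configuration of an unwilling peer; otherwise it keeps its own. The sketch below models only that resolution rule and omits the per-feature TLV details; the data model is an assumption for illustration.

```python
from dataclasses import dataclass

@dataclass
class DcbConfig:
    """A pared-down view of what a port advertises for its DCB features."""
    willing: bool                 # willing to adopt the peer's configuration
    ets_shares: dict[int, int]    # traffic class -> bandwidth percentage
    pfc_enabled: set[int]         # priorities given lossless (PFC) behaviour

def resolve(local: DcbConfig, peer: DcbConfig) -> DcbConfig:
    """Simplified DCBX resolution: a willing port facing an unwilling peer
    runs the peer's configuration; otherwise it keeps its own.
    (Real DCBX exchanges per-feature TLVs over LLDP and has extra
    tie-breaking rules when both ends are willing.)
    """
    if local.willing and not peer.willing:
        return DcbConfig(local.willing, dict(peer.ets_shares), set(peer.pfc_enabled))
    return local

# A server NIC (willing) facing a switch (unwilling) adopts the switch's settings
switch = DcbConfig(willing=False, ets_shares={0: 60, 3: 40}, pfc_enabled={3})
nic = DcbConfig(willing=True, ets_shares={0: 100}, pfc_enabled=set())
print(resolve(nic, switch).pfc_enabled)   # {3}
```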

These capabilities, in conjunction with other Data Center Bridging technologies, enable support for higher layer protocols that are loss-sensitive while not affecting the operation of traditional LAN protocols utilizing other priorities. Converged enterprise data center networks, however, must do more than provide QoS for mixed traffic. Data center networks must address the needs of networked storage and the scaling of virtual server networks. 

About Dustin Smith

Throughout his twenty-year career, Dustin Smith has specialized in designing enterprise architectural solutions. As the Chief Technologist, Dustin is responsible for the strategic direction of aligning the company’s growing consulting services with the client challenges he finds in the field, and he works closely with his regional architects to design new programs to address these issues.

Filed Under: Networking
