Making Ethernet Ready for Converged Enterprise Data Center Networks

Over the past several decades, Ethernet has emerged as the dominant data center networking protocol, alongside TCP/IP, the internetwork protocol suite over which traffic from lower-level networks like Ethernet flows. Although organizations add other networking protocols for special purposes (such as Fibre Channel for storage networks), Ethernet has become pervasive.

Flexible as it is, however, Ethernet doesn’t serve all purposes. Organizations must frequently use other networking protocols for certain tasks, which means they need to maintain and support multiple networking protocols. This is costly and inefficient.

Ethernet also presents challenges for organizations with multi-tier networks. In addition, Ethernet doesn't handle server virtualization, or the mobility of the resulting virtual machines, particularly well. In short, Ethernet is starting to show its age as enterprise data centers evolve into the highly virtualized environments of the future.

And yet the biggest issue with Ethernet is its lack of support for quality of service (QoS). QoS refers to the ability to create and manage networks to deliver different levels of service for different types of traffic. Ethernet can achieve some level of QoS, but native Ethernet gives all classes and/or types of traffic equal access to bandwidth.

Not all types of traffic are of equal importance to the organization, however, and Ethernet QoS doesn't go far enough in distinguishing between types or classes of traffic. A telephony call or a customer support agent's session might receive the same priority as a mission-critical application or a high-priority file transfer. Lacking this level of QoS, data center managers must either over-provision network bandwidth for peak loads, endure user complaints, or manage traffic prioritization at the source by limiting the amount of non-priority traffic entering the network (and nobody likes finding network access blocked).
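As a rough illustration of the kind of traffic-class prioritization described above, the Python sketch below implements a simple strict-priority scheduler. The class names and priority values are hypothetical examples for illustration only; they are not drawn from any Ethernet standard or vendor implementation.

```python
import heapq

# Hypothetical traffic classes; lower number = more important.
PRIORITIES = {"voip": 0, "oltp": 1, "file_transfer": 2, "bulk_backup": 3}

def schedule(frames):
    """Drain frames in strict priority order, FIFO within a class.

    `frames` is a list of (traffic_class, payload) tuples in arrival order.
    The arrival sequence number breaks ties so equal-priority frames
    keep their original order.
    """
    queue = []
    for seq, (traffic_class, payload) in enumerate(frames):
        heapq.heappush(queue, (PRIORITIES[traffic_class], seq, payload))
    return [heapq.heappop(queue)[2] for _ in range(len(queue))]
```

For example, `schedule([("bulk_backup", "b1"), ("voip", "v1"), ("file_transfer", "f1")])` returns `["v1", "f1", "b1"]`: the voice frame jumps the queue even though it arrived last, which is precisely the distinction native Ethernet's equal-access behavior fails to make.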

QoS also refers to the need for lossless data transmission, particularly in regard to storage networking. Networks lose data when they become oversaturated with traffic, which forces buffers to overflow and data packets to be dropped. Higher-layer protocols such as TCP compensate by resending the dropped packets, but that often only exacerbates the problem. Although these resends happen quickly (25 milliseconds), they contribute to the lack of consistent response times. This limits Ethernet's ability to service applications that are response-time sensitive, such as customer-facing online transaction processing (OLTP) systems, or applications that depend on isochronous or near-isochronous communications, such as video streaming over distance.
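The congestion spiral described above can be sketched with a toy Python model. The buffer size, drain rate, and the assumption that every dropped packet is retried exactly one tick later are arbitrary simplifications for illustration, not measurements of real Ethernet behavior.

```python
from collections import deque

def simulate(arrivals, buffer_size, drain_rate):
    """Toy congestion model: each tick, `arrivals[t]` new packets arrive,
    up to `drain_rate` buffered packets are forwarded, and anything that
    overflows the buffer is dropped. Dropped packets are resent on the
    next tick, adding to that tick's load. Returns total drops."""
    buffer = deque()
    resend, dropped = 0, 0
    for offered in arrivals:
        load = offered + resend  # new traffic plus retries of earlier drops
        resend = 0
        for _ in range(load):
            if len(buffer) < buffer_size:
                buffer.append(1)
            else:
                dropped += 1
                resend += 1  # the retry adds to next tick's load
        for _ in range(min(drain_rate, len(buffer))):
            buffer.popleft()
    return dropped
```

With a buffer of 4 packets and a drain rate of 2 per tick, a sustainable load of 2 packets per tick loses nothing (`simulate([2, 2, 2], 4, 2)` returns 0), while an offered load of 5 per tick drops more data each tick as retries pile on top of new traffic (`simulate([5, 5, 5], 4, 2)` returns 12): the resends themselves deepen the congestion, which is the inconsistency the paragraph describes.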

Ethernet also needs more and better management tools to give administrators a way to see what's happening on the network and make adjustments to improve QoS. Today's Ethernet tools tend to be the basic utilities bundled with the networking devices themselves.

About the Author

Dustin Smith

Dustin Smith, Chief Technologist

Throughout his twenty-five-year career, Dustin Smith has specialized in designing enterprise architectural solutions. As the Chief Technologist at ASG, Dustin uses his advanced understanding of cloud compute models to help customers develop and align their cloud strategies with their business objectives. His master-level engineering knowledge spans storage, systems, and networking.