Scaling Virtual Server Technology Environments and Application Mobility
Like storage, networks present challenges and limitations when organizations scale their virtual server technology environments. These include the shortcomings of Spanning Tree Protocol (STP), the growing number of GbE connections per server, low utilization, and link failure recovery. Another challenge arises from certain virtualization capabilities, such as Virtual Machine (VM) mobility, which allows VMs to be migrated only within a single Layer 2 network. This is particularly important because non-disruptive migration of VMs across Virtual LANs (VLANs) using Layer 3 protocols is not yet supported by virtualization hypervisors.
In traditional Layer 2 Ethernet networks, organizations create highly available networks by designating paths as active or standby using STP. While this provides an alternate path, only one path can be used at a time, which means that network bandwidth is not well utilized. Since one of the goals of server virtualization is to increase utilization of the physical server, increased utilization of network bandwidth should also be expected.
To increase network utilization, Multiple Spanning Tree Protocol (MSTP) and similar protocols allow for separate spanning trees per VLAN. While this improves bandwidth utilization, the STP limit of one active path between switches remains. And, because traffic paths are manually configured with MSTP, complexity increases.
Another challenge with STP is network behavior when links fail. Links do fail, and when one does, the spanning tree must be recalculated. This can take anywhere from five seconds with Rapid Spanning Tree Protocol (RSTP) to several minutes with STP, and convergence time can vary unpredictably even with small topology changes. Furthermore, the demand for non-stop traffic flow increases with server virtualization technology, and, consequently, network convergence times must shrink accordingly. STP does not provide an adequate solution for these issues. Finally, while a redefined spanning tree is reconverging, broadcast storms can occur and slow the network. These limitations of STP are why Layer 2 networks in the data center are typically kept small.
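The two STP problems described above, idle standby links and disruptive recomputation after a failure, can be illustrated with a toy model. This is a minimal sketch, not real STP (it ignores bridge priorities, port costs, and BPDU timers): it simply builds a loop-free tree over a redundant switch topology, showing that every redundant link is blocked, and that a link failure forces the tree to be rebuilt from the surviving links.

```python
# Toy spanning-tree model (NOT the actual STP algorithm): a Kruskal-style
# pass that accepts a link only if it connects two separate components.
# Every link not in the tree is blocked, i.e., pure standby capacity.

def spanning_tree(switches, links):
    """Return the active (forwarding) links of one loop-free tree."""
    parent = {s: s for s in switches}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    active = []
    for a, b in links:
        ra, rb = find(a), find(b)
        if ra != rb:              # link joins two components: forward on it
            parent[ra] = rb
            active.append((a, b))
    return active                  # all remaining links are blocked

# Full mesh of 4 switches: 6 links, but a tree can use only 3 of them.
switches = ["sw1", "sw2", "sw3", "sw4"]
links = [("sw1", "sw2"), ("sw1", "sw3"), ("sw1", "sw4"),
         ("sw2", "sw3"), ("sw2", "sw4"), ("sw3", "sw4")]

active = spanning_tree(switches, links)
blocked = [l for l in links if l not in active]
print(len(active), len(blocked))   # half the links sit idle

# A failure on an active link invalidates the tree; with STP the
# recomputation can stall traffic for seconds to minutes.
failed = active[0]
surviving = [l for l in links if l != failed]
print(len(spanning_tree(switches, surviving)))
```

In this toy topology, 3 of the 6 links carry traffic and 3 are blocked, which is the 50 percent (or worse) utilization penalty the text describes; MSTP improves on this only by running separate trees per VLAN, still one active path per tree.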
What’s needed are Layer 2 networks that:
- Are highly available
- Guarantee high-bandwidth utilization over equal-cost paths
- Don’t stall traffic when links are added or removed due to failure or network reconfiguration
- Deliver deterministic latency and lossless transport
- Can transport IP and mission-critical storage traffic over the same wire
This becomes even more important when an application is running in a VM rather than on a physical server. Since the VM is not tied to a specific physical server, it can move between physical servers when the application demands change, when servers need to be maintained, and when a quick recovery from a site disaster is necessary.
For migration to be non-disruptive to client traffic, VM mobility must occur within a cluster of physical servers that share the same IP subnet or Ethernet VLAN. Otherwise, the VM's IP address must change, which disrupts client connections. As noted in the discussion of STP limitations, the sphere of VM migration can also be constrained by the size of the Layer 2 network. The solution for flexible VM mobility is a more scalable and available Layer 2 network with higher network bandwidth utilization.
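The subnet constraint above can be made concrete with a short sketch. The function below is hypothetical (hypervisors do not expose this exact call); it uses Python's standard `ipaddress` module to test whether a destination host sits in the same IP subnet as the VM, which is the condition for the VM to keep its address and its client connections after a live migration.

```python
import ipaddress

def same_subnet(vm_ip, dest_host_ip, prefix=24):
    """Hypothetical pre-check: live migration keeps the VM's IP address,
    so the destination host must sit in the VM's own subnet/VLAN."""
    net = ipaddress.ip_network(f"{vm_ip}/{prefix}", strict=False)
    return ipaddress.ip_address(dest_host_ip) in net

print(same_subnet("10.1.20.15", "10.1.20.200"))  # True  -> can move live
print(same_subnet("10.1.20.15", "10.2.30.7"))    # False -> VM must re-IP
```

A migration to the second host would force a new IP address on the VM, breaking existing client sessions, which is exactly why the sphere of non-disruptive mobility ends at the subnet boundary.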
For a VM to migrate from one server to another, many server attributes must be the same on the origination and destination servers. This extends into the network as well, requiring VLAN, Access Control List (ACL), Quality of Service (QoS), and security profiles to be the same on both the source and destination access switch ports. If switch port configurations differ, either the migration pre-flight will fail or network access for the VM will break. Organizations could map all settings to all network ports, but that would violate most networking technology and security best practices. The distributed virtual switch in VMware vSphere 4 addresses some of these issues, but at the cost of consuming physical server resources for switching, added complexity in administering network policies at multiple switch tiers, and a lack of consistent security enforcement for VM-to-VM traffic.
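The port-configuration matching described above can be sketched as a simple pre-flight check. The attribute names (`vlan`, `acl`, `qos`, `security`) and the dictionary representation are illustrative assumptions, not any vendor's real API; the point is that a migration should be blocked, rather than silently breaking the VM's network access, when the destination access port's profile differs from the source's.

```python
# Hypothetical migration pre-flight check: the VM's network profile must
# match on the source and destination access switch ports, or the
# migration should abort. Attribute names here are illustrative only.

REQUIRED = ("vlan", "acl", "qos", "security")

def preflight(src_port, dst_port):
    """Return the list of profile attributes that differ between ports."""
    return [k for k in REQUIRED if src_port.get(k) != dst_port.get(k)]

src = {"vlan": 100, "acl": "web-tier", "qos": "gold", "security": "pci"}
dst = {"vlan": 100, "acl": "web-tier", "qos": "silver", "security": "pci"}

mismatches = preflight(src, dst)
if mismatches:
    print("migration blocked, mismatched:", mismatches)
```

Here the QoS profiles differ, so the check fails; mapping every setting to every port would make the check trivially pass, but, as the text notes, at the cost of violating security and network design best practices.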
Automation provides only part of the answer. With automated VM migration, network administrators have limited visibility into the location of applications. This makes troubleshooting a challenge, and pinpointing issues to a specific VM becomes like finding the proverbial needle in a haystack. Now, consider again a Layer 2 network that:
- Places no physical barriers in the way of VM migration
- Is aware of VM locations and consistently applies network policies
- Does not require manual intervention when a VM moves
- Removes the overhead of switching traffic from the hypervisor for maximum efficiency and functionality and supports heterogeneous server virtualization in the same network
The new Ethernet fabric brought about by Ethernet Data Center Bridging allows organizations to broaden the sphere of application mobility, provide VM awareness, and optimize server resources for applications just as it improves networking for storage.