As systems integrators, we’ve had the privilege of deploying UCS and converged infrastructure solutions in numerous environments. That means we’ve had the opportunity to amass a tidy collection of UCS deployment snafus, and we’ve learned how to avoid them.
In the spirit of sharing, here are our top seven technical tips for anyone deep in the trenches of a UCS installation.
- If you have DMZ virtual machines in your environment, consider whether the DMZ and normal VLANs can come down the same trunk links. If not (which means you probably have an inside production core and a DMZ core outside the firewall), you have to implement a ‘Disjoint Layer 2’ design. It’s not hard to set up, but it does have some rigid assumptions. For example, don’t overlap VLAN IDs between the two networks, and, due to pinning rules, you should use separate vNICs on the blades for each core.
- The KVM IPs assigned to each blade go out the MGMT port, not the 10 GbE ports. The MGMT port doesn’t support VLANs, so you need to use a normal access port. That means you’ll use three IPs for the FIs (one VIP, plus one per FI) and then one IP for each blade in the environment, all on the same network over that 1 GbE link.
- Don’t be tempted to use the 6200s as switches—they’re not switches. Since UCS 2.1, you’ve been able to hook up an FC SAN directly to the 6200 and create zones. However, that’s only supported for a completely captive SAN. (In other words, the only servers using that SAN are UCS blades.) The same applies to appliance ports and Ethernet. The 6200s are designed to run blade server environments, so if you need general-purpose data center switches, buy 5548s with storage FC licenses instead.
- Boot from local disk is the easiest to set up. Boot from iSCSI works fine, but you might find the GUI a bit cumbersome; booting from FC SAN has a much cleaner interface.
- With boot from local disks, the local disk policies can be problematic. Make a decision—like RAID 1 mirrors for everything—and stick with it. If you later change the local disk policy, you might find that your Service Profiles won’t apply to a blade because UCS doesn’t think the existing local disks are ‘correct’ for the different local disk policy on the new profile.
- You can’t get around the QoS system in UCS, so if you’re not using QoS Class of Service (CoS) tags, then all Ethernet traffic ends up in the “Best Effort” class and your jumbo frames won’t work. To get jumbo frames down to the blades, you have to change the MTU under LAN > QoS System Class in the “Best Effort” category. It’s set to ‘normal’ (1500 bytes) by default; change it to the system maximum of 9216.
- If you experience any weird errors during installation, ‘rediscover’ the device as a first attempt at troubleshooting. This is what Cisco TAC recommends as a first step anyway, so you might save yourself a call. You can even physically remove devices from the chassis and re-seat them before attempting a rediscovery.
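If you prefer the CLI to the GUI path given in the jumbo-frames tip, the equivalent change looks roughly like the following (a sketch from the UCS Manager CLI; verify the exact scope names against your UCS Manager version before running it):

```
UCS-A# scope eth-server
UCS-A /eth-server # scope qos
UCS-A /eth-server/qos # scope eth-best-effort
UCS-A /eth-server/qos/eth-best-effort # set mtu 9216
UCS-A /eth-server/qos/eth-best-effort # commit-buffer
```

Remember that the vNICs (and the upstream switches) also need jumbo MTUs, or frames will still be dropped along the path.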
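The no-overlapping-VLANs rule in the Disjoint Layer 2 tip above is easy to verify before you touch UCS Manager. Here’s a minimal sketch (the function name and sample VLAN IDs are hypothetical) that flags any VLAN IDs planned for both uplinks:

```python
# Disjoint Layer 2 requires that the VLAN IDs carried toward the production
# core and the DMZ core never overlap. Given the planned VLAN set for each
# uplink, report any collisions so they can be fixed before deployment.

def disjoint_l2_conflicts(production_vlans, dmz_vlans):
    """Return the VLAN IDs that appear on both uplinks (must be empty)."""
    return sorted(set(production_vlans) & set(dmz_vlans))

# Hypothetical plan: VLAN 20 appears on both cores, so it must be renumbered.
print(disjoint_l2_conflicts({10, 20, 30}, {20, 100}))  # [20]
```

An empty result means the two Layer 2 domains are cleanly disjoint and each vNIC can be pinned to its own core.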
If you haven’t yet embarked on a UCS deployment, and you’re still considering whether a converged infrastructure solution is the right strategy for your organization, consider our unified computing technology workshop. We’ll help you evaluate your current challenges and needs, and we’ll address your questions in the context of your current application infrastructure.