FRAMINGHAM, Mass. – To evaluate Cisco Systems Inc.’s FabricPath technology, Network World U.S. performed tests in five areas: with 16 redundant paths; with 16 links per redundant path; with load-balancing of multicast traffic across redundant links; with removal or addition of one device in the switching fabric to determine convergence time; and with management by Cisco’s Data Center Network Manager (DCNM) platform.
The test bed comprised six Nexus 7010 switches equipped with Cisco’s F1 32-port 1/10-gigabit Ethernet line cards. We designated two switches as “edge” devices, each with 128 10G Ethernet ports. The remaining four switches, each with 32 ports, functioned as “spine” devices. (This is a flat network design with all devices forming a single layer-2 domain; we used the terms “edge” and “spine” only to designate which switches had hosts attached.)
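For reference, turning on FabricPath in NX-OS takes only a handful of commands per switch. The following is a minimal sketch of the sort of configuration involved; the switch ID, VLAN and interface numbers are ours for illustration, and exact syntax can vary by NX-OS release:

    install feature-set fabricpath
    feature-set fabricpath
    fabricpath switch-id 11          ! optional; otherwise auto-assigned
    vlan 100
      mode fabricpath                ! carry this VLAN across the fabric
    interface ethernet 1/1
      switchport mode fabricpath     ! make this a FabricPath core port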
We used Spirent TestCenter to offer traffic between the two edge switches using a partially meshed topology. Unless stated otherwise, we used 1,518-byte frames, bidirectional unicast traffic, an offered load of 80% of line rate, and a test duration of 300 seconds.
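For context, those parameters translate into a substantial per-port frame rate. Counting the 20 bytes of preamble and inter-frame gap that accompany every Ethernet frame on the wire:

    line rate    = 10,000,000,000 bits/sec ÷ ((1,518 + 20) bytes × 8 bits/byte) ≈ 812,744 frames/sec
    offered load = 0.80 × 812,744 ≈ 650,195 frames/sec per 10G port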
In the 16-path tests, we configured the switches so that each edge switch connected to each of four spine switches using four EtherChannels (the Cisco term for link aggregation) of four links apiece. Thus, each edge switch had 16 paths to the spine switches in this case. In the four-path tests, we configured the switches so that each edge switch had one EtherChannel to each spine switch, with each EtherChannel comprising 16 links. We used this latter setup for all tests except the 16-path test case.
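On the switch side, a four-link EtherChannel of this kind looks roughly like the sketch below. The interface range and port-channel number are invented for illustration; we show LACP, though a static channel-group would also work:

    feature lacp
    interface ethernet 1/1-4
      channel-group 10 mode active   ! bundle four links via LACP
    interface port-channel 10
      switchport mode fabricpath     ! run FabricPath over the bundle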
To verify 16- and four-path connectivity, we configured Spirent TestCenter to emulate 100 hosts per port on the edge switches, offering unicast traffic to all ports on the opposite edge switch. Thus, with 64 ports on each of two edge switches and 100 hosts per port, the test traffic emulated 12,800 hosts.
To demonstrate fairness in the hashing algorithm used in FabricPath, we ran this test twice, using two completely different sets of 12,800 pseudorandom MAC addresses.
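One way to spot-check hashing fairness from the switch side, reusing the illustrative port-channel number from the sketch above, is NX-OS's per-member traffic display, which reports the percentage of frames received and transmitted on each member link:

    show port-channel load-balance                       ! displays the configured hash inputs
    show port-channel traffic interface port-channel 10  ! per-member rx/tx distribution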
We verified FabricPath’s ability to load-share multicast traffic with multicast alone and also with a combination of multicast and unicast traffic. In the multicast-only test, a receiver on each edge port emulated by Spirent TestCenter joined 50 groups, each with 100 transmitters. With 128 edge ports, this made for the layer-2 equivalent of 640,000 multicast routes. To determine whether load-balancing was uniform, we cleared the interface counters on all switches before each test and then examined the EtherChannel frame counters after the test.
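That procedure maps onto two NX-OS commands, again using the illustrative port-channel numbering from earlier:

    clear counters interface all               ! zero all interface counters before a run
    show interface port-channel 10 counters    ! examine per-channel frame counts afterward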
Then we repeated the test with a mixed load, offering unicast traffic at 60 per cent of line rate and multicast at 20 per cent. Again, we examined interface counters on inter-switch links after each test run.
To determine convergence time for the loss of a switch, we killed power to one spine switch while rerunning the 16-path unicast test described above, and derived convergence time from the frame loss that resulted. We then repeated the test, this time restoring power to the same spine switch, and again derived convergence time from any frame loss.
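Converting frame loss to convergence time is simple division: frames lost over the rate at which frames had been flowing through the failed element. The figures below are purely hypothetical, to show the arithmetic:

    convergence time = frames lost ÷ frame rate
    example: 2,500,000 frames lost ÷ 10,000,000 frames/sec = 0.25 sec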
The final set of tests assessed DCNM, Cisco’s new management platform, on four common network management tasks involving FabricPath. First, we configured DCNM to discover all switches and populate its database. Second, we configured DCNM to send text messages and e-mail when traffic on a FabricPath link exceeded a given percentage of capacity. Third, we configured DCNM to display an alarm upon link failure (triggered by physically removing a cable between edge and spine switches). Fourth, we configured DCNM to apply weighted random early detection (WRED) queuing to all switch configurations, and then to remove the WRED section from all switches’ configurations.
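For the WRED task, the configuration DCNM pushed out and later removed resembles the NX-OS queuing policy sketched below. This is illustrative only: the system-defined queuing class name and the thresholds shown here vary by line card and NX-OS release, so treat them as placeholders rather than a tested configuration.

    policy-map type queuing test-wred
      class type queuing 1p3q1t-out-q-default
        random-detect minimum-threshold 20 percent maximum-threshold 80 percent
    interface ethernet 1/1
      service-policy type queuing output test-wred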