So you finally upgraded your FDDI ring to a Gigabit Ethernet backbone and added 1000Base-T network cards to your servers. You’ll never have to worry about LAN bandwidth again, right?
While adding inexpensive bandwidth has become the de facto way to solve network performance problems, observers say network administrators would be better served by keeping a close eye on their core LAN traffic utilization.
“Some people just don’t care about LAN bandwidth and feel they don’t have to clean up those connections,” says Chris Lukas, CTO of emerging technologies at Hold Brothers On-Line Investment Services Inc., a New York stock brokerage. With Gigabit Ethernet server network interface cards costing as little as US$100, and Gigabit Ethernet ports priced in the US$200 to US$800 range – down 40 per cent from a year ago, according to multiple market watchers – throwing more bandwidth at a network bottleneck is easier than ever.
But that really isn’t the answer, Lukas says. Identifying potential trouble spots and keeping tabs on backbone utilization are important for running an efficient LAN, experts and users say. Also, identifying inefficient traffic patterns over a backbone can help reduce surprise surges in bandwidth utilization from internal and external traffic.
Most network backbones consist of a group of core switches tied together with single or multiple Gigabit Ethernet connections, says Ali Tehrani, senior network engineer at Network Visions, an integration firm that installs networks for large companies.
Corporations with this kind of meshed switch core need to keep a close eye on what kind of traffic travels over their interswitch trunks, Tehrani says.
“When connecting switches back to a core switch, the backbone speed will be the slowest link between the switches, whether that’s one, four or eight gigabits” when trunking is used, he says.
“Typically, you want to stay under 14 per cent” utilization on interswitch Gigabit links, he adds. “You can go as high as 25 per cent, [but] if you have too much going on that link, even with Gigabit, there’s always the potential for collisions. Obviously with less utilization, the better it is.”
Tehrani says his firm usually works with customers who complain that their networks are slow but have never bothered to track the utilization of their switches, which he says can be done with simple network management tools usually included with the equipment.
He adds that it is not uncommon for him to go on-site and find a network backbone running between 80 per cent and 90 per cent utilization because of poor planning, slower links used to trunk switches or older gear running newer applications.
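Putting a number to that advice takes little more than the counters the switches already keep. The sketch below is illustrative only: the counter readings are hypothetical, and the utilization_pct helper is not from any vendor tool. It assumes the bundled management tools can hand you the standard MIB-II octet counters for a trunk port (ifInOctets/ifOutOctets, or the 64-bit ifHCInOctets/ifHCOutOctets on high-speed links), and it simply works out what fraction of a one-gigabit link those octets represent, measured against the 14 and 25 per cent marks Tehrani cites.

```python
# Illustrative sketch: estimating the utilization of one Gigabit interswitch
# link from two readings of its octet counter. The counter values below are
# hypothetical; in practice they would come from the switch's own management
# tools via the standard MIB-II counters.

LINK_SPEED_BPS = 1_000_000_000   # one Gigabit Ethernet trunk member


def utilization_pct(octets_start: int, octets_end: int, interval_s: float,
                    link_speed_bps: int = LINK_SPEED_BPS) -> float:
    """Per cent utilization of one direction of the link over the interval."""
    bits_moved = (octets_end - octets_start) * 8
    return bits_moved / (interval_s * link_speed_bps) * 100


# Two hypothetical samples of ifHCOutOctets taken 60 seconds apart.
pct = utilization_pct(octets_start=1_250_000_000,
                      octets_end=2_600_000_000,
                      interval_s=60)

if pct > 25:
    print(f"{pct:.1f}% -- past the 25 per cent ceiling")
elif pct > 14:
    print(f"{pct:.1f}% -- above the 14 per cent comfort zone")
else:
    print(f"{pct:.1f}% -- within the comfort zone")
```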
When adding virtual LANs (VLAN) to a meshed backbone of switches, the interswitch traffic can be even trickier, according to Hold Brothers’ Lukas.
The backbone at Hold Brothers consists of four Cisco Catalyst 6509s linked together with multiple Gigabit connections using Cisco’s Gigabit EtherChannel trunking technology.
“The problem is that the switches were never really in use that much,” Lukas says, but the traffic between backbone switches was heavy, which made minimizing interswitch traffic a priority.
“I have a 32-gigabit backplane inside each switch, and I have one to four gigabits for interswitch links,” Lukas says. “I want to keep traffic as much as I can on that high-speed backplane.”
Lukas’ firm uses VLANs extensively to group users together and keep different types of traffic from colliding. However, the VLANs caused complications when Lukas configured several of them to communicate, or “trunk,” with each other. Communications between trunked VLANs caused his switch-to-switch utilization to surge: hundreds of users spread across the four switches, but belonging to the same two trunked VLANs, had to go from switch to switch to access servers and send files to each other.
To monitor the problem, Lukas used network management tools from NetScout Systems, which can scrutinize individual port activity and track which VLAN traffic crosses the backbone most often. By moving some trunked VLANs onto the same switch, Lukas cut his interswitch traffic in half, which improved network response time.
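The kind of analysis Lukas describes can be sketched in a few lines. The example below is purely illustrative, with made-up switch names, VLAN IDs and byte counts: given per-VLAN traffic observed on the interswitch trunks over a monitoring window, it ranks which VLANs account for the most backbone traffic, and the VLANs at the top of the list are the candidates for consolidation onto a single switch so their traffic stays on that switch’s backplane.

```python
# Purely illustrative: switch names, VLAN IDs and byte counts are made up.
# Rank which VLANs account for most of the traffic crossing the interswitch
# trunks during one monitoring window.

from collections import defaultdict

# (vlan_id, trunk_link, bytes_seen) samples from one monitoring window.
samples = [
    (110, "cat6509-1<->cat6509-2", 41_000_000_000),
    (110, "cat6509-2<->cat6509-3", 18_000_000_000),
    (120, "cat6509-1<->cat6509-4",  7_500_000_000),
    (130, "cat6509-3<->cat6509-4",  2_100_000_000),
]

per_vlan = defaultdict(int)
for vlan_id, _link, nbytes in samples:
    per_vlan[vlan_id] += nbytes

total = sum(per_vlan.values())
for vlan_id, nbytes in sorted(per_vlan.items(), key=lambda kv: kv[1],
                              reverse=True):
    print(f"VLAN {vlan_id}: {nbytes / 1e9:5.1f} GB over the backbone "
          f"({nbytes / total:.0%} of interswitch traffic)")
```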
“If you just let this stuff run on its own, you may discover one day that your interswitch links are 90 per cent used up at times,” Lukas says. While fine-tuning LAN bandwidth won’t help you save money immediately, he adds, “it’s more about stopping surprises than saving money. Surprises, by definition, cost you money.”
Other users are on the lookout for backbone utilization surprises coming from outside their networks rather than from within.
At Incyte Genomics in Palo Alto, the network backbone is considered to be the hundreds of Gigabit Ethernet links between the research firm’s large clusters of servers, rather than the switch-to-switch links.
Incyte also runs an extranet that lets customers such as universities access the firm’s server clusters for conducting research. This is where Phil Kwan, associate director of network infrastructure, is most concerned about overutilization of his backbone.
When this extranet first went online, Kwan says, huge amounts of traffic came in from institutions accessing Incyte’s backbone, throwing the old 80-to-20 ratio of internal-to-external traffic out the window.
“What we don’t want to happen is to have [extranet customers] all hitting us at the same time with their DS-3 connections,” which many university customers have, Kwan says.
To control that traffic, Kwan plans to upgrade his Foundry Networks BigIron switches in the network core with new modules that support rate limiting, which lets him dictate how much bandwidth specific ports or users can receive.
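Rate limiting itself is conceptually simple; a common way to think about it is a token bucket, in which each port earns transmit credit at its allotted rate and anything beyond that credit is dropped or queued. The sketch below is only an illustration of that idea, with a hypothetical 15Mbps cap on one customer port; it says nothing about how Foundry’s modules actually implement the feature.

```python
# A token-bucket sketch showing what port-level rate limiting does in
# principle: each port earns transmit credit at its allotted rate, and
# traffic beyond that credit is dropped (or could be queued). Illustration
# only, not how BigIron modules implement the feature.

import time


class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8        # refill rate, in bytes per second
        self.capacity = burst_bytes     # largest burst the port may send
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        """True if the packet fits within the port's bandwidth budget."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False                    # over budget: drop or queue it


# Hypothetical policy: cap one extranet customer's port at 15Mbit/s,
# roughly a third of the DS-3 the customer connects with.
customer_port = TokenBucket(rate_bps=15_000_000, burst_bytes=64_000)
dropped = sum(0 if customer_port.allow(1_500) else 1 for _ in range(10_000))
print(f"{dropped} of 10,000 full-size frames exceeded the allotted rate")
```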
By contrast, little is done to engineer traffic on the internal backbone. “We don’t use a lot of [quality of service] or [Differentiated Services],” Kwan says. “We wouldn’t gain a lot from using that” with internal traffic.