Whoever came up with the term “Web surfing” had it all wrong. It should be “driving.” Then at least the analogy would be accurate. One minute you are speeding along at 120Kbps and the next you are bumper to bumper going 12Kbps, and all the while without the foggiest idea why you slowed down.
If, as most analysts predict, hundreds of millions of people start to use the Internet in the years to come, the thoroughfares and side roads will get more congested unless measures are put in place to deal with the dramatic increase in traffic. Tempers will flare and many millions will choose to avoid the situation altogether. Companies that plan to use the Internet as a business focal point will find the store empty. In a nutshell, e-commerce will be dead before it gets a chance to bloom.
For the hundreds of thousands of people who are responsible for building the Internet, the work is clearly cut out. Make it bigger. Make it faster. Not only will the number of people and companies accessing the Web increase, but their downloads will no longer be limited to bandwidth-friendly e-mail and static Web pages. Even now there is huge demand for pipeline hogs like iCraveTV, streaming video and MP3 music files.
Is the infrastructure of the Internet ready to handle this incredible growth in demand and, if not, where are the bottlenecks and what can be done to alleviate them?
Robert Quance, vice-president of market development for UUNET Canada, put it rather succinctly.
“The applications are getting fatter and the number of users is growing,” he said. “Whatever you build will, sooner or later, not be enough. It is certainly posing some big challenges, because none of this has been done before.”
One area most agree is relatively well equipped to handle the growing load is the backbone, an optical fibre network criss-crossing North America that handles the majority of long-distance traffic. Whether UUNET, Bell or AT&T, almost all of the major carriers are upgrading their networks to full line-rate Optical Carrier-192 (OC-192), which transmits at 10Gbps.
Though fibre cable is still being laid, much is already in place. Now it must be turned on or, to use the industry vernacular, lit. As laser technology advances, the speed of transmission along the fibre increases dramatically. At OC-192 the laser pulses on and off 10 billion times per second, and each on-off pulse transmits one bit of information.
Not long ago companies were at OC-1 (51.84Mbps). According to Bill Gartner, Lucent Technologies’ general manager of optical networking systems in Holmdel, N.J., the only game in town in the 1980s was “could you make your laser turn on and off faster than the next guy?
“It is a constant race to deliver more wavelengths or higher bit rates or both to increase the amount of capacity we are putting on a fibre.”
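The OC numbering behind that race is simpler than it looks: each level is a multiple of the base OC-1 rate of 51.84Mbps, which is how OC-192 arrives at roughly 10Gbps. A back-of-the-envelope sketch in Python, using only the levels mentioned in this article:

    # Rough sketch: SONET optical carrier (OC-n) line rates scale linearly
    # with n from the OC-1 base rate of 51.84 Mbps.
    OC1_MBPS = 51.84

    def oc_rate_mbps(n):
        """Gross line rate of OC-n in megabits per second."""
        return n * OC1_MBPS

    for n in (1, 3, 192):  # the levels mentioned in this article
        print(f"OC-{n}: {oc_rate_mbps(n):,.2f} Mbps")
    # OC-192 works out to 9,953.28 Mbps -- the "10Gbps" figure quoted above.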
Though speeds are quickly increasing in this area, known as time division multiplexing (TDM), there are also tremendous advances in dense wavelength division multiplexing (DWDM). Put simply, TDM speeds up the traffic on the fibre, while DWDM increases the number of lanes.
DWDM increases the number of channels of traffic running down one fibre by splitting the light into multiple wavelengths, each carrying its own signal. Eighty or more wavelengths are now possible on one fibre, according to Asoka Valia, vice-president of the Optical Internet Group with Brampton, Ont.-based Nortel. Valia said later this year Nortel will release the OPTera 1600G DWDM system, which will be able to deliver 160 10Gbps wavelengths, pushing the traffic on a single fibre into the terabit range.
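The terabit claim is straightforward arithmetic: total capacity per fibre is the number of wavelengths multiplied by the bit rate each one carries. A quick illustration using the figures Valia cites (nothing here is a product specification):

    # Illustrative only: under DWDM, fibre capacity = wavelengths x per-wavelength rate.
    def fibre_capacity_gbps(wavelengths, gbps_per_wavelength=10):
        return wavelengths * gbps_per_wavelength

    print(fibre_capacity_gbps(80))   # 800 Gbps for today's 80-wavelength systems
    print(fibre_capacity_gbps(160))  # 1,600 Gbps (1.6 Tbps), the OPTera 1600G figure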
Gord Waites, vice-president of operations and development at AT&T Canada in Toronto, agrees that fibre technology has exploded.
“The developments in fibre optics in the last five to 10 years have been just astronomical in the capacity that has been carried through them,” he said.
As a case in point, he said AT&T has lit up about 4,500 miles of OC-192, while it wasn’t that long ago that OC-3 (155Mbps) was the maximum speed on fibre.
“Rather than people laying more and more strands in the ground, the technology is advancing so rapidly to allow more and more bandwidth to be sent down the same strand of fibre,” he added. Waites said no one has found a limit to how much information can be passed through fibre, and that in laboratory conditions fibre capacity is six times what is commercially available.
So why is the Web so slow?
“There are a myriad of reasons why access can be slow,” said Paul Giroux, vice-president of systems engineering for Sun Microsystems in Markham, Ont.
“There are too many points of failure along the way to pinpoint [the bottleneck]. It could be a lot of traffic on your cable, your ISP or the traffic itself can be a bottleneck depending on which way it is being routed. The destination site might be slow or they might have a slow pipe to their site.”
In other words, it could be anything between your computer and the one to which you are connecting.
But the overall consensus is that NAPs are the most likely bottleneck. NAPs, short for network access points, are public network exchanges where ISPs connect with one another. They are often located in large metropolitan areas like Chicago and New York and operated by companies like Pacific Bell, Sprint and Ameritech. Waites said they were initially set up as non-profit meeting points, where everyone contributed to the hardware and, through agreement, somebody managed the site.
“The biggest problem with NAPs is that everyone connects to them and there is too much traffic being exchanged from one carrier to the next,” said Steve Harris, a senior research analyst with International Data Corp. in New York.
Alan Strachan, a vice-president of Internet services with Telus in Burnaby, B.C., agrees. “When the Internet bogs it is usually in the major U.S. Internet access points like MAE-West and MAE-East,” he said. Because they are central access points, they are particularly vulnerable when an Internet event generates intense amounts of traffic. Strachan cited the bottlenecks created when the much-anticipated Star Wars: The Phantom Menace QuickTime video was released.
Carl Condon, vice-president of technology for Bell Canada in Toronto, agreed that NAPs are a bottleneck problem, partially because they are not well funded.
So how can this issue be solved? As is often the case in the world of business, when public fails, go private.
“Now the major carriers are trying to peer with each other directly,” Harris said. This relationship, called private peering, occurs when two companies forward each other’s information across a dedicated link.
Waites gives the example of companies like AT&T and Bell Nexxia linking to give their users a better experience on both sites. Condon agrees, saying Bell uses very little of the public NAPs, does most of its connecting directly with the major U.S. carriers and works closely with other Canadian telcos. Peering arrangements are almost always confidential.
This arrangement is great for the big players but tends to leave out the smaller ISPs. According to Harris, there are over 5,000 ISPs in the U.S. and most are quite small, and “all of the big carriers pretty much privately peer with the other big carriers.”
Harris added that the biggest problem with private peering is that the carriers don’t tend to pay each other and, since the little carriers usually hand off a lot more traffic to the big carriers than vice versa, the imbalance leaves the larger players with little incentive to peer with them. That leaves the smaller ISPs stuck using the overcrowded public NAPs, with no real consensus in the industry on how to solve the problem, according to Harris.
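The imbalance Harris describes is the crux of most peering negotiations: big carriers will typically exchange traffic settlement-free only when the flows in each direction are roughly even. A toy sketch of that logic in Python; the 2:1 threshold is a hypothetical policy figure, not one quoted by anyone in this article:

    # Toy model of a settlement-free peering decision based on traffic ratio.
    # The 2:1 threshold is hypothetical; real carriers set their own policies.
    def will_peer(sent_gbps, received_gbps, max_ratio=2.0):
        """Peer only if neither side hands off disproportionately more traffic."""
        ratio = max(sent_gbps, received_gbps) / min(sent_gbps, received_gbps)
        return ratio <= max_ratio

    print(will_peer(8.0, 7.0))  # two big carriers, roughly balanced: True
    print(will_peer(9.0, 1.0))  # a small ISP handing off far more than it takes back: False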
On the home stretch
If your packet of information has successfully made the trip across the country and through a NAP, then why does it often still not zoom along?
One remedy is better access. “I can hardly think of a case now where customers couldn’t get more [speed] if they connected to a fatter pipe to us,” UUNET’s Quance said.
According to AT&T’s Waites, over-subscription can also be a problem. “A very small ISP might have a T1 connection into the Internet, but if you aggregate all of the bandwidth they give to users, it combines to four or five times as much as their link.”
Quance added that one ISP might have three times the customer base, for the same bandwidth, as another ISP. That drives prices down, but it also drives down speed. “[Companies] want to have undiluted bandwidth, but of course price is the big measurement in any purchaser’s mind,” he added.
“Independent of how empty the highway is, you may have a problem getting home just because getting out of the parking lot takes 45 minutes. For companies who have lots of employees and slow access, the rest of the Internet might be lightning fast but your own access is clogged.”
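The over-subscription Waites and Quance describe is easy to put in numbers. The subscriber figures below are hypothetical, chosen only to show how quickly the bandwidth sold to customers outruns a single T1:

    # Hypothetical over-subscription example: bandwidth sold to customers
    # versus the ISP's single T1 upstream link.
    T1_MBPS = 1.544              # the ISP's link into the Internet

    customers = 50               # hypothetical subscriber count
    per_customer_mbps = 0.128    # e.g. a 128Kbps ISDN connection each

    sold_mbps = customers * per_customer_mbps
    print(f"Bandwidth sold: {sold_mbps:.1f} Mbps on a {T1_MBPS} Mbps link")
    print(f"Over-subscription ratio: {sold_mbps / T1_MBPS:.1f}x")  # roughly 4x, the range Waites cites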
Cost vs. speed
Though the price of bandwidth is falling, it is still a big cost for companies to bear, said Bill Arnaux, senior director of network projects at CANARIE in Ottawa.
“You are limited by your bank account.” And if your corporate headquarters are not in a major metropolitan area, “the cost to back haul (connect) you to Toronto, Montreal or Vancouver would be horrific,” he added. For every GM, Ford or Royal Bank-type company with OC-3 access, there are hundreds struggling along with ISDN or T1 (128Kbps or 1.5Mbps).
Though fibre may not be in every office building in Canada, phone lines are. With vastly improving technology, more and more bandwidth is available from those technologically dated copper wires.
Digital subscriber line, or DSL, allows faster speeds over telephone lines. Though DSL technology is distance dependent, speeds of up to 55Mbps are achievable over twisted pairs of copper wire at short distances (less than two kilometres) between an office and an Optical Network Unit, which connects to the carrier’s main fibre network. DSL technology is not yet available across the country but the offerings are expanding.
And if, after all of this has been remedied, you still find access to a site particularly slow? Do what everyone else does — blame the site. “There is nothing to stop someone from setting up a Web site and under-provisioning their hardware or under-provisioning their pipe to the Internet or selecting a provider who has a different set of economics,” Quance said.
Waites agreed. “Interestingly enough, most of the slowdowns are in the (site) servers. Companies will put up a site and view it as a marketing cost and not as a strategic element of their business.” They might not buy enough bandwidth to the ISP or, if the site is hosted at the ISP, cut corners with the server hardware, he explained.
Giroux added: “[Being] unresponsive is deadly in this marketplace; you are better to not be on the ‘net than to be on the net poorly.”
Strachan views the eventual development of the Internet as the old chicken-and-egg situation: if the Internet stays slow, fewer people will get on, which in turn means companies won’t have the revenue stream that would induce them to pay the high cost of speeding up access.