Internet: Backbone

The first Internet backbone was invented to assist in the attempt to share supercomputers. The U.S. government realized that supercomputing was crucial to advances in science, defense, and economic competitiveness, but the budget for research was insufficient to provide supercomputers for all scientists who needed them. Thus, the first Internet backbone, called the NSFNET because it was funded by the U.S. National Science Foundation (NSF), linked six supercomputing centers (University of California-San Diego, National Center for Atmospheric Research, National Center for Supercomputing Applications at the University of Illinois, Pittsburgh Supercomputing Center, Cornell University, and the John von Neumann Supercomputing Center at Princeton) and their associated regional networks in the United States in order to provide supercomputer access to scientists. Today, that single government-managed Internet backbone has been transformed into a multitude of different backbones, most of which are private commercial enterprises.

Backbone Basics

A backbone is a high-speed wide area network (WAN) that connects lower speed networks. A country typically has several backbones linking all of its Internet Service Providers (ISPs). In the United States, these backbones are linked at a small number of interconnection points. Finally, national backbones interconnect in a mesh with those of other countries, usually via terrestrial, undersea, or satellite trunk lines.

The current Internet is a loose connection of TCP/IP networks organized into a multilevel hierarchy using a wide variety of technologies. At the lowest level, computers are connected to each other, and to a router, in a local area network (LAN). Routers can be connected together into campus, metropolitan, or regional networks. Non-backbone ISPs exist solely to provide Internet access to consumers. For Internet connectivity, at some point all non-backbone networks must connect to a backbone ISP (the highest level). It is typical for a large corporation to connect with one or more backbone ISPs. Backbone and non-backbone ISPs exchange traffic at what are generally called peering points. Federal agencies have always shared the cost of common infrastructure such as peering points for interagency traffic; the Federal Internet Exchanges (FIX-E and FIX-W) were built for this purpose and have served as models for the Network Access Points (NAPs) and "*IX" facilities that are prominent features of today's Internet.
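
This hierarchy can be illustrated with a short Python sketch in which traffic climbs from a LAN through a regional (non-backbone) ISP up to a backbone ISP; all network and NAP names in it (campus_lan_A, MidwestNet, BackboneOne, and so on) are hypothetical.

```python
# Minimal sketch of the multilevel hierarchy described above.
# All names are hypothetical; the point is that every non-backbone network
# ultimately reaches a backbone ISP, and backbones meet at peering points.

hierarchy = {
    # LANs attach to regional (non-backbone) networks ...
    "campus_lan_A": "MidwestNet",
    "campus_lan_B": "BayAreaNet",
    # ... which in turn buy connectivity from a backbone ISP (highest level).
    "MidwestNet": "BackboneOne",
    "BayAreaNet": "BackboneTwo",
}

# Backbones exchange traffic with each other at peering points.
peering_points = {("BackboneOne", "BackboneTwo"): "Chicago NAP"}

def path_to_backbone(network: str) -> list[str]:
    """Follow the hierarchy upward until a backbone ISP is reached."""
    path = [network]
    while network in hierarchy:
        network = hierarchy[network]
        path.append(network)
    return path

if __name__ == "__main__":
    # Traffic from campus_lan_A to campus_lan_B climbs to BackboneOne,
    # crosses to BackboneTwo at the peering point, then descends to BayAreaNet.
    print(path_to_backbone("campus_lan_A"))  # ['campus_lan_A', 'MidwestNet', 'BackboneOne']
    print(path_to_backbone("campus_lan_B"))  # ['campus_lan_B', 'BayAreaNet', 'BackboneTwo']
```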

Technology

In the NSFNET, each interconnected supercomputing site had an LSI-11 microcomputer called a fuzzball. These fuzzballs ran TCP/IP and were connected by 56 Kbps leased lines. NSFNET was an immediate success and was overloaded from the day it started. NSFNET version 2 leased 448 Kbps fiber optic channels, and IBM PC-RT workstations were used as routers. In 1990 NSFNET version 3 was upgraded to T1 lines (1.544 Mbps). Later that same year, NSFNET upgraded to T3 lines (45 Mbps). European backbone networks (e.g., EBONE) had a similar evolution from 2 Mbps to 34 Mbps. Current speeds of Internet backbones are based on SONET framing rates in the Gbps range.
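
To put these line rates in perspective, the following back-of-the-envelope Python calculation compares how long a hypothetical 100-megabyte transfer would take at each generation of backbone speed mentioned above; the transfer size is an arbitrary illustration, and real throughput would be lower because of protocol overhead.

```python
# Rough comparison of the NSFNET backbone generations named above.

line_rates_bps = {
    "NSFNET v1 (56 Kbps)":        56_000,
    "NSFNET v2 (448 Kbps)":       448_000,
    "NSFNET v3 (T1, 1.544 Mbps)": 1_544_000,
    "NSFNET (T3, 45 Mbps)":       45_000_000,
}

transfer_bits = 100 * 10**6 * 8  # 100 megabytes expressed in bits

for name, rate in line_rates_bps.items():
    minutes = transfer_bits / rate / 60
    print(f"{name:30s} {minutes:8.1f} minutes")
```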

Because peering points handle large volumes of traffic, they are typically complex high-speed switching networks in themselves, even though they are concentrated in a small geographical area (often a single building). Commonly, peering points use ATM switching technology at the core to provide traffic quality-of-service management, with IP running on top.

Transmission Mechanisms

The Internet can be viewed as a collection of sub-networks, or Autonomous Systems (ASes), each of which is controlled by a single administrative authority. These ASes are interconnected by routers and high-speed lines. Routing within an AS (interior routing) does not have to be coordinated with other ASes. Two routers that exchange routing information are exterior routers if they belong to two different ASes. For scalability, each AS designates a small number of exterior routers. The protocol that exterior routers use to advertise routing information to other ASes is called the Exterior Gateway Protocol (EGP).
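
The Python sketch below is a toy model of this arrangement, not the actual EGP message format: each AS's designated exterior router advertises the networks reachable through it to an exterior router in a neighboring AS. The AS numbers and prefixes are invented for illustration.

```python
# Toy model of exterior routing between Autonomous Systems (ASes).
from dataclasses import dataclass, field

@dataclass
class ExteriorRouter:
    as_number: int
    reachable: set[str] = field(default_factory=set)      # networks inside this AS
    routes: dict[str, int] = field(default_factory=dict)  # network -> next-hop AS

    def advertise(self, neighbor: "ExteriorRouter") -> None:
        """Tell a neighboring AS which networks are reachable through this AS."""
        candidates = {net: self.as_number for net in self.reachable}
        candidates.update(self.routes)
        for network, next_hop in candidates.items():
            # Simple precaution: never advertise a route back to the AS it
            # was learned from. Interior routes are never exchanged at all;
            # only "reachable via this AS" information crosses the boundary.
            if next_hop == neighbor.as_number:
                continue
            neighbor.routes.setdefault(network, self.as_number)

as_100 = ExteriorRouter(100, reachable={"10.1.0.0/16"})
as_200 = ExteriorRouter(200, reachable={"10.2.0.0/16"})

as_100.advertise(as_200)
as_200.advertise(as_100)

print(as_100.routes)  # {'10.2.0.0/16': 200}
print(as_200.routes)  # {'10.1.0.0/16': 100}
```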

From the point of view of an exterior router, the Internet consists of other exterior routers and the lines connecting them; two exterior routers are considered connected if they share a common network, and all networks can be grouped into three categories: (1) stub networks (all traffic goes to end computer systems); (2) transit networks (all traffic goes to other ASes); and (3) multiconnected networks (traffic goes to both end computer systems and selectively to other ASes).
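
A minimal Python sketch of this three-way classification, assuming a network is described only by whether it carries traffic to end computer systems and whether it carries traffic to other ASes, might look like this:

```python
# Hypothetical classification of a network by where its traffic goes,
# following the three categories described above.

def classify(to_end_systems: bool, to_other_ases: bool) -> str:
    if to_end_systems and to_other_ases:
        return "multiconnected"
    if to_other_ases:
        return "transit"
    if to_end_systems:
        return "stub"
    return "isolated (carries no traffic)"

print(classify(True, False))  # stub
print(classify(False, True))  # transit
print(classify(True, True))   # multiconnected
```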

Routers in the NSFNET Internet backbone exchanged routing information periodically; once a single backbone router learned a route, all backbone routers learned about it. When the single Internet backbone became multiple backbones, the Internet transitioned from a core backbone routing architecture to a peer backbone routing architecture with interconnections at several points. While the desired goal is shortest-path routing, peer routing aggregates routes between exterior routers rather than carrying individual routes for individual computers, which may not result in shortest paths. Also, peer backbones must agree to keep routes consistent among all exterior routers or routing loops (circular routes) will develop.
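
Route aggregation can be illustrated with Python's standard ipaddress module; the prefixes below are hypothetical, and the point is that one aggregate announcement replaces several individual routes, potentially at the cost of shortest-path detail.

```python
import ipaddress

# Four individual routes that an exterior router could advertise separately.
individual_routes = [
    ipaddress.ip_network("192.168.0.0/24"),
    ipaddress.ip_network("192.168.1.0/24"),
    ipaddress.ip_network("192.168.2.0/24"),
    ipaddress.ip_network("192.168.3.0/24"),
]

# Aggregation collapses contiguous prefixes into one announcement.
aggregated = list(ipaddress.collapse_addresses(individual_routes))
print(aggregated)  # [IPv4Network('192.168.0.0/22')]

# The aggregate hides detail: all four /24s are reached over the same path,
# even if one of them would have had a shorter path of its own, which is why
# aggregation may not produce shortest-path routing.
```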

History

The U.S. Department of Defense funded research on interconnecting computers in networks using packet switching that eventually culminated in a wide area network called ARPANET. ARPANET was the network (there were no other networks to connect to) that linked Advanced Research Projects Agency (ARPA) researchers working on central Internet ideas like TCP/IP. Although ARPANET was successful, it was restricted to sites that received funding from ARPA. However, many other universities were interested in forming networks using packet switching, even if they did not receive ARPA funding. This led to the construction of a series of backbone networks, some based on protocols other than TCP/IP (e.g., SNA, DECNET), before the NSFNET linked isolated regional TCP/IP networks into an Internet backbone:

  • BITNET: academic IBM mainframe computers;
  • CSNET: NSF/Computer Science community;
  • EARN: European Academic and Research Network;
  • ESNET: U.S. Department of Energy network;
  • FIDONET: dial-in e-mail network;
  • JANET: U.K. Joint Academic Network;
  • HEPNET: U.S. Department of Energy (high-energy physicists);
  • MFENET: U.S. Department of Energy (Magnetic Fusion Energy);
  • NASA Science Internet: U.S. National Aeronautics and Space Administration;
  • SPAN: NASA space scientists (DECnet);
  • USAN: NSF satellite academic network;
  • USENET: based on the UUCP communication protocols built into AT&T's UNIX.

From 1985 to 1988, regional TCP/IP networks were formed around government institutions and universities, most supported with government funding. In 1988 these NSF regional and mid-level TCP/IP networks were interconnected via a backbone funded by NSF (also supported by donations from IBM, MCI, and MERIT). Given the Internet's growth in capacity demand, NSF realized it could not pay for managing a network forever, so it did three things.

First, in 1992 it encouraged IBM, MERIT, and MCI to form a nonprofit company, Advanced Networks and Services (ANS), which built the first private Internet backbone called ANSNET.

Second, to ease transition and make sure regional networks could still communicate, NSF funded four different network operators to establish Network Access Points (NAPs): PacBell (San Francisco), Ameritech (Chicago), MFS (Washington, D.C.), and Sprint (Pennsauken, NJ). Every new Internet backbone provider had to connect to all four NAPs if it wanted to receive NSF funding. This arrangement meant that regional networks would have a choice of potential new Internet backbone providers to transmit traffic between NAPs. Other NAPs have also emerged: Metropolitan Area Exchange (MAE)-East, MAE-West, and Commercial Internet Exchange (CIX).

Third, NSF enforced an "Acceptable Use Policy" on the NSFNET, which prohibited usage "not in support of Research and Education." The predictable (and intended) result was stimulation of other commercial backbone networks in addition to ANSNET such as UUNET and PSINET.

NSF's privatization policy culminated in 1995 with the defunding of the NSFNET backbone. The funds previously earmarked for the NSFNET were competitively redistributed to regional networks to buy Internet backbone connectivity from the numerous new private Internet backbone networks that had emerged.

At about the same time in the mid-1990s, a European Internet backbone, EBONE, was formed, consisting of twenty-five networks interconnecting regions of Europe. Each country in Europe has one or more national networks, each of which is approximately comparable to an NSF regional network.

Technical Issues

As a result of having multiple backbones for Internet traffic, different agreements have evolved for handling traffic between networks at NAPs. The most common agreements between backbones take two forms: (1) peering, in which traffic is exchanged at no cost; and (2) transit, in which one backbone pays another to deliver its traffic. As a result of increased congestion at NAPs, most backbones have begun to interconnect directly with one another outside of NAPs in what has come to be known as private peering. At one point it was estimated that 80 percent of Internet traffic was exchanged via private peering. Many backbones have taken a hybrid approach, peering with some backbones and paying for transit on others. Those few backbones that interconnect solely by peering and do not need to purchase transit from any other backbones are referred to as top-tier backbones. Because of the proprietary and dynamic nature of this information, it is difficult to state with accuracy the exact number of top-tier backbones, but the following have been reported as such: Cable & Wireless, WorldCom, Sprint, AT&T, and Genuity (formerly GTE Internetworking).
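
As a rough illustration of the difference between the two agreement types, the Python sketch below totals a hybrid backbone's interconnection costs; the backbone names, per-gigabyte transit price, and traffic volumes are invented purely for illustration.

```python
# Simplified sketch of the two interconnection agreements described above.

def monthly_cost(gb_exchanged: float, agreement: str,
                 transit_price_per_gb: float = 0.05) -> float:
    """Peering exchanges traffic at no per-unit cost; transit is billed per GB."""
    if agreement == "peering":
        return 0.0
    if agreement == "transit":
        return gb_exchanged * transit_price_per_gb
    raise ValueError(f"unknown agreement type: {agreement}")

# A hybrid backbone peers where it can and buys transit where it must.
traffic = {"BackboneA": ("peering", 400_000), "BackboneB": ("transit", 150_000)}
total = sum(monthly_cost(gb, kind) for kind, gb in traffic.values())
print(f"total monthly interconnection cost: ${total:,.2f}")
```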

Recently, there has been a call for regulation of backbone interconnection agreements because larger backbones have started to refuse to peer with smaller backbones. There is no accepted convention that governs when it is beneficial for two backbones to peer. While intuition would suggest that backbones of equal size should peer, there are many measures of backbone size (geographic coverage, transmission capacity, traffic volume, and number of customers), and it is unlikely that any two backbones would match on many of these metrics. There is a growing consensus that Internet backbone interconnection agreements are complex contracts with private costs and benefits that can only be decided upon by the participating backbones.

see also Internet; Embedded Technology (Ubiquitous Computing); World Wide Web.

William Yurcik

*IX: short for "Internet Exchange"; the asterisk indicates that there are different possible types of Internet Exchanges. The two most common are the Commercial Internet Exchange (CIX) and the Federal Internet Exchange (FIX).
