Big changes are coming to the National Science Foundation Network (NSFNET), Don Morris of SCD's Networking Group told participants at the Tenth SCD User Conference. A radically new network architecture is under construction, being developed in phases and scheduled for completion by next spring. NSFNET will be replaced by an infrastructure that will be operated by private network service providers in a competitive environment--in other words, NSFNET is going commercial.
NSF hopes that commercialization will invigorate the U.S. Internet, which has burgeoned in size and complexity since the founding of NSFNET in 1986. Commercialization will also further the goals of the High-Performance Computing and Communications (HPCC) initiative and the National Research and Education Network (NREN) program by creating a higher-performance network that can serve more users.
The transitional period could be "a bit rough," according to some observers, who speculate that network overloads, routing problems, and rising costs for network access may lie ahead. Such problems, however, might not materialize.
Why did NSF decide to revamp NSFNET? One reason, Don said, is that the agency is mandated by Congress to seed technologies and pass them on to the private sector. NSF initiates projects that would normally not be undertaken by commercial establishments; when the projects are successful, NSF transfers the technology. (In this case, the technology is a national research-and-development network.)
Another contributing factor was that the cooperative agreement with Advanced Network and Services (ANS), the company that currently operates NSFNET, expired in May 1993. (The agreement has been temporarily extended.)
Accordingly, NSF put out a solicitation last year for a new network infrastructure that would turn control over to the private sector. Contracts were awarded in spring 1994, and the NSFNET reconfiguration is progressing rapidly.
Don explained that, broadly speaking, the NSFNET-based infrastructure until recently consisted of three layers:
- The original high-speed NSFNET backbone, created eight years ago as a research-and-development network. At the outset, the backbone connected the six facilities that were then designated as "supercomputing centers"; today it connects sixteen nodes.
- Regional networks connecting to the backbone. Examples include WESTNET (interconnecting the Rocky Mountain region), NEARNET (New England Academic and Research Network), SURANET (Southeastern Universities Research Association Network), and MIDNET (serving the "Big Eight" universities). There are many other regional networks.
- Smaller organizations and campus networks connecting to the regional networks.
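The three-layer arrangement above can be pictured as a simple chain of "uplinks" from campus to regional network to backbone. The sketch below is purely illustrative (the specific attachments shown are examples, not a map of the actual 1994 topology):

```python
# Toy model of the old three-layer NSFNET hierarchy. Each network records
# the network it connects "up" to; the backbone has no uplink.
# Network names and attachments here are illustrative examples only.
uplinks = {
    # campus networks -> regional networks
    "ucar.edu": "WESTNET",
    "mit.edu": "NEARNET",
    # regional networks -> backbone
    "WESTNET": "NSFNET backbone",
    "NEARNET": "NSFNET backbone",
}

def path_to_backbone(network):
    """Follow uplinks from a network until the backbone is reached."""
    path = [network]
    while path[-1] in uplinks:
        path.append(uplinks[path[-1]])
    return path

print(" -> ".join(path_to_backbone("ucar.edu")))
# prints: ucar.edu -> WESTNET -> NSFNET backbone
```

The point of the model is simply that, in the old infrastructure, every campus reached the rest of the Internet through exactly one hierarchy that terminated at the NSFNET backbone.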
This entire infrastructure (backbone, regional, and campus networks) is linked to a number of government networks via two Federal Information eXchange (FIX) access points. Examples of government networks include the NASA Science Internet (NSI), the Department of Energy's Energy Sciences Network (ESnet), and the U.S. Military Network (MILNET).
In addition, the NSF infrastructure is linked to various commercial-traffic networks through the Commercial Internet eXchange (CIX) in the western U.S. and the Metropolitan Area Ethernet (MAE-East) in the eastern U.S. Examples of commercial-traffic networks include NETCOM Corporation and Colorado Supernet.
Figure 1 shows a diagram of the old infrastructure. (Note: Most of the international traffic to and from the U.S. Internet goes through the International Connection Manager, or ICM, operated by Sprint.)
In the new infrastructure (see Figure 2), certain elements are being eliminated or rearranged and others added.
High-speed backbone

The original NSFNET backbone will be dismantled. A new backbone (the very-high-speed Backbone Network Service, or vBNS) is under construction by MCI, which was awarded the contract to build and operate it. The vBNS will connect the five currently funded NSF metacenters (including NCAR) and provide service at 155 megabits per second. It will have a strict acceptable-use policy and be used only for "meritorious applications" (the definition of which is yet to be determined). To use the vBNS, applicants must make a special request and receive an NSF grant. In other words, the vBNS will not be available for ordinary production networking--or e-mail!
All other network traffic will be handled by commercial network service providers.
NSPs and NAPs

Commercial network service providers (NSPs) will provide Internet Protocol connectivity to the Internet, will not be subject to NSF control, and will have no acceptable-use policy (ergo: anything goes). However, to qualify as an NSP, a provider must connect to all three priority network access points (NAPs). The NAPs were mandated by the NSF as a way to ensure interoperability among the various providers.
Modeled after the current FIX and CIX exchange points, NAPs are hubs where the vBNS and NSPs will interconnect to exchange traffic. Three priority NAPs are now being constructed--one in the San Francisco Bay area, one in Chicago, and one in New Jersey. The first two NAPs are Asynchronous Transfer Mode (ATM) switches; the last is a Fiber Distributed Data Interface (FDDI) ring. (A fourth, nonpriority NAP is being built in Washington, D.C.) Connection to a priority NAP requires an initial fee, an annual fee, and phone lines; this will limit the number of NSPs. Large providers such as SprintLink, PSI, and Alternet should have no problem with the cost, Don said. Smaller providers, however, might not be able to afford the expense.
Regional, government, and private networks

In the old infrastructure, regional networks connected directly to nodes on the NSFNET backbone. Under the new plan, regional networks will connect to one or more NSPs (or, alternatively, to one or more NAPs). Either way, network connectivity will be expensive. For the next four years, NSF will subsidize the cost at a decreasing rate for the regional networks that won awards under the solicitation; after that, subsidization will stop. Whether regional networks will persist after the funding runs out remains to be seen, Don said. Conceivably, regional networks might dissolve; smaller organizations and campus networks would then negotiate their own connections with NSPs.
Under the new plan, government networks that used to connect to the NSF backbone via two FIX access points will use the FIX access points to connect to the NAPs instead. The future of the CIX and MAE access points, through which commercial networks used to connect to the NSF backbone, has not yet been determined.
And the metacenters?

All traffic from the five NSF metacenters other than "meritorious applications" will be carried by NSPs; the metacenters will have to pay for NSP connectivity and service. This means that Internet service that was once subsidized by the NSF and seemed to be "free" will now cost money--in the case of NCAR, as much as $250,000 per year to retain the same level of service we now have, Don estimated. NCAR may be able to operate with less bandwidth; SCD is gathering statistics to determine the maximum bandwidth required.
Routing arbiter

The last element of the new NSFNET infrastructure is the routing arbiter. This is an organization that will provide database management for information (for example, network topology, routing policies, and interconnectivity data) that can be used by NSPs to build routing configurations. The arbiter will make this information publicly accessible, but will not mandate its use. (In the old infrastructure, the organization providing backbone service also acted as the routing authority, controlling and configuring routing at the NSFNET backbone nodes.)
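The arbiter's role--maintaining policy data that providers may consult but are never forced to use--can be illustrated with a small sketch. The record fields, provider names, and NAP names below are hypothetical; the actual format of the arbiter's database is not described here:

```python
# Hypothetical sketch of a routing-arbiter style policy database.
# Each record says which provider originates a network prefix and at
# which NAPs its announcements could be accepted.
records = [
    {"network": "192.0.2.0/24",
     "origin": "NSP-A",
     "accept_at": ["NAP-Chicago", "NAP-SF"]},
    {"network": "198.51.100.0/24",
     "origin": "NSP-B",
     "accept_at": ["NAP-NJ"]},
]

def policies_at(nap):
    """List the network prefixes whose announcements the database says
    could be accepted at a given NAP. Providers may consult this when
    building routing configurations, but nothing obliges them to."""
    return [r["network"] for r in records if nap in r["accept_at"]]

print(policies_at("NAP-Chicago"))
# prints: ['192.0.2.0/24']
```

The design point this illustrates is the separation of concerns in the new infrastructure: the arbiter publishes the data, while each NSP independently decides how (and whether) to apply it to its own routers.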
While the new NSFNET infrastructure is aimed at fostering growth and serving an expanding community of users, some people have raised concerns. One is that the NSPs, in order to recover their own costs, might start metering service (that is, charging for the number of packets sent); this could force institutions such as NCAR to limit their own network traffic. Another concern is that most network traffic will have no acceptable-use policy; this will probably lead to a deluge of commercial messages. That deluge, coupled with the rapid growth of services such as the World Wide Web and the current lack of NSP bandwidth, might "swamp" the network. Finally, it is uncertain how international network traffic will be handled: a message traveling from Japan to Germany via the U.S. may no longer be able to take a free trip on the NSFNET highway.
Be this as it may, optimists are predicting a positive--if temporarily shaky--passage toward a stronger and more widely accessible Internet.