The NSFNET Backbone Project, 1987–1995
NSFNET: A Partnership for High-Speed Networking
NSFNET: Transition to T-3
During the first three years of the project, NSFNET team members focused on building and improving the NSFNET T1 backbone network service. In 1989 the backbone was reengineered, increasing the number of T1 circuits so that each site had redundant connections to the NSFNET backbone as well as increasing router capability to full T1 switching. These changes significantly increased the robustness and reliability of the backbone service. IBM and Merit also continued to develop new network management and routing tools.
Perhaps the most concrete example of the NSFNET’s success, however, was the exploding growth in usage by the research and education community. Although applications such as telnet, e-mail, and FTP were not especially “showy,” once connected, scientists and researchers wondered how they had ever gotten along without them. Their enthusiasm led to monthly increases of over 10% in usage of the NSFNET backbone, a growth rate the overall Internet has continued to experience to this day.
“When we first started producing those traffic charts, they all showed the same thing: up and up and up! You probably could see a hundred of these, and the chart is always the same,” recalls Ellen Hoffman, “whether it is growth on the Web, growth of traffic on the Internet, or growth of traffic on the regionals. You didn’t think it would keep doing that forever, and it did. It just never stopped.”
By 1989, the tremendous growth in use of the NSFNET prompted the NSF and the other team members to think about expanding the backbone. The traffic load on the backbone had increased to just over 500 million packets per month, representing a 500% increase in only one year. Every seven months, traffic on the backbone doubled, and this exponential growth rate created enormous challenges for the NSFNET team.
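As a rough check, the growth figures quoted above hang together: a sustained increase of about 10% per month implies a doubling time of roughly seven months. A minimal sketch of the arithmetic (the 10% rate and the 500-million-packet figure come from the text; the projection itself is illustrative):

```python
import math

# Doubling time implied by ~10% month-over-month traffic growth.
monthly_growth = 0.10
doubling_months = math.log(2) / math.log(1 + monthly_growth)
print(f"Doubling time at 10%/month: {doubling_months:.1f} months")  # ~7.3

# Projecting one year ahead from ~500 million packets/month:
packets = 500e6
for month in range(12):
    packets *= 1 + monthly_growth
print(f"Projected load after 12 months: {packets / 1e6:.0f} million packets")
```

At that compounding rate, a year of growth more than triples the monthly traffic load, which is why the team began planning a capacity upgrade rather than incremental tuning.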
The NSF also had encouraged greater use of the network by funding many more universities and colleges to connect to the NSFNET through its “Connections” program. IBM and Merit conducted tests based on usage statistics, and projected that the NSSs on the T1 network would saturate by 1990. An 18-month performance review by an outside committee had pronounced the team’s performance excellent, and this encouraged the partners to develop a working plan for the upgrade. In view of this continuing demand, the NSFNET partners felt that exercising the option to deploy a higher-capacity network was the right thing to do.
In January of 1989, in an Executive Committee meeting, the partners introduced their plan to upgrade the backbone network service to T3. The NSF also wished to add a number of new backbone nodes, so Merit was asked to prepare proposals for the added cost of new backbone node sites at T1 and T3 speeds, while the NSF issued a solicitation to the community for those interested in becoming new NSFNET node sites. The NSF eventually decided that the team would implement eight new T3 nodes and one new T1 site: of the eight T3 nodes, six would be upgrades of original T1 nodes, and two would be entirely new T3 node sites. Finally, the remaining T1 nodes would also be upgraded to T3, increasing the total number of backbone nodes on the NSFNET from 13 to 16, all running at 45 Mbps (see the accompanying map: ftp://nic.merit.edu/nsfnet/final.report/t1data.html).
The first step was to upgrade the test network to T3, using collocated T3-capable NSSs and T3 circuits. Throughout 1989 and 1990, the partners, particularly IBM, strove to implement the brand-new technology on the test network. Central to the new network were the improved node systems.
“We built the first robust, high-performance T3 routers operational anywhere in the world,” says Alan Baratz, Director of IBM’s High-Performance Computing and Communications activities within IBM Research. Each T3 node consisted of an IBM RS/6000 workstation that performed many of the same functions as the nine IBM RTs that made up each NSS, but in a single box and at much higher speed.
Two types of T3 routers were deployed on the backbone. Core Nodal Switching Subsystems (CNSSs), located at junction points on MCI’s network, enabled a much closer match between MCI’s circuit-switched network and the NSFNET backbone’s packet-switched network. Exterior Nodal Switching Subsystems (ENSSs) were installed at regional networks attached to the NSFNET, and acted as end nodes for the backbone. These next-generation packet switches in the nodes would eventually switch over 100,000 packets per second.
“In essence, it was a matter of developing some new, leading-edge technology that utilized some of the principles of parallel processing, where each adapter card, running a subset of the UNIX operating system and its own copy of the IP protocol, forwarded packets at very high speeds,” according to Rick Boivie, NSFNET Development Manager at IBM.
Another important part of the new T3 backbone service was MCI’s digital cross-connect systems, which made the routing, management and (in case of trouble) restoring of dedicated circuits “as simple as sending commands to electronic switches in the MCI junctions,” explains Matthew Dovens.
At Net ’90 in Washington D.C., a yearly networking conference sponsored primarily by EDUCOM, the NSFNET partners exhibited a prototype T3 service in a ground-breaking demonstration, linking the conference site to Merit and the NSFNET backbone at 45 Mbps. At the press conference, Steve Wolff, Doug Van Houweling, Mike Connors, at the time Director of Computing Systems at IBM, Dick Liebhaber, and Jamie Kenworthy from the State of Michigan spoke to national print and broadcast media representatives about the NSFNET project and the proposed T3 upgrade.
Upgrading the NSFNET backbone service to T3 provided both challenges and opportunities to the project team members as well as the research and education community they served. The tremendous advances required to develop and implement the next generation of high-speed networking would push the envelope of technology yet again, and if successful would raise the level of performance of the network another giant notch, enabling more and greater capabilities for the users of the network. But the challenges and opportunities were not only technical in nature. It was becoming clear to the partners that to the extent that the community relied upon a robust, high-performance backbone of greater and greater capacity, more investment would be needed to satisfy the demand. In addition, non-academic organizations willing to pay commercial prices increasingly desired Internet connectivity, but were restricted from using the NSFNET backbone due to the NSF’s Acceptable Use Policy, which defined the “research and education traffic that may properly be conveyed over [the] NSFNET [backbone service].”(3)
This growing use of the Internet for purposes other than research and education, together with the challenges presented by an upgrade of the backbone service to T3, precipitated an opportunity of another kind: the next stage of technology transfer and the beginning of the Internet industry.
In September of 1990, the NSFNET team members announced the creation of a new, independent nonprofit corporation, Advanced Network & Services, Inc. Al Weis, president of ANS, explains the thinking behind the formation of ANS:
“No one had ever built a T3 network before; there were no commercial T3 routers available and telephone companies had little experience with clear-channel T3 lines. I knew what IBM had and didn’t have, and I realized that if we were going to make this jump to 45 Mbps, with the network constantly growing, it was going to be a very difficult jump. To do this, you had to have an organization that was technically very strong and was run with the vigor of industry.”
According to Weis, the commitment to commercial provision of high-speed networking would attract corporate customers, which would in turn provide more funds to support the backbone from which the research and education community benefited. From now on, ANS would provide the NSFNET backbone service as a subcontractor to Merit, and would also take over the Network Operations Center. IBM and MCI initially contributed $4M to ANS, with an additional $1M pledged from each, as well as personnel and equipment; Merit was represented by Doug Van Houweling on the Board of Directors of ANS. In May of 1991, ANS spun off a for-profit subsidiary called ANS CO+RE Systems, “so that if we did anything that was commercial and taxable, we could pay tax on it,” says Weis. “Both IBM and MCI felt a need to spin off an organization that could truly focus on evolving the network further,” says Matthew Dovens, who became the MCI on-site liaison to ANS.
Team members, including ANS, began the painstaking process of installing, configuring and debugging the new nodes. Almost from the start, they ran into difficulties. Doug Van Houweling describes the enormity of the technical challenges:
“The T1 network required that we build new packet switches, but the fundamental T1 data transmission technology was pretty solidly in place. What we didn’t understand when we went to T3 was that we not only had to do the switches all over again, but we also had to learn how to transmit data over a full T3 line, which wasn’t being done. So when we built the T3 network we had two technology frontiers to overcome, and they interacted with one another in all kinds of vicious ways.”
Throughout 1991 several T3 nodes were installed, but production traffic remained on the T1 NSFNET, creating more problems due to congestion and overloading. “The engineers described the situation as peeling back the layers of an onion,” explains Ellen Hoffman.
“Every time you thought you had something solid, it turned out to be only the paper layer on the top and there was another paper layer underneath it, and so you just keep going through layer after layer. You have solved a problem and all it does is reveal the next one. And so if you think of the Internet as a set of layers in terms of protocols, you could just assume that every time you hit another layer, you would have another problem, and another. It was tough!”
All of the partners were under tremendous pressure. “At times engineers were working more than a hundred hours a week,” remembers Hoffman. Knopper explains the breadth of work to be done: “We were increasing the traffic of the network, and reengineering it, and bringing on new attachments, and routing tables were growing exponentially–but what I didn’t realize was that even the day-to-day stable operation of the system was having problems.” These challenges demanded everything the NSFNET team members had to give.
The last of the sixteen T3 sites was completed in the fall of 1991, over the Thanksgiving holiday, and production traffic was phased in. The new T3 backbone service, provided for the NSFNET by ANS, represented a 30-fold increase in bandwidth and took twice as long to complete as the T1 network. It now linked sixteen sites and over 3500 networks, carrying data at the equivalent of 1400 pages of single-spaced text per second. The new and improved NSFNET backbone service provided the research and education community with state-of-the-art communications networking and connectivity to the fastest production network in the world (see the accompanying map: ftp://nic.merit.edu/nsfnet/final.report/t3map.html).
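The headline figures above can be checked with back-of-the-envelope arithmetic. The T1 and T3 line rates (1.544 Mbps and 44.736 Mbps) are standard; the ~4,000-character page size used below is an assumption, not a figure from the source:

```python
# Back-of-the-envelope check of the "30-fold" and "1400 pages/second" figures.
T1_BPS = 1.544e6        # T1 line rate, bits per second
T3_BPS = 44.736e6       # T3 line rate, bits per second
CHARS_PER_PAGE = 4000   # assumed single-spaced page, one byte per character

speedup = T3_BPS / T1_BPS
pages_per_second = T3_BPS / 8 / CHARS_PER_PAGE

print(f"T3 vs. T1: {speedup:.0f}x")                  # ~29x, i.e. roughly 30-fold
print(f"Pages of text per second: {pages_per_second:.0f}")  # ~1400
```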
The introduction of a new corporate structure to the NSFNET project, and the resulting organizational complexity, created controversy among members of the research and education community, as well as other members of the Internet community. According to George Strawn, there were two main issues of concern with regard to the so-called “commercialization and privatization of the Internet.” One was what effects the NSF’s ongoing support of the NSFNET would have on the fledgling Internet service provision market; new companies such as Performance Systems International (PSI) and AlterNet charged that the NSF was unfairly competing with them. The second was the research and education community’s concern that “commercialization”–part of which was the perception that ANS rather than Merit would provide the NSFNET backbone service–would affect the price and quality of their connection to the NSFNET and, by extension, the Internet. They wanted to “keep it in the family,” says Strawn.
On yet another front, the regional and midlevel networks were beginning to attract commercial customers, and wanted that business for much the same reasons that ANS was created: to support themselves and the research and education community. However, they felt constrained by the NSF’s Acceptable Use Policy, which specified the nature of traffic allowed to traverse the NSFNET backbone service. Purely commercial traffic was not directly in support of research and education and was thus restricted from the NSFNET backbone. “Something had to happen to break loose the whole commercial issue,” says Ellen Hoffman.
“There had been a number of discussions about the need to be able to transmit commercial traffic, and the unhappiness with having to split your own regional network into a commercial segment and an educational segment. ‘I have this network here in Michigan. Today it has 30 universities connected to it at the colleges, but Ford Motor wants to connect to it. If Ford Motor is not particularly interested in educational traffic, do I have to build a whole new network for Ford and create a whole new Internet for Ford? Or should I take advantage of the infrastructure that’s already there?’ That was the question we were trying to answer.”
None of these issues had touched the community before, and the NSFNET partners, including the NSF, found themselves in the midst of roiling debate. But according to those involved, the type of Internet services the NSFNET offered to the research and education community would soon be able to be obtained by the private sector. “It had to come,” says Steve Wolff,
“because it was obvious that if it didn’t come in a coordinated way, it would come in a haphazard way, and the academic community would remain aloof, on the margin. That’s the wrong model: multiple networks again, rather than a single Internet. There had to be commercial activity to help support networking, to help build volume on the network. That would get the cost down for everybody, including the academic community–which is what the NSF was supposed to be doing.”
Overall, the partners now realize that more steps could have been taken to reassure the research and education community that the NSFNET backbone service would continue to provide them with high-speed networking capabilities as required by the cooperative agreement with the NSF. “In retrospect, I think we should have paid more attention to better communication about our activities and objectives,” says Doug Van Houweling. Merit, IBM, and MCI believed that the formation of ANS would allow the team members to expand the network to the benefit of the research and education community. “The network we could have built with only NSF’s money would not have been as robust. It would have provided connections, but it wouldn’t have had the same degree of redundancy, for example,” explains Elise Gerich.
From the NSF’s viewpoint, “commercial use of the network … would further the objectives of NSFNET, by enhancing connectivity among commercial users and research and education users and by providing for enhancements to the network as a whole.”(4) However, as part of Congressional hearings related to the NSF budget, and an internal report by the Inspector General of the NSF, the project team was questioned thoroughly about its decision.
In retrospect, such growing pains were unavoidable, considering the scope of the technological achievement and the importance of networking and communications infrastructure to academic (and commercial) users. Thus the upgrade of the NSFNET backbone service to T3 was not only a technological and organizational challenge of the highest order. It also precipitated a greatly needed, though contentious, community dialogue about the evolution of the communications infrastructure that had come to mean so much to the research and education community and, increasingly, to society as a whole.
The NSFNET partnership reached and crossed many technology milestones over the duration of the agreement. “We defined the frontier and won it together. As a federally funded research project, few can claim such broad support and pervasive impact,” says Paul Bosco.
The most fundamental achievement was the construction of a high-speed national network service, and successfully evolving that service from T1 to T3 speeds. According to Mark Knopper, the NSFNET team’s overall achievement helped to “scale networking to a huge system that covers the world. Having a single backbone in place, with a centralized administration and technical group, allowed us to do that.” The NSFNET was the first large-scale, packet-switched backbone network infrastructure in the United States, and it encouraged similar developments all over the world. Jessica Yu points out:
“The NSFNET backbone glued together the international networks–almost all traffic from abroad would transit the NSFNET. With that kind of connectivity available, other countries were prompted to build their own networks so they could get connected too. Many of them used NSFNET’s three-tiered structure–backbone, regionals, campus networks–when they started to build their own networks.”
The NSFNET team played a leading role in advancing high-speed switching and routing technology. “We developed and migrated across four or five generations of routing technology through a seven-and-one-half-year period of extraordinary growth,” says Paul Bosco. “The project demanded end-to-end systems engineering with prototype deployment, but resulted in real operational experience.” Both the T1 and T3 nodes featured distributed embedded control architectures, while the T3 nodes contained powerful adapter cards that switched packets without any main system intervention, greatly increasing packet switching efficiency and making very high data throughput possible.
Many important improvements to the NSFNET backbone service went unnoticed by users. “There were many transitions,” says Hans-Werner Braun, “many times when we installed faster serial cards or upgraded other hardware and software to enrich the backbone topology. All this was invisible to users–what they saw was a network that was able to keep up with growing demands for bandwidth.”
NSFNET project partners also contributed to the development of improved Internet routing architecture and protocols. “The NSFNET backbone from day one defined a routing architecture which provided a clear demarcation between interior routing on the backbone, and exterior routing between the backbone and the attached regional and midlevel networks,” explains Yu. This separation allowed for efficient troubleshooting of network routing problems and preserved backbone routing integrity.
IBM and Merit together with Cisco Systems also undertook initial development of the Border Gateway Protocol (BGP), now used extensively on the Internet. BGP was created in true Internet style, according to Elise Gerich, “through Kirk Lougheed of Cisco and Yakov Rekhter of IBM sitting down outside an IETF meeting, working to come up with the concept for BGP and writing the specs on a napkin. It was an enormous step forward in the routing development area.”
After extensive testing, Yu and other Merit staff deployed BGP on the T1 NSFNET backbone and worked with regional engineers to update their software, so they could use BGP instead of EGP to exchange reachability information with the NSFNET. The NSFNET backbone service was thus the first network to deploy BGP in an operational environment.
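BGP itself is a substantial protocol, but its core path-vector idea is simple: a route is advertised together with the list of autonomous systems it has traversed, and a router rejects any advertisement whose AS path already contains its own AS number, which prevents routing loops. A toy sketch of that idea (AS 690 was the NSFNET backbone's AS number; the other AS numbers here are made up):

```python
# Toy illustration of BGP-style path-vector loop prevention.
# Real BGP (RFC 1105 and its successors) is far more involved.

def accept_route(my_asn, as_path):
    """Reject a route whose AS path already contains our own AS number."""
    return my_asn not in as_path

def advertise(my_asn, as_path):
    """Prepend our AS number before passing the route on to a neighbor."""
    return [my_asn] + as_path

# AS 690 (the NSFNET backbone) hears a route that originated in AS 200
# and passed through AS 100:
path_from_neighbor = [100, 200]
assert accept_route(690, path_from_neighbor)

# Re-advertising it, AS 690 prepends itself:
outgoing = advertise(690, path_from_neighbor)   # [690, 100, 200]

# If the route ever loops back, AS 690 sees itself in the path and drops it:
assert not accept_route(690, outgoing)
```

This explicit path information is what distinguished BGP from the hop-count-style reachability exchange of EGP, and is one reason it scaled to the backbone's growing mesh of attached networks.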
Experience gained in operating the NSFNET backbone played a major role in the development of another important routing strategy, Classless Inter-Domain Routing, or CIDR. Yu was one of the architects of CIDR, in conjunction with other members of the Internet Engineering Task Force. “Our data showed that the size of the NSFNET routing tables was doubling every ten months, and would soon become unmanageable,” she says. Yu, Elise Gerich, and Sue Hares of Merit helped convince the IETF community that a broad-based solution was needed–one that not only tackled the problem of the swiftly dwindling IP address space, but also tamed the growth of the Internet routing tables. CIDR “removes the notion of IP address classes, which reduces the size of the routing tables and enables more efficient use of IP address space,” says Yu. Technical staff from the partnership deployed CIDR on the NSFNET backbone service in 1994; CIDR is now widely used in the Internet.
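The routing-table savings Yu describes come from aggregation: with explicit prefix lengths instead of fixed A/B/C classes, contiguous networks collapse into a single routing entry. A small sketch using Python's standard `ipaddress` module (the addresses are documentation-range examples, not real NSFNET assignments):

```python
import ipaddress

# Four adjacent "class C"-sized /24 networks...
nets = [ipaddress.ip_network(f"198.51.{i}.0/24") for i in range(100, 104)]

# ...collapse into a single /22 supernet under CIDR: one routing-table
# entry where classful routing required four.
aggregated = list(ipaddress.collapse_addresses(nets))
print(aggregated)   # [IPv4Network('198.51.100.0/22')]
```

Applied across thousands of attached networks, this kind of aggregation is what slowed the routing-table doubling the Merit data had revealed.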
In these and many other ways, the technical teams from Merit, IBM, MCI, and ANS working on the NSFNET backbone project demonstrated the effectiveness of their partnership. Their collaborative efforts to create, implement and constantly improve high-performance networking technology in the context of a rapidly changing technological environment constitute a historic chapter in the development of the worldwide Internet.