Birth of the ARPAnet :: 1969
BBN and the IMPs
ARPA's RFQ for the IMPs, drafted by Larry Roberts and Barry Wessler, is released in August 1968. [ISOC] [Roberts, Net Chronology] [History of Telenet p 29]
September: Twelve proposals were received. [Hauben] [Roberts, Net Chronology] [Walden 1990 p 5 (helped prepare BBN's bid)] [DARPA 1981 III-17]
"It was an extremely difficult problem to produce a proposal like that in a very short time, because the government has the very natural tendency to spend an enormous amount of time getting an RFP out. Then they breathe a big sigh of relief, and they want the people to write the proposal in zero time." [Frank Heart 1990]
The other finalist bid for the ARPA contract was from Raytheon. [Nerds2.0 p. 79] [Vanity Fair]
Jacobi Systems in Santa Monica was a bidder. [Cerf, Oral History 1990] Steve Crocker and Vint Cerf worked for Jacobi Systems. [Cerf, Oral History 1990] [Vint Cerf and Steve Crocker, Nov. 2011 Smithsonian American Art Museum lecture]
IBM and AT&T did not bid on the contract. [Segaller 62] (Note that AT&T may have been prohibited from building the IMPs pursuant to the MFJ.) IBM apparently considered the project "impossible" and too expensive. Two decades later, IBM had learned from its mistake and was making great concessions in order to be a part of NSFNET. [Segaller 72] [Roberts, Computer Science Museum 1988] ("IBM and CDC 'no bid' it because, they told me: 'If we used Model 50s at these sites, you're going to go broke, and we think you're crazy.'")
Severo Ornstein of BBN quote: "I talked to Frank about it one night and he said, "Well, here's this RFQ, from ARPA. They want to build a network and so why don't you take it home and look at it?" And I did and I thought about it a little bit overnight and it seemed as though this was a fairly straightforward thing to do. It was fairly well described in the RFQ. And so it seemed we could build it. And I went in and told Frank in words that I guess have become somewhat immortalized that sure, we could build it, "But I had no idea why anybody would want such a thing." [Nerds2.0 p 76]
In January 1969 ARPA awarded the IMP contract to BBN to build the first interface message processors (IMPs) (to be packet routers in a "stupid" sub-network). [Roberts, Net Chronology] [Hauben] [History of Telenet p 29] People at BBN who had prepared the proposal included Frank Heart, Robert Kahn, Will Crowther, Dave Walden, Hawley Rising, Ben Barker, and Severo Ornstein. [Heart 1990] [Vanity Fair (quoting Larry Roberts, "I chose [BBN] based on the team structure and the people. I just felt that the BBN team was less structured. There wouldn’t be as many middle managers and so on.")]
Sen. Ted Kennedy sent BBN a telegram informing BBN that they had won the contract and said that they were "to be congratulated on winning the contract for the interfaith message processor," congratulating BBN on its ecumenical efforts. [Nerds2.0 p 80] [No Credit] [How the Web was Born p 27] [Roots of the Internet] [Vanity Fair]
"BBN designed the IMP to accommodate no more than 64 computers and only one network." [Kleinrock] The BBN team was headed by Robert Kahn. [Kahn] [Babbage 22 (installation of first IMP)] The IMPs were to be delivered to UCLA, SRI, UCSB, and the University of Utah. BBN was located near Honeywell in Boston and would use the Honeywell H-516 for the first IMPs. [Abbate p 57] [RFC 1000] [Kleinrock 1996] [Hauben] [Heart 1990]
"These sites were running a Sigma 7 with the SEX operating system, an SDS 940 with the Genie operating system, an IBM 360/75 with OS/MVT (or perhaps OS/MFT), and a DEC PDP-10 with the Tenex operating system. Options existed for additional nodes if the first experiments were successful." [RFC 1000]
BBN Report No 1822, Interface Message Processor: Specifications for the Interconnection of a Host and an IMP (May).
[DARPA 1981 II-15 ("A central goal of the network was to provide resource sharing between remote installations in such a way that a local user could employ a remote resource without degradation. In particular, this required that the subnetwork be sufficiently reliable, have sufficiently low delay, have sufficiently great capacity and have sufficiently low cost, that remote use would be "as attractive" as local use. These objectives translated into a major technical problem for the switching node itself to provide low delay and high capacity at modest cost. The approach taken was to select a mini computer for the switching nodes whose I/O system was very efficient and to write very carefully tailored programs in machine language to optimize the capacity and the low delay of the data path in the switching node. Great attention was paid to minimizing the operating time of the inner loops of these programs.")]
Routing: "In a non-fully connected network, an important problem is the decision process by which each node decides how to route information to reach any particular destination. This is a difficult theoretical problem and there are many different approaches, including fixed routing, random routing, centrally controlled routing, and various forms of distributed adaptive routing. The approach taken was to use a distributed adaptive traffic routing algorithm which estimates on the basis of information from adjacent nodes the globally proper instantaneous path for each message in the face of varying input traffic loads and local line or node failures. Each IMP keeps tables containing an estimate of the output circuit corresponding to the minimum delay path to each other IMP, and a corresponding estimate of the delay. Periodically, each IMP sends its current routing estimates to its neighbors; whenever an IMP receives such a message it updates its internal estimates. Thus, information about changing conditions is regularly propagated throughout the network, and whenever a packet of traffic must be placed on the queue for an output circuit, the IMP uses its latest estimate of the best path." [DARPA 1981 II-18]
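The delay-estimate exchange described above is, in modern terms, a distance-vector algorithm. A minimal sketch (the node names and link delays here are invented for illustration, not taken from the ARPANET topology):

```python
# Sketch of the ARPANET-style distributed adaptive routing update: each node
# keeps, per destination, an estimated delay and the neighbor (output circuit)
# on the minimum-delay path, and merges the estimates its neighbors report.

LINK_DELAY = {("A", "B"): 5, ("B", "A"): 5, ("B", "C"): 3, ("C", "B"): 3}

def make_table(node, nodes):
    """Initial routing table: delay 0 to self, 'infinite' elsewhere."""
    INF = float("inf")
    return {dest: (0 if dest == node else INF, None) for dest in nodes}

def receive_update(table, node, neighbor, neighbor_table):
    """Merge a neighbor's periodic routing message into our table."""
    hop = LINK_DELAY[(node, neighbor)]
    for dest, (their_delay, _) in neighbor_table.items():
        candidate = hop + their_delay
        if candidate < table[dest][0]:
            table[dest] = (candidate, neighbor)  # better path via this neighbor

nodes = ["A", "B", "C"]
tables = {n: make_table(n, nodes) for n in nodes}
# Two rounds of periodic exchanges; A learns of C via B on the second round.
receive_update(tables["A"], "A", "B", tables["B"])
receive_update(tables["C"], "C", "B", tables["B"])
receive_update(tables["B"], "B", "A", tables["A"])
receive_update(tables["B"], "B", "C", tables["C"])
receive_update(tables["A"], "A", "B", tables["B"])
print(tables["A"]["C"])  # (8, 'B'): best known path from A to C runs via B, delay 5+3
```

As the DARPA history notes, repeating these exchanges propagates changing line conditions through the whole network; a failed line would be modeled here as its delay going to infinity, causing neighbors to route around it on subsequent updates.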
End-to-End Design: "The BBN design took the idea of inserting a communications processor between the host and the network to a logical extreme: it specified that the IMP be used only for communications functions, and that there be maximum logical separation between IMP and host. Thus, the design did not provide for IMP repacking of binary messages on behalf of a host or for doing character conversion for a host. Further, the design precluded all host programming of the IMP and any control of the IMP by regular hosts. The initial subnet design also specified minimal control messages between an IMP and its hosts, many fewer than envisioned in the RFP. As the network later developed, it proved useful to add additional such control messages." [DARPA 1981 III-48]
BBN served as the NOC for the ARPANET. In the 1970s, BBN had an office in Rosslyn Virginia across the street from ARPA. In 1970, ARPA's contract with BBN to build IMPs is extended. [Nerds p 100]
March 1970: ARPANet connects from West Coast to East Coast at BBN. [Roberts, Net Chronology][Abbate p 64]
1971 TIPs are introduced
In 1981, BBN on behalf of DARPA published A History of the Internet: The First Decade, Report No. 4799, Prepared for DARPA by BBN, p. II-11 (1981) ("A competitive procurement was planned for the selection of the contractor to design the switching nodes and act as general systems contractor. An RFP was prepared and issued in July 1968. Bolt Beranek and Newman Inc. (BBN) was selected as the systems contractor for the design of the subnetwork and the switching nodes. BBN subcontracted with Honeywell for the switching node hardware itself. Over the life of the ARPANET program from January 1969 until the transfer to DCA in July of 1975, BBN served as the systems contractor for the design, implementation, operation and maintenance of the subnetwork; BBN is still serving that role under the aegis of the Defense Communications Agency (DCA).")
Leonard Kleinrock recalled:
The computer guys would say, "Communication guys, will you please give us good data communications." The communications guys would turn around and say, "What are you talking about, the United States is a copper mine. You've got wires all over the place; use them." The computer guys would say, "No, you don't understand. It takes half a minute to set up a call, and your charge is for a minimum of three minutes. All I want to do is send a hundred milliseconds of data." These guys would turn around back to the computing guys and say, "Go away little boy, there's no revenue there." So the little boys went away, and they created packet switching. [Babbage 27] [See also Gaudin]
Sept. 1969: Larry Roberts succeeds Taylor as head of IPTO. [Roberts, Net Chronology] Robert Taylor will end up at Xerox PARC. Pressure from Vietnam and Congress is redirecting ARPA's mission towards DoD's military needs. [Almanac] [Markoff Dec. 20, 1999]
See FCC Computer Inquiry I (questioning whether telecommunications providers are meeting the needs of computer services).
"Since nobody was going to give the agency a few billion dollars to string its own wires across the country, ARPA would have to move the data through AT&T's telephone system. Unfortunately, that system's basic dial up process was far too cumbersome and slow for computer-speed communications. So instead, Roberts decided ARPA would make a series of long-distance calls, and just never hang up. More precisely, the agency would go to AT&T and lease a series of high-capacity phone lines linking one ARPA site to the next, so that the computers would always be connected." ARPANet connected the IMPs with leased 50 kbps AT&T long-distance lines. [Waldrop 80] [Abbate p 56] [Salus p 35] [Nerds p 82] [NIST 1992 p 4] [Roberts Wessler 1970 ("The IMPs are connected together via 50 Kbps data transmission facilities using common carrier (ATT) point to point leased lines... The communications circuits for this network will cost $49K per node per year and the network can support an average traffic of 16 KB per node.")] The AT&T lines, however, presented a limitation on the network: "Officially, only ARPA contractors could utilize the ARPANET, but this was extended whereby “authorized users” were permitted to use the network. Other parties were not permitted to use the network because the AT&T tariffs for the underlying leased communications lines did not permit shared usage." [History of Telenet p 28]
Larry Roberts, Multiple Computer Networks and Intercomputer Communications, June 1967: "The common carriers currently provide 2 or 4 wire, 2 kc lines between two points either dialed or leased, as well as higher band width leased lines and lower band width teletype service. Considering the 2 kc offering, since it is the best dial up service, the use of 2 wire service appears to be very inefficient for the type of traffic predicted for the network. In the Lincoln - SDC experimental link the average message length appears to be 20 characters. Each message must be acknowledged so that the originator may retransmit or free the buffer. Thus the line must be reversed so often that the reversal time will effectively halve the transmission rate. Therefore, full duplex, four-wire service is more economic and simpler to use.

"Current automatic dialing equipment requires about 20 seconds to obtain a connection and a similar time to disconnect. Thus the response time is much too long assuming a call is made only after a message arrives and that the line is disconnected if no other messages arrive soon. It has proven necessary to hold a line which is being used intermittently to obtain the one-tenth to one second response time required for interactive work. This is very wasteful of the line and unless faster dial up times become available, message switching and concentration will be very important to network participants."
"DECCO was able to handle all the contractual details with the common carriers for circuit leases. Most of the required 50 kilobit circuits used in the ARPANET were leased through DECCO from AT&T, but a small number of circuits were leased from other carriers such as General Telephone. In addition, DARPA arranged for a special point of contact in AT&T (long lines), which greatly facilitated the interactions between the network system contractor, DARPA, and AT&T." [DARPA 1981 II-10] See also [DARPA 1981 III-32 The Telephone Companies]
"It was decided that 50 Kb communications lines would be used because of the vastly improved response time which could be obtained with these lines as opposed to the previously proposed 2.4 Kb lines. The 50 Kb lines were to be leased, eliminating the slow dial-up procedure. The nature of the telephone tariffs available to the government made use of 50 Kb. lines affordable." [DARPA 1981 III-16]
"A critical necessity for a resource sharing computer network was to provide reliable communication and one component of such reliability was an ability to work through the expected phone line errors on the 50 kilobit circuits. The approach taken was to design special checksumming hardware at the transmitting and receiving end of each 50 kilobit circuit in the network. As part of the switching node transmission procedure, a powerful 24 bit cyclic checksum is appended to every packet of information to be transmitted. Then, at the receiving node, the checksum is recomputed in hardware and compared with the transmitted checksum. If there is no error, an acknowledgment (with its own checksum) is sent back to the transmitting node. If there is an error, no acknowledgment is sent and the packet will be retransmitted." [DARPA 1981 II-14]
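The per-hop check-and-acknowledge scheme can be sketched in a few lines. The generator polynomial below is an illustrative 24-bit CRC polynomial; it is not claimed to be the one BBN's hardware actually used, and the frame layout is invented for the example:

```python
# Sketch of the IMP-to-IMP reliability scheme: append a 24-bit cyclic
# checksum to each packet, recompute it at the receiver, and acknowledge
# only on a match. Silence (no ACK) triggers retransmission by the sender.

POLY = 0x864CFB  # example polynomial from the CRC-24 family (an assumption)

def crc24(data: bytes) -> int:
    """Bitwise 24-bit cyclic redundancy check over `data`."""
    crc = 0
    for byte in data:
        crc ^= byte << 16
        for _ in range(8):
            crc = ((crc << 1) ^ POLY) if crc & 0x800000 else (crc << 1)
            crc &= 0xFFFFFF
    return crc

def transmit(packet: bytes) -> bytes:
    """Sender side: append the 3-byte checksum to the packet."""
    return packet + crc24(packet).to_bytes(3, "big")

def receive(frame: bytes):
    """Receiver side: recompute and compare; deliver on match, drop on error."""
    packet, sent_crc = frame[:-3], int.from_bytes(frame[-3:], "big")
    if crc24(packet) == sent_crc:
        return packet  # deliver, and an acknowledgment would go back
    return None        # no ACK is sent; the sender will retransmit

frame = transmit(b"hello IMP")
assert receive(frame) == b"hello IMP"          # clean line: delivered
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
assert receive(corrupted) is None              # line error: dropped, no ACK
```

The key design point, as in the quote above, is that a corrupted packet produces no acknowledgment at all rather than a negative one; the retransmission timer on the sending IMP does the rest.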
Frank Heart: "the phone company had never been able to tell when a phone line was about to fail. Their technology for dealing with phone lines was when someone called up and said, "I can't talk over the phone," they would send someone out to figure out what was wrong with the phone line. The IMPs watched the phone lines all the time, all the time, and they could tell when a line was degrading, not just when it was failing. So there were amusing instances when somebody here would call up the phone company office in California, and tell them that the phone line between Los Angeles and San Francisco was about to break. And the phone company guy, after first thinking we were calling as a joke, would then say, "How could you possibly know that in Boston?" A lot of that went on." [Heart p 27 1990]
1970: 230.4 kbps circuit tested between IMPs. [DARPA 1981 III-53]
CompuServe is founded by Jeffrey Wilkins. [About Compuserve]
SITA "high level network is implemented. This leads to the development of the world's first worldwide packet switching network dedicated to business traffic." [SITA History] "The SITA network for air line reservations was surprisingly advanced in concept in the mid-1960s, but details about SITA were generally not known in the U.S. computing community." [DARPA 1981 II-5]
ARPANet is Born Fall 1969
The first node of the ARPANet, an Interface Message Processor (IMP) built by BBN, was delivered on August 30. There had been rumors that BBN would be late with the delivery, giving those on the receiving end the false perception that they had additional breathing room in which to prepare. But BBN got its work done and airlifted the hulk of an IMP out to UCLA on time. [Roberts, Computer Science Museum 1988 (Pelkey: "The first one went in September '69 at UCLA, and was delivered on-time Labor Day weekend, which caused consternation at UCLA to hear their side of the story")]
Host to Host
1983 - ??
1996 - ??
The first IMP was installed in September 1969 at UCLA. It was a Honeywell DDP-516. The operating system took 6K of memory. [Picture of Leonard Kleinrock with IMP1 at UCLA] [Roberts, Net Chronology] [Salus p 35] [RFC 1000 (UCLA was counting on a delay by BBN, which was having timing troubles. BBN fixed the problem and air shipped the IMP)] [Hauben] "They were each the size of a refrigerator and cost about $100,000 in 1969 dollars." [RFC 2555] [Kleinrock 1996] [Cerf, Oral History 1990] It connected to a host computer, a Sigma 7. [Cerf, Oral History 1990]
(This minicomputer had just been released in 1968 and Honeywell displayed it at the 1968 Fall Joint Computer Conference, where Kleinrock saw the machine suspended by its hooks at the conference; while running, there was this brute whacking it with a sledge hammer just to show it was robust. Kleinrock suspects that that particular machine is the one that was delivered by BBN to UCLA.) As it turns out, BBN was running two weeks late (much to Kleinrock's delight, since he and his team badly needed the extra development time); BBN, however, shipped the IMP on an airplane instead of on a truck, and it arrived on time. [Kleinrock 1996]
Present that day were Kleinrock and his team, BBN, Honeywell, Scientific Data Systems, AT&T long lines, GTE (the local telephone company), and ARPA. [Kleinrock 1996] Graduate students involved in the project included Vint Cerf, Steve Crocker, Jon Postel... [Cerf, Oral History 1990] [Kleinrock Slides 1999, UCLA to Be First Station in Nationwide Computer Network, UCLA Office of Public Affairs July 3, 1969 "As of now, computer networks are still in their infancy," says Dr. Kleinrock, "But as they grow up and become more sophisticated, we will probably see the spread of 'computer utilities', which, like present electric and telephone utilities, will service individual homes and offices across the country.")]
The hosts interconnected with host-to-host software. "The host-to-host interface was awful to begin with." [Babbage 23] This would be replaced by NCP, which would then be replaced by TCP/IP. [Roberts, Computer Science Museum 1988 ("Steve said, and it was a difficult period too, in terms of getting the host-to-host issues worked out. He said at some level, it was hard to get other people to take it seriously, because some people really didn't want to interface to the network, because they wanted to protect their own private interests to get more computing funding at their own site, so it took a lot of politics and arm twisting to get people's cooperation.")]
In the Fall, Bob Kahn comes out to UCLA to examine the IMP's performance, and meets Vint Cerf [Cerf, Oral History 1990]
Leonard Kleinrock and the first Interface Message Processor
Second IMP and the First Message: Oct. 29, 10:30 pm:
"A month later the second node was added (at Stanford Research Institute) and the first Host-to-Host message ever to be sent on the Internet was launched from UCLA. This occurred in early October when Kleinrock and one of his programmers proceeded to "logon" to the SRI Host from the UCLA Host. The procedure was to type in "log" and the system at SRI was set up to be clever enough to fill out the rest of the command, namely to add "in" thus creating the word "login". A telephone headset was mounted on the programmers at both ends so they could communicate by voice as the message was transmitted. At the UCLA end, they typed in the "l" and asked SRI if they received it; "got the l" came the voice reply. UCLA typed in the "o", asked if they got it, and received "got the o". UCLA then typed in the "g" and the darned system CRASHED! Quite a beginning. On the second attempt, it worked fine!" [Kleinrock, Net History] [Kleinrock 1996][Kleinrock Internet's First Words]
The Second IMP was delivered to SRI in the beginning of October. [RFC 1000][Kleinrock 1996]
By the end of the year, there were four nodes: [Hauben]
- UCLA (Vint Cerf - PhD student, Steve Crocker - PhD student, and Jon Postel with Leonard Kleinrock) Installed Sept 1, 1969
- Designated the Network Measurement Center. The network itself was part of the experiment and therefore a significant endeavor for decades would be measuring network performance. The NMC would test, measure, and refine the network. [Abbate p 58] [Kleinrock] [Kleinrock 1996] [Cerf, Oral History 1990] [DARPA 1981 II-11]
- SRI (Doug Engelbart) (ARPANet's Network Information Center [ISOC] [DARPA 1981 II-11]) Installed Oct. 1, 1969 [Roberts, Net Chronology]
- UCSB (Glen Culler), installed Nov. 1, 1969, and
- University of Utah, Salt Lake City (Dave Evans, Ivan Sutherland), installed Dec. 1, 1969
Apollo 11 Goes to the Moon, with Neil Armstrong stepping on the moon July 20. [Apollo] Of the two original ARPA projects, one made headlines while the public knew nothing about the other - both radically changed the world.
Nov. 21 Larry Roberts visits UCLA. Telnet connection to SRI is demonstrated. [RFC 1000]
[DARPA 1981 III-76]
40th Anniversary of the Net - October 29, 1969
Computer History Museum
ARPAnet - the team behind the internet Created by Arlington County 2011
See also FCC :: Customer Premises Equipment (which affirmed individual's right to attach devices to the end of the telephone network, a necessary precondition to being able to attach IMPs and then modems to the network)
Alan Kay on ARPA: "90 percent of all good things that I can think of that have been done in computer science have been funded by that agency. Chances that they would have been funded elsewhere are very low. The basic ARPA idea is that you find good people and you give them a lot of money and then you step back. If they don't do good things in three years they get dropped - where 'good' is very much related to new or interesting." [Spacewar]
ARPA Project Multiple Access Computer story by Alan Kay: "They had a thing on the PDP-1 called 'The Unknown Glitch'. They used to program the thing either in direct machine code, direct octal, or in DDT. In the early days it was a paper-tape machine. It was painful to assemble stuff, so they never listed out the programs. The programs and stuff just lived in there, just raw seething octal code. And one of the guys wrote a program called 'The Unknown Glitch,' which at random intervals would wake up, print out I AM THE UNKNOWN GLITCH. CATCH ME IF YOU CAN, and then it would relocate itself somewhere else in core memory, set a clock interrupt, and go back to sleep. There was no way to find it." [Spacewar]
Old Boys Network: The informal culture of ARPANet has been described as an Old Boys Network. Those who were on the inside were said to have the advantage in receiving ARPA funding; those on the outside were not so advantaged. [Abbate p 55] This informal culture of the community would continue into the 90s, when the Internet was privatized, and create problems. As the Internet moved from private network to public network, questions about arrangements, authority, and structures created consternation that would be echoed for years in such forums as the COM-PRIV discussion group.
Myth: The Internet Was Designed to Survive Nuclear War.
Fact: This was the separate work of luminary Paul Baran, who worked for RAND under contract with the USAF. The ARPANet was built by Larry Roberts at ARPA.
"[I]n the early 1960s, when computers were scarce, expensive, and cumbersome, using a computer for communications was almost unthinkable. Even the sharing of software or data among users of different computers could be a formidable challenge. Before the advent of computer networks, a person who wanted to transfer information between computers usually had to carry some physical storage medium, such as a reel of magnetic tape or a stack of punch cards, from one machine to the other." [Abbate p. 1]
|One thing that Baran, Davies, and Roberts had in common was the insight that the capabilities of a new generation of small but fast computers could be harnessed to transcend the limitations of previous communications systems. [Abbate p. 40]|
Like distant islands sundered by the sea,
- Vint Cerf, Requiem for the ARPANET
ARPANet Design Objectives / Goals / Purpose
1967: ARPA initiates planning of the ARPANet. Design objectives of ARPANet included
- interconnecting different researchers and research computers,
- Data Sharing,
- Load Sharing of processing power (where one mainframe was busy, processing could be shifted to a different mainframe with available capacity),
- Program Sharing,
- Remote Service,
- a general purpose open platform reducing the need for duplication, and
- Message Service: communications between different research centers (a minor objective that became a major benefit and use). See Email History.
[See NIST 1992 p 4 ("Sharing of computing resources among researchers was the primary objective. . . Despite heavy military involvement, the resulting ARPANET turned out to be a fairly open network. It provided a test bed for the development of communication protocols to support functionality such as transmission of graphical data, remote login, file transfer, and electronic mail.")] [Roberts 1967 ("The advantages which can be obtained when computers are interconnected in a network such that remote running of programs is possible, include advantages due to specialized hardware and software at particular nodes as well as increased scientific communication.")] [Abbate p. 44, 96]
History of the ARPANET 1981: "The DARPA program had the following objectives: (1) To develop techniques and obtain experience on interconnecting computers in such a way that a very broad class of interactions are possible, and (2) To improve and increase computer research productivity through resource sharing... The most important criterion for the type of network interconnection desired was that any user or program on any of the networked computers be able to utilize any program or subsystem available on any other computer without having to modify the remote program." [DARPA 1981 II-2]
ARPAnet Plan 1967
"At the meeting it was agreed that work could begin on the conventions to be used for exchanging messages between any pair of computers in the proposed network, and also on consideration of the kinds of communications lines and data sets to be used. In particular, it was decided that the inter-host communication 'protocol' would include conventions for character and block transmission, error checking and retransmission, and computer and user identification. Frank Westervelt, then of the University of Michigan, was picked to write a position paper on these areas of communication, an ad hoc 'Communication Group' was selected from among the institutions represented, and a meeting of the group scheduled." (ARPA draft, III-26)
[Greenstein 25 ("DARPA's administrators wanted innovations in the form of ideas, new designs, and new software. The inventive goals were large and ambitious, as well as open ended, and that meant the opportunity could not be addressed by a single organization, or by the insight of one lone genius. The inventors and DARPA administrators also understood the goals broadly and did not presume to know what specific designs and applications would suit their needs. They broadly funded pie-in-sky research as well as inventions addressing pragmatic problems with anticipated military applications.")]
CSTB, Realizing the Info Future p. 21 1994 "The purpose of the Internet, the largest packet switching network in the world, is to provide a very general communication infrastructure targeted not to one application, such as telephony or delivery of TV, but rather to a wide range of computer-based services, such as electronic mail (e-mail), information retrieval, and teleconferencing."
Resource Sharing: Instead of paying for duplicated resources spread isolated at different universities, ARPA's objective was to network those computers in order to share resources and to share money.
About 1966, Mr. [Robert] Taylor recalls, his office in the Pentagon had a terminal connected to the time-sharing community at MIT, a terminal connected to a different kind of computer at the University of California at Berkeley, and a third terminal connected to the Systems Development Corp. in Santa Monica. "To talk to MIT I had to sit at the MIT terminal. To bring in someone from Berkeley, I had to change chairs to another terminal," he says. "I wished I could connect someone at MIT directly with someone at Berkeley. Out of that came the idea: Why not have one terminal that connects with all of them? "That's why we built ARPAnet," he says. [Almanac] [See also Taylor quoted in Vanity Fair]
Kleinrock: "The interesting thing is, as I recall, that part of the motivation for this network is the fact that in 1967, in the mid 1960s DARPA was heavily supporting a lot of people doing work on time-sharing. And every time an investigator got a new contract, the first thing he wanted was a computer - the best and biggest. Pretty soon Larry said, "This is getting ridiculous," because each facility they created evolved into a specialized kind of facility, like the graphics capability at Utah, the database capability at SRI, and the simulation capability at UCLA. So Larry came up with the concept of a resource sharing network, where there would be specialized sites, and if you wanted that special capability, you connect to that site to get it, or you would pull back data or programs and use them locally. That was one of his motivating reasons, namely, to reduce the number of time-sharing systems he had to support." [Babbage 7]
"Currently, each computer center in the country is forced to recreate all of the software and data files it wishes to utilize. In many cases this involves complete reprogramming of software or reformatting the data files. This duplication is extremely costly and has led to considerable pressure for both very restrictive language standards and the use of identical hardware systems." [Roberts Wessler 1970]
"The goal of the program was to advance the state-of-the-art in computer networking. The resultant network successfully provided efficient communications between heterogeneous computers, allowing convenient sharing of hardware, software, and data resources among a varied community of geographically dispersed users." [ARPANET Brochure 1985, p. 3]
"The ARPANET originated as a purely experimental network in late 1969 under a research and development program sponsored by DARPA to advance the state-of-the-art in computer internetting. The network was designed to provide efficient communications between heterogeneous computers so that hardware, software, and data resources could be conveniently and economically shared by a wide community of users. " [ARPANET Brochure 1979, p. 1]
"This objective was an entirely new and different approach to an extremely serious problem which existed throughout both the Defense Department and society at large. The many hundreds of computer centers in the Defense Department and the many other thousands of computer centers in the private and public sectors operate almost completely autonomously. Each computer center is forced to recreate all the software and data files that it wishes to utilize. In many cases this involves the complete reprogramming of software or reformatting of data files. This duplication and redundant effort is extremely costly and time consuming. In fiscal year 1969 DARPA estimated that such duplicative efforts more than double the national costs of creating and maintaining the software. There had been other completely different attempts to address this problem, such as attempts at language standards for computers, attempts at standardizing the types of hardware, and attempts at automatic translation between computer languages. Although each such approach had some value and utility, the problems of trying to share computer software resources or files was truly enormous." [DARPA 1981 II-3]
As with most technology transitions, not all in the community were enthused. Why, after all, should a university with the latest and greatest mainframe have any incentive to share it with others? The quality of one's computer facilities was a boasting right for universities. In order to motivate reluctant participants, further ARPA funding was conditioned upon participation in ARPANET. [Abbate p 55] [Kleinrock 1996 ("most of the ARPA - supported researchers were opposed to joining the network for fear that it would enable outsiders to load down their "private" computers")]
"If you had to give the single most important reason why it was as successful as it was, it was that Larry Roberts had a great deal of authority and freedom and was able to control not only the contractors who were working on it, like BBN, but also the users, since he was supplying all their money. In other words, all the sites at which the IMPs were installed were research sites being supported by DARPA. So he could get their cooperation by the simplest of techniques: he was supplying the money." [Heart 1990]
The western ARPA-funded universities gave Larry Roberts less resistance to the idea of sharing over a network, which is why the first four ARPANET nodes were located out west. [Steve Crocker, Nov. 2011, Smithsonian American Art Museum talk]
[Roberts, Computer Science Museum 1988 ("the problem that most of them had was that if they agreed they would not do as well in their computer funding. They would rather have it themselves. So we just convinced them all they weren't going to get any computer funding anymore unless they cooperated.")]
By 1972, the University of Illinois Center for Advanced Computation was acquiring 90% of its computer services remotely over the ARPANET, at 40% of the cost of provisioning those services locally. [Abbate p 99]
General Purpose Network / Open / End to End
Larry Roberts sought to avoid this duplication by creating a single general-purpose computer network. In 1970 he stated,
"There are many applications of computers for which current communications technology is not adequate. One such application is the specialized customer service computer systems in existence or envisioned for the future; these services provide the customer with information or computational capability. If no commercial computer network service is developed, the future may be as follows:
"One can envision a corporate officer in the future having many different consoles in his office: one to the stock exchange to monitor his own company's and competitor's activities, one to the commodities market to monitor the demand for his product or raw materials, one to his own company's data management system to monitor inventory, sales, payroll, cash flow, etc., and one to a scientific computer used for modeling and simulation to help plan for the future. There are probably many people within that same organization who need some of the same services and potentially many other services. Also, though the data exists in digital form on other computers, it will probably have to be keypunched into the company's modeling and simulation system in order to perform analyses. The picture presented seems rather bleak, but is just a projection of the service systems which have been developed to date.
"The organization providing the service has a hard time, too. In addition to collecting and maintaining the data, the service must have field offices to maintain the consoles and the communications multiplexors adding significantly to their cost. A large fraction of that cost is for communications and consoles, rather than the service itself. Thus, the services which can be justified are very limited.
"Let us now paint another picture given a nationwide network for computer-to-computer communication. The service organization need only connect its computer into the net. It probably would not have any consoles other than for data input, maintenance, and system development. In fact, some of the service's data input may come from another service over the Net. Users could choose the service they desired based on reliability, cleanliness of data, and ease of use, rather than proximity or sole source.
"Large companies would connect their computers into the net and contract with service organizations for the use of those services they desired. The executive would then have one console, connected to his company's machine. He would have one standard way of requesting the service he desires with a far greater number of services available to him.
"For the small company, a master service organization might develop, similar to today's time-sharing service, to offer console service to people who cannot afford their own computer. The master service organization would be wholesalers of the services and might even be used by the large companies in order to avoid contracting with all the individual service organizations.
"The kinds of services that will be available and the cost and ultimate capacity required for such service is difficult to predict. It is clear, however, that if the network philosophy is adopted and if it is made widely available through a common carrier, that the communications system will not be the limiting factor in the development of these services as it is now." [Roberts Wessler 1970]
See also Lyman Chapin, Chris Owens, Interconnection and Peering among Internet Service Providers: A Historical Perspective, An Interisle White Paper (2005) ("The concept of application independence—that the network should be adaptable to any purpose, whether foreseen or unforeseen, rather than tailored specifically for a single application (as the public switched telephone network had been purpose-built for the single application of analog voice communication).")
Sharing Data: In 1970, Larry Roberts described the ARPANET as follows:
"The data sharing between data management systems or data retrieval systems will begin an important phase in the use of the Network. The concept of distributed databases and distributed access to the data is one of the most powerful and useful applications of the network for the general data processing community. As described above, if the Network is responsive in the human time frame, databases can be stored and maintained at a remote location rather than duplicating them at each site the data is needed. Not only can the data be accessed as if the user were local, but also as a Network user he can write programs on his own machine to collect data from a number of locations for comparison, merging or further analysis." [Roberts Wessler 1970]
"The major lesson from the ARPANET experience is that information sharing is a key benefit of computer networking. Indeed it may be argued that many major advances in computer systems and artificial intelligence are the direct result of the enhanced collaboration made possible by the ARPANET." [Jennings p. 945] [Abbate p 100 quoting Jennings]
"When the network was originally built, Larry probably had - if you had to list his goals, you can look at the DARPA order, but if you had to list his goals - he certainly had high in his set of goals the idea that different host sites would cooperatively use software at the other sites. There's a guy at host one, instead of having to reproduce the software on his computer, he could use the software over on somebody else's computer with the software in his computer. And that goal, has, to this day, never been fully accomplished. That goal still to this very day has not been really accomplished to the degree that it was hoped for in its early days." [Heart p 25 1990]
Connectivity: "This network is envisioned as an interconnected communication facilities to utilize capabilities available at other ARPA sites. The network will provide a link between user(s) programs at one site, and programs and data at remote sites." [BBN Proposal]
1968: "The stated objectives of the program were to develop experience in interconnection computers and to improve and increase computer research productivity through resource sharing. Technical needs in scientific and military environments were cited as justification for the program objectives. Relevant prior work was described. It was noted that the computer research centers supported or partially supported by IPT provided a unique testbed for computer networking experiments, as well as providing immediate benefits to the centers and valuable research results to the military. The network planning that had gone on was described, the need for a network information center was noted, and the network design was sketched. A five year schedule for network procurement, construction, operation, and transfer out of ARPA was presented. (It was noteworthy that IPT had initially had in mind eventual transfer of the operational network to a common carrier.) Finally a several-million-dollar, several-year budget was stated." (ARPA draft, III-35)"The Internet developed out of research efforts funded by the U.S. Department of Defense Advanced Research Projects Agency in the 1960s and 1970s to create and test interconnected computer networks. The fundamental aim of computer scientists working on this "ARPANET" was to develop an overall Internet architecture that could connect and make use of existing computer networks that might, themselves, be different both architecturally and technologically. 
The secondary aims of the ARPANET project were, in order of priority: (1) Internet communication must continue despite the loss of networks or gateways between them; (2) the Internet architecture must support multiple types of communications services; (3) the architecture must accommodate a variety of networks; (4) it must permit distributed, decentralized management of its resources; (5) the architecture must be cost-effective; (6) the architecture must permit attachment by computer devices with a low level of effort; and (7) the resources used in the Internet architecture must be accountable. [FTC Report 2007 p 13-14]
See also Salus p 19; David D. Clark, The Design Philosophy of the DARPA Internet Protocols, COMPUTER COMM. REV., Aug. 1988; B. Carpenter, IAB, Architectural Principles of the Internet, Network Working Group RFC 1958 (June 1996) ("2.1 Many members of the Internet community would argue that there is no architecture, but only a tradition, which was not written down for the first 25 years (or at least not by the IAB). However, in very general terms, the community believes that the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network."); [CSTB, Realizing the Info Future p. 3 1994 (setting forth vision for Open Data Networks, stating in first principle that network should "permit universal connectivity.")]; The Internet's Coming of Age, Computer Science and Telecommunications Board, National Research Council 35 (2001) ("the value placed on connectivity as its own reward favors gateways and interconnections over restrictions on connectivity")
The Design Philosophy of the DARPA Internet Protocols, D.D.Clark, Proc SIGCOMM 88, ACM CCR Vol 18, Number 4, August 1988, pages 106-114 (reprinted in ACM CCR Vol 25, Number 1, January 1995, pages 102-111).
The initial plan called for a network of four nodes, to be expanded to twelve.
In this era, computer scientists experimenting with networks had a vision that what they were doing was creating a "Computer Utility," providing networked computer service much like the electric or telephone company.
Larry Roberts, History of Telenet p 33: "Specifically, common carriers such as GTE and Western Union were planning to become “computer utilities” (cloud computing in today’s terminology) and to provide combined data processing and communications services in competition with established, unregulated computer service companies. Could these carriers subsidize their unregulated computer services from their regulated communications service offerings? Should they be required to set up separate subsidiaries to provide unregulated computer services? Further complicating matters, the tariffs of the existing carriers in the 1960’s prohibited sharing (or reselling) leased communication lines. Such tariff provisions would prevent companies from leasing lines and providing public packet network services. Should these provisions be revised? To address these and several other related issues, the FCC initiated its First Computer Inquiry in 1966 to define which computer communication services were to be regulated and which were not." - Larry Roberts, Head of the ARPANET
Manley R. Irwin, The Computer Utility: Competition or Regulation? 76 Yale L.J. 1299, 1299 (1967) ("Within the decade, electronic data centers will provide computational power to the general public in a way somewhat analogous to today's distribution of electricity. Computer systems will blanket the United States, establishing an informational grid to permit the mass storage, processing, and consumption of a variety of data services: computer-aided instruction, medical information, marketing research, stock market information, airline and hotel reservations, banking by phone- to mention only a few. Many of these services already exist in embryonic form; and their growth prospects have received enormous impetus from recent developments in computer technology known as time-sharing or multiple access computer systems.")
Bernard Strassburg, October 1966 Speech, Jurimetrics Journal, September 1968, pp. 12-18
- p. 13-14: "Of the two, Western Union has been the most articulate in committing its future to offering "computer utility" services. The telegraph company has initiated several steps in this direction. It has, for example, computerized the switching operation of its Telex network so as to make that network compatible with Bell's TWX service. This compatibility enables Telex subscribers to communicate fully with TWX subscribers. Western Union has also introduced a management information service whereby the company will package and assume complete responsibility for computer and communication components of commercial digital information systems. It will provide the computer hardware, software, and the communications circuitry, or any one of these components. Western Union plans to open data processing centers in competition with the service bureau industry. The telegraph company's message-forwarding service could easily be as extensive as its proposed data processing activities."
- p. 14 ("IBM's Consent Decree likewise bears on the general question of market entry and the nature of IBM's participation in the "computer utility" field.")
- p. 12 1965 ("As one witnesses the growing convergence of the data processing and the communications industries toward the "computer utility" concept, the question of market entry takes on both antitrust and regulatory overtones.")
July 3, 1969, UCLA to Be First Station in Nationwide Computer Network, UCLA Office of Public Affairs (Kleinrock Slides). ("As of now, computer networks are still in their infancy," says Dr. Kleinrock, "But as they grow up and become more sophisticated, we will probably see the spread of 'computer utilities', which, like present electric and telephone utilities, will service individual homes and offices across the country.")
William F Massy, Computer Networks: Making the Decision to Join One, Science 1 November 1974, Vol. 186 No. 4162, pp. 414-20 (discussing how computer utility would meet needs of university computer centers).
"Western Union's planning looks to the establishment, through a national, regional, and local network of computers, of a gigantic real-time computer utility service which would gather, store, process, program, retrieve, and distribute information on a broad scale. This company will also arrange to design, procure, and install all necessary hardware for fully integrated data processing and communications systems for individual customers, and provide the total management service for such systems". --In The Matter Of Regulatory And Policy Problems Presented By The Interdependence Of Computer And Communication Services And Facilities, Docket No. 16979, Notice of Inquiry para 10 (November 9, 1966)
Paul Baran, The Coming Computer Utility-Laissez-Faire, Licensing or Regulation (1967)
D. Parkhill, THE CHALLENGE OF THE COMPUTER UTILITY 148 (1966)
In re Microwave Communications, Inc., F.C.C. 66R-182, Dkts. 16,509-19 (May 10, 1966). Exhibit 4, The Economic Need for Common Carrier Microwave Communication as related to Time-Shared Comp. Utility Systems 7-8. (Prepared for Microwave Communications Inc. by Comm-Share, Inc., Ann Arbor, Mich., Oct. 20, 1966.)
C. C. Barnett, Jr., and Associates, The Future of the Computer Utility (1967).
Paul Baran: "One of his recommendations was for a national public utility to transport computer data, much in the way the telephone system transports voice data. "Is it time now to start thinking about a new and possibly nonexistent public utility," Baran asked, "a common user digital data communication plant designed specifically for the transmission of digital data among a large set of subscribers?"" [Hauben]
In 1971 Alex McKenzie took charge of the Network Control Center at BBN. He envisioned the ARPANET as a "computing utility." [Abbate p 65]
Frank Heart: "A utility is something people depend upon. Like the electricity, or the phones, or the lights, or the railroads, or the airplanes. Yes, it was a utility. That's the thing that was the amazing surprise. It was started as an experiment to connect four sites, and it became a utility much, much faster than anybody would have guessed. People began to depend upon it." [Heart p 16 1990]
Larry Roberts believed that the existing communications networks at the time were inefficient and did not properly support communications for computers. By designing a computer network, Roberts believed that he was reinventing communications, designing it to benefit from the advantages and efficiencies of computers. In a 1970 paper, he set out to compare the cost of transmitting one million bits of information 1400 miles (the average distance between ARPANet nodes).
Media               Cost per Megabit   Notes
Telegram            $3300.00           For 100 words at 30 bits/wd, daytime
Night Letter        $565.00            For 100 words at 30 bits/wd, overnight delivery
Computer Console    $374.00            18 baud avg. use, 300 baud DDD service, line & data sets only
TELEX               $204.00            50 baud teletype service
DDD (103A)          $22.50             300 baud data sets, DDD daytime service
Autodin             $8.20              2400 baud message service, full use during working hours
DDD                 $3.45              2000 baud data sets
Letter              $3.30              Airmail, 4 pages, 250 wds/pg, 30 bits/wd
W.U. Broadband      $2.03              2400 baud service, full duplex
WATS                $1.54              2000 baud, used 8 hrs/working day
Leased Line (201)   $0.57              2000 baud, commercial, full duplex
Data 50             $0.47              50 KB dial service, utilized full duplex
Leased Line (303)   $0.23              50 KB, commercial, full duplex
Mail DEC Tape       $0.20              2.5 megabit tape, airmail
Mail IBM Tape       $0.034             100 megabit tape, airmail
"Cost per Megabit for Various Communication Media 1400-Mile Distance" [Roberts Wessler 1970]
Roberts' table above shows two things. First, it shows how compelling an efficient, cost-effective computer network could be, bypassing what would otherwise be significant charges on existing networks. Second, it shows what has since been demonstrated many times: one of the most efficient means of transmitting data is to load it all onto a memory device and drive (or mail) it to its destination.
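The arithmetic behind the table is simple to sketch. In the snippet below, the per-unit prices are assumptions back-computed from the table's own cost-per-megabit figures; they are not Roberts' actual 1970 tariff inputs, and only the method of comparison is the point.

```python
# Sketch of the cost-per-megabit comparison behind Roberts' 1970 table.
# The unit prices below are illustrative assumptions, back-computed from
# the table, NOT the actual tariffs Roberts used.

MEGABIT = 1_000_000  # bits

def cost_per_megabit(price_per_unit, bits_per_unit):
    """Dollars to move one megabit, given the price and capacity of one
    unit of the medium (one telegram, one mailed tape, ...)."""
    return price_per_unit * (MEGABIT / bits_per_unit)

# A 100-word telegram at 30 bits/word carries only 3,000 bits, so even a
# modest per-telegram price explodes when scaled up to a megabit.
telegram_bits = 100 * 30
print(cost_per_megabit(9.90, telegram_bits))   # ~= 3300, the Telegram row

# A mailed 100-megabit IBM tape amortizes one airmail charge over 1e8 bits.
tape_bits = 100 * MEGABIT
print(cost_per_megabit(3.40, tape_bits))       # ~= 0.034, the Mail IBM Tape row
```

The comparison turns entirely on bits carried per unit priced, which is why the densest medium in the table (the mailed tape) wins by five orders of magnitude over the telegram.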
"Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway" -.Tanenbaum, Andrew S. (1996). Computer Networks. New Jersey: Prentice-Hall. pp. 83. ISBN 0-13-349945-6
Roberts notes that a significant amount of the cost in switched networks are the switches themselves. "Previous store and forward systems like DoD's AUTODIN system, have had such complex, expensive switches that over 95% of the total communications service cost was for the switches. Other switch services adding to the system's cost, deemed superfluous in a computer network, were: long term message storage, multi-address messages and individual message accounting." [Roberts Wessler 1970] Remove the switch from the computer network and you remove the costs.
"By the late 1960s, computer scientists were experimenting with non-linear "packet-switched" techniques to enable computers to communicate with each other. Using this method, computers disassemble information into variable-size pieces of data called "packets" and forward them through a connecting medium to a recipient computer that then reassembles them into their original form. Each packet is a stand-alone entity, like an individual piece of postal mail, and contains source, destination, and reassembly information. Unlike traditional circuit-switched telephone networks, packet-switched networks do not require a dedicated line of communication to be allocated exclusively for the duration of each communication. Instead, individual data packets comprising a larger piece of information, such as an e-mail message, may be dispersed and sent across multiple paths before reaching their destination and then being reassembled. This process is analogous to the way that the individual, numbered pages of a book might be separated from each other, addressed to the same location, forwarded through different post offices, and yet all still reach the same specified destination, where they could be reassembled into their original form. [FTC Report 2007 p 14]
Larry Roberts considered several different designs for the ARPANet, including fully interconnected point-to-point leased lines, line-switched (dial-up) service, and packet switching. Roberts stated, "For the kind of service required, it was decided and later verified that the message switched service provided the greater flexibility, higher effective bandwidth, and lower cost than the other two systems." [Roberts Wessler 1970]
[Roberts, Computer Science Museum 1988 ("All of us thought, clearly, in those days, about computer switching rather than circuit switching; some sort of computerized switching. You got the traffic in and you put it out. It could have been that we put it out block for block as fast as it came in; it could have been that we stored the whole message and forwarded it. What we concluded was that you wanted to not store the whole message and forward it, and you couldn't have a perfect virtual cut-through where you sent every block immediately synchronously because it might interfere with the next message, so you had to do it in some smaller breakdown, which is like a packet, or whatever, which, of course, is the size lump you're in anyway, because you've got to put sum checks on it every interval. So, there wasn't any question about packets -- and clearly Donald gave it the name")]
End to End
Also, during the Internet's early years, network architectures generally were based on what has been called the "end-to-end argument." This argument states that computer application functions typically cannot, and should not, be built into the routers and links that make up a network's middle or "core." Instead, according to this argument, these functions generally should be placed at the "edges" of the network at a sending or receiving computer. This argument also recognizes, however, that there might be certain functions that can be placed only in the core of a network. Sometimes, this argument is described as placing "intelligence" at or near the edges of the network, while leaving the core's routers and links mainly "dumb" to minimize the potential for transmission and interoperability problems that might arise from placing additional complexity into the middle of the network. [FTC Report 2007 p 17]