Multiple Computer Networks and Intercomputer Communications

Dr. Lawrence G. Roberts
June 1967
There are many reasons for establishing a network which allows many computers to communicate with
each other to interchange and execute programs or data. The definition of a network within this paper will
always be that of a network between computers, not including the network of typewriter consoles
surrounding each computer. Attempts at computer networks have been made in the past; however, the usual
motivation has been either load sharing or interpersonal message handling. Three other more important
reasons for computer networks exist, at least with respect to scientific computer applications. Definitions of
these reasons for a computer network follow.
1. Load Sharing: Both the program and data are transmitted to a remote computer to equalize the
load between the two facilities. This type of operation requires identical computers or languages.
When a given machine is loaded, consideration can be given to processing the program on another
machine. Many determinations must be made before an alternate machine is used (is there an
alternate machine for which appropriate software exists, is that alternate machine in a condition to
handle the program, will more time and dollars be spent on transmission than if the user waits
until the original machine is available, etc.). Such determinations are very difficult and the gain
only moderate, hence load sharing is not a major consideration here. However, it is felt that some
load equalization will occur in any computer network.
2. Message Service: In addition to computational network activities, a network can be used to
handle interpersonal message transmissions. This type of service can also be used for educational
services and conference activities. However, it is not an important motivation for a network of
scientific computers.
3. Data Sharing: The program is sent to a remote computer where a large data base exists. This type
of operation will be particularly useful where data files are too large to be duplicated economically.
Frequently geographically dispersed individuals need to access a common data base. Access to
this data base may be required simply to make an inquiry or may involve executing a complex
program using the data base. Use of a single data bank will save hardware required to store the
information and will eliminate the need for maintaining multiple files. The term "single data bank"
does not necessarily mean the storing of all data at a single physical location, but rather the storing
of only one copy of each basic data file. This type of use is particularly important to the military
for command and control, information retrieval, logistics and war gaming applications. In these
cases, one command would send a program to be executed at another center where the data base is located.
4. Program Sharing: Data is sent to a program located at a remote computer and the answer is
returned. Software of particular efficiency or capability exists on certain machines. For example, if
machine Y has a good LIST processor, it may be more efficient for users whose local machine is
X to use Y for LIST processing jobs. Even if a LIST processor exists for X, the time to execute the
program on Y may be sufficiently less than the time to execute on X that the total time (and/or
cost, including transmission) may be less. The use of specialized programs at remote facilities
makes possible large gains in performance. Perhaps even more important is the potential saving in
reprogramming effort.
5. Remote Service: Just a query need be sent if both the program and the data exist at a remote
location. This will probably be the most common mode of operation until communication costs
come down. There will be a tendency for other cases to migrate toward this type of operation. For
example, in a graphics application, the program would be available or created on the remote
computer and it would generate the data structure in its own computer. It would modify and
update the data structure from network commands transmitting back display changes. This
category includes most of the advantages of program and data sharing but requires less
information to be transmitted between the computers.
The advantages which can be obtained when computers are interconnected in a network such that remote
running of programs is possible, include advantages due to specialized hardware and software at particular
nodes as well as increased scientific communication.
Specialized Hardware
It is felt that new machine configurations can provide improvement factors of from 10 to 100 in the
problem area for which they were designed.
In some cases very large core and disk will substantially improve performance on existing machines. In
other cases the improvements will be brought about by introduction of new systems such as ILLIAC
IV [1] and macromodular machines [2]. A network is needed to make full use of machines with specialized
efficiency and with a network the development of such computers will be enhanced.
Specialized Systems Software
Handling jobs of widely varying sizes, particularly when initiated from many locations, presents an
extremely difficult scheduling problem for any single machine. A large machine serving a number of
smaller machines may provide significant improvements in efficiency by alleviating the scheduling
problem. Small time-sharing computers may be found to be efficiently utilized when employed as
communication equipment for relaying user requests to some larger remote machine on which substantive
work is done. What is envisioned is a system in which the local machine serves some limited needs of the
user while substantial requirements are satisfied by a remote computer particularly well adapted to handling
the problem.
Scientific Communication
Once it is practical to utilize programs at remote locations, programmers will consider investigating what
exists elsewhere. The savings possible from non-duplication of effort are enormous. A network would
foster the "community" use of computers. Cooperative programming would be stimulated, and in particular
fields or disciplines it will be possible to achieve a "critical mass" of talent by allowing geographically
separated people to work effectively in interaction with a system.
Basic Operation
The minimum requirement a system must meet to be eligible for membership in the network is a
time-sharing monitor which allows user programs to communicate with at least two terminals. If
this requirement is uniformly met, the network can be implemented without major change to the
monitor at any installation, by the simple expedient of letting each computer in the network look
conceptually upon all the others as though they were its own remote user terminals.
Figuratively speaking, we may think of the computer-to-computer link in such a network as being
the result of removing a user terminal from its cable on computer A, removing a user terminal
from its cable on Computer B, and splicing the two computer cable ends together. Such a network
might operate as follows: (See Figure 1) The user dials up his home computer, CA, from a console.
He logs in normally by transmitting characters from his console to the monitor. He sets up a user
program and this program, through the second channel, calls the remote computer, logs in, sets up
the desired user program on the remote computer, submits parameter data to it and receives the
results. Note that neither system was required to behave in an unusual fashion. The monitors did
what they always do. The only requirement, as stated earlier, was that the user program be allowed
to communicate with two terminals, its own user terminal and the remote computer. Most present-
day monitors provide for such a capability.
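The two-terminal arrangement described above can be sketched in modern terms. The classes, commands, and login strings below are invented for illustration; the key point the sketch preserves is that the remote monitor cannot distinguish a calling user program from a human at a console.

```python
# Illustrative sketch only: a user program on home computer CA holds one
# channel to the user's console and a second channel to remote computer CB,
# which it drives exactly as a human would drive a terminal.

class Monitor:
    """Minimal stand-in for a 1960s time-sharing monitor (hypothetical)."""
    def __init__(self, name):
        self.name = name
        self.logged_in = False

    def receive(self, line):
        # The monitor cannot tell whether 'line' came from a console
        # or from a program running on another computer.
        if line.startswith("LOGIN"):
            self.logged_in = True
            return f"{self.name}: READY"
        if self.logged_in and line.startswith("RUN"):
            return f"{self.name}: RESULT OF {line.split(maxsplit=1)[1]}"
        return f"{self.name}: ?"

def user_program(home, remote, job, data):
    # Channel 1 (console <-> home monitor) is the normal login, elided here.
    # Channel 2: the user program logs in to the remote machine, submits
    # the job and parameter data, and receives the results.
    remote.receive("LOGIN USER7")
    return remote.receive(f"RUN {job} {data}")

c_a, c_b = Monitor("CA"), Monitor("CB")
print(user_program(c_a, c_b, "LISTJOB", "X=1"))  # → CB: RESULT OF LISTJOB X=1
```

Neither monitor behaves unusually: each simply services what looks like one more remote terminal.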
A computer-computer network link as described above was established in 1966 experimentally
between MIT Lincoln Lab's TX-2 computer and System Development Corporation's Q-32 [3]. Both
nodes are general purpose, time-shared computers. This link allows programs on either computer
to utilize programs such as compilers and graphics systems which exist only at the other node. The
basic motivation was to test an initial network protocol, determine how well automatic dial up
communications service worked, and determine the extent of the time-sharing monitor changes
necessary. This has been done and the link is now utilized by users to increase their capability,
thus providing more evaluation data.
Interface Message Processor
One way to make the implementation of a network between a set of time-shared computers more
straightforward and unified is to build a message switching network with digital interfaces at each
node. This would imply that a small computer, an interface message processor (IMP), would be
located with each main computer to handle a communications interface. It would perform the
functions of dial up, error checking, retransmission, routing and verification. Thus the set of IMP's,
plus the telephone lines and data sets would constitute a message switching network (See Figure 2).
The major advantage of this plan is that a unified, straightforward design of the network can be
made and implemented without undue consideration of the main computer's buffer space, interrupt
speed and other machine requirements. The interface to each computer would be a much simpler
digital connection with an additional flexibility provided by programming the IMP. The network
section of the IMP's program would be completely standard and provide guaranteed buffer space
and uniform characteristics; thus the entire planning job is substantially simplified. The data sets
and transmission lines utilized between the IMP's would most likely be standardized upon, but as
changes occurred in the communication tariffs or data rates, it would be more straightforward just
to modify the program for the IMP's rather than twenty different computers. As soon as the need
became apparent, additional small computers could be located at strategic connection points
within the network to concentrate messages over cross-country lines. Finally, the modifications
required to currently operating systems would be substantially less utilizing these small computers
since there would be no requirement to find buffer spaces, hold messages for retransmission,
verify reception of messages and dial up telephone lines.
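The IMP functions named above (error checking, acknowledgement for retransmission, and store-and-forward routing) can be sketched as follows. The frame fields, the additive sum check, and the routing tables are assumptions made for illustration, not details from the paper.

```python
# Hypothetical sketch of IMP behavior: check the sum check, NAK bad frames
# so the sender retransmits, and forward good frames hop by hop.

def sum_check(payload: bytes) -> int:
    # Simple additive checksum, in the spirit of the paper's "sum check".
    return sum(payload) % 256

class IMP:
    def __init__(self, name, routes):
        self.name = name
        self.routes = routes  # destination name -> neighbouring IMP

    def receive(self, dest, payload, check):
        if sum_check(payload) != check:
            return "NAK"  # transmission error: ask the sender to retransmit
        if dest == self.name:
            return f"DELIVERED AT {self.name}"
        # Store-and-forward: pass the message one hop closer to 'dest'.
        return self.routes[dest].receive(dest, payload, check)

# Three IMPs in a line: A - B - C (an invented topology).
imp_c = IMP("C", {})
imp_b = IMP("B", {"C": imp_c})
imp_a = IMP("A", {"C": imp_b})

msg = b"HELLO"
print(imp_a.receive("C", msg, sum_check(msg)))  # → DELIVERED AT C
```

Because these functions live in the IMP program rather than in twenty different main computers, a tariff or data-rate change touches only this one standard program, as the text argues.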
ARPA supports a number of computer research groups throughout the country most of which have their
own time-shared computer facility. These researchers have agreed to accept a single network protocol so
that they may all participate in an experimental network. The communication protocol is currently being
developed. It will conform to ASCII conventions as a basic format and include provisions for specifying
the origin, destination, routing, block size, and sum check of a message. Messages will be character
strings or binary blocks, but the communication protocol does not specify the internal form of such
blocks. It is expected that these
conventions will be distributed in final form during July 1967.
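The header fields listed above (origin, destination, routing, block size, sum check) can be sketched as an ASCII header preceding an uninterpreted block. The field layout, separators, and node names below are invented; the actual conventions were the ones to be distributed in July 1967.

```python
# Hypothetical encoding of the 1967 protocol's header fields. The protocol
# does not specify the internal form of the block, so the body is carried
# as opaque bytes.

def encode(origin, dest, route, body: bytes) -> bytes:
    check = sum(body) % 1000  # assumed additive sum check
    header = f"{origin}/{dest}/{route}/{len(body)}/{check:03d}\n"
    return header.encode("ascii") + body

def decode(frame: bytes):
    header, body = frame.split(b"\n", 1)
    origin, dest, route, size, check = header.decode("ascii").split("/")
    assert len(body) == int(size), "block size mismatch"
    assert sum(body) % 1000 == int(check), "sum check failed"
    return origin, dest, body

frame = encode("TX2", "Q32", "DIRECT", b"COMPILE PROG1")
print(decode(frame))  # origin, destination, and the opaque block
```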
Figure 3 shows a tentative layout of the network nodes and communication paths. However, since most of
the communications will be dial-up, the paths are just hypothetical. It is hoped that concentration and store
and forward capability will be available through the use of Interface Message Processors. The development
of the IMP's and the use of them at each node would allow store and forward operation as well as speeding
the realization of a unified network.
There are 35 computers shown in Figure 3 at 16 locations, there being several computers at most locations.
A rough estimate would place the number of consoles attached to the 35 computers by the end of 1967 at
1500 and the number of displays at 150. Assuming four characters per second for typewriters and 20
characters per second for scopes, the total I/O rate to the computers is 9000 char/sec. Estimating that 10%
of this I/O communication rate will be forwarded to another computer in the network leads us to an average
transmission rate of 60 char/sec per location. Thus, given console type activity on the network (messages of
from 10 to 1000 characters) the normal 2000 bits/second type communication should be sufficient at first.
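The traffic estimate above can be checked directly from the figures given in the text:

```python
# Arithmetic from the paper's estimate (end of 1967).
consoles, displays = 1500, 150
total_io = consoles * 4 + displays * 20   # char/sec into all 35 computers
forwarded = 0.10 * total_io               # 10% crosses the network
per_location = forwarded / 16             # spread over 16 locations

print(total_io, forwarded, round(per_location))
# total_io is 9000 char/sec; per_location is about 56 char/sec,
# which the paper rounds to 60.
```

At roughly 60 char/sec per location, even at 8 or more bits per character, a 2000 bit/second line leaves comfortable headroom for console-type message traffic.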
Communication Needs
The common carriers currently provide 2 or 4 wire, 2 kc lines between two points either dialed or leased, as
well as higher band width leased lines and lower band width teletype service. Considering the 2 kc offering,
since it is the best dial up service, the use of 2 wire service appears to be very inefficient for the type of
traffic predicted for the network. In the Lincoln-SDC experimental link the average message length
appears to be 20 characters. Each message must be acknowledged so that the originator may retransmit or
free the buffer. Thus the line must be reversed so often that the reversal time will effectively halve the
transmission rate. Therefore, full duplex, four-wire service is more economical and simpler to use.
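The half-duplex penalty described above can be modeled crudely. The line rate in characters and the turnaround time below are assumptions chosen for the sketch; the paper's observation is only that, with ~20-character messages each requiring an acknowledgement, reversal overhead roughly halves the usable rate.

```python
# Toy model of a 2-wire (half-duplex) line carrying short messages that
# must each be acknowledged. All numeric parameters are assumptions.

line_rate = 2000 / 8      # 2000 bit/s line at ~8 bits/char -> 250 char/s
msg_chars = 20            # average message seen on the Lincoln-SDC link
reversal = 0.080          # assumed line turnaround time, seconds

send_time = msg_chars / line_rate                 # time on the wire
half_duplex = msg_chars / (send_time + reversal)  # must reverse for the ACK

print(round(half_duplex), "vs", round(line_rate), "char/sec")
# effective rate falls to about half the nominal line rate
```

On 4-wire full-duplex service the acknowledgement travels on the return pair, so no reversal is needed and the penalty disappears.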
Current automatic dialing equipment requires about 20 seconds to obtain a connection and a similar time to
disconnect. Thus the response time is much too long assuming a call is made only after a message arrives
and that the line is disconnected if no other messages arrive soon. It has proven necessary to hold a line
which is being used intermittently to obtain the one-tenth to one second response time required for
interactive work. This is very wasteful of the line and unless faster dial up times become available, message
switching and concentration will be very important to network participants.
References
1. Slotnick, D. L., "Achieving Large Computing Capabilities Through an Array Computer," Proc. SJCC (April 1967).
2. Clark, W. A., "Macromodular Computer Systems," Proc. SJCC (April 1967).
3. Marill, T. and Roberts, L. G., "Toward a Cooperative Network of Time-Shared Computers," Proc. FJCC (1966).