Ethical Challenges to Citizens of ‘The Automatic Age’:
Norbert Wiener on the Information Society

Terrell Ward Bynum
Research Center on Computing & Society, Southern Connecticut State University, New Haven, USA
Email: computerethics@earthlink.net

Keynote Address, Ethicomp 2004, Syros, Greece, 14th April 2004
Info, Comm & Ethics in Society (2004) 2: 65–74. © 2004 Troubador Publishing Ltd.

ABSTRACT
This article discusses the foresight of philosopher/mathematician Norbert Wiener who, in the 1940s, founded Information Ethics as a research discipline. Wiener envisioned the coming of an “automatic age” in which information technology would have profound social and ethical impacts upon the world. He predicted, for example, machines that will learn, reason and play games; “automatic factories” that will replace assembly-line workers and middle managers with computerized devices; workers who will perform their jobs over great distances with the aid of new communication technologies; and people who will gain remarkable powers by adding computerized “prostheses” to their bodies. To analyze the ethical implications of such developments, Wiener presented some principles of justice and employed a powerful practical method of ethical analysis.

KEYWORDS: Information Ethics; Entropy; Human Purposes; Justice

1. INTRODUCTION
In Chapter I of his foundational informa-
tion-ethics book, The Human Use of Human
Beings (1950, 1954), Norbert Wiener said:
It is the thesis of this book that soci-
ety can only be understood through a
study of the messages and the com-
munication facilities which belong to
it; and that in the future…messages
between man and machines, between
machines and man, and between
machine and machine, are destined to
play an ever-increasing part. (1954, 16)
To live effectively is to live with
adequate information. Thus commu-
nication and control belong to the
essence of man’s inner life, even as
they belong to his life in society.
(1954, 18)
communications in society…are the
cement which binds its fabric togeth-
er. (1954, 27)
Wiener believed that, in the coming ‘auto-
matic age’ (as he called today’s era), the
nature of society, as well as its citizens’ rela-
tionships with society and with each other,
will depend more and more upon informa-
tion and communications. He predicted
that, in our time, machines will join human
beings in the creation and interpretation of
messages and communications, and indeed
in shaping the ties that bind society togeth-
er. There will be, he argued, machines that
learn -- that gather, store and interpret
information -- that reason, make decisions,
and take actions on the basis of the mes-
sages which they send and receive. With
the help of information technology, he pre-
dicted, mechanical prosthetic devices will
merge with the bodies of disabled persons
to help them overcome their disabilities;
and indeed even people who are not dis-
abled will acquire ‘prostheses’ to give them
powers that a human never had before.
According to Wiener, the social and ethical
importance of these developments cannot
be overstated. “The choice of good and evil
knocks at our door,” he said. (1954, 186)
Today we have entered Wiener’s ‘auto-
matic age’, and it is clear that he percep-
tively foresaw the enormous social and eth-
ical importance of information and com-
munication technology (ICT). Remarkably,
he even foresaw – more than a decade
before the Internet was created – some of
the social and ethical problems and oppor-
tunities that came to be associated with the
Internet. (Some examples are given below.)
2. HUMAN PURPOSES
AND THE PROBLEM
OF ENTROPY
Although he thought of himself primarily
as a scientist, Wiener considered it impor-
tant for scientists to see their own activities
in the broader human context in which
they function. Thus, he said, “we must
know as scientists what man’s nature is and
what his built-in purposes are.” (1954, 182)
As an early twentieth-century scientist,
who was philosophically alert to recent
developments in physics, Wiener faced the
challenge of reconciling the existence and
importance of human purposes and values on
the one hand with, on the other, the thermodynamic principle that increasing entropy -- that is, growing chaos and disorder -- will eventually destroy all organized
structures and entities in the universe. In
Chapter II of The Human Use of Human
Beings, Wiener described contemporary sci-
ence’s picture of the long-term fate of the
universe:
Sooner or later we shall die, and it is
highly probable that the whole uni-
verse around us will die the heat
death, in which the world shall be
reduced to one vast temperature
equilibrium…. (1954, 31)
In that same chapter, however, Wiener res-
cued his reader from pessimism and point-
lessness by noting that ‘the heat death’ of
the universe will occur many millions of
years in the future. In addition, in our local
region of the universe, living entities and
even machines are capable of reducing chaos
and disorder rather than increasing them.
Living things and machines are anti-
entropy entities that create and maintain
structure and organization locally, even if
the universe as a whole is ‘running down’
and losing structure. For millions of years
into the future, therefore, human purposes
and values can continue to have meaning
and worth, despite the overall increase of
entropy in the universe:
In a very real sense we are ship-
wrecked passengers on a doomed
planet. Yet even in a shipwreck,
human decencies and human values
do not necessarily vanish... [Thus]
the theory of entropy, and the con-
siderations of the ultimate heat
death of the universe, need not have
such profoundly depressing moral
consequences as they seem to have at
first glance. (1954, 40–41)
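
Wiener's reconciliation of local order with global decay can be put in the standard bookkeeping of the second law of thermodynamics (a textbook formulation added here for clarity, not Wiener's own notation): an isolated system's total entropy never decreases, yet a subsystem such as an organism or a machine may lower its own entropy, provided it exports at least as much entropy to its surroundings.

\[
\Delta S_{\mathrm{universe}} \;=\; \Delta S_{\mathrm{local}} + \Delta S_{\mathrm{surroundings}} \;\ge\; 0,
\qquad
\Delta S_{\mathrm{local}} < 0 \ \text{is permitted whenever}\ \Delta S_{\mathrm{surroundings}} \ge -\Delta S_{\mathrm{local}}.
\]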
3. JUSTICE AND A
GOOD HUMAN LIFE
Having rescued the meaningfulness of
human purposes and values, Wiener could
discuss what would count as a good human
life. To have a good life, human beings must
live in a society where “the great human
values which man possesses” (1954, 52) are
nurtured; and this can only be achieved, he
said, in a society that upholds the “great
principles of justice” (1954, 106). In
Chapter VI of The Human Use of Human
Beings he stated those principles, although
he did not give them names. For the sake of
clarity and ease of remembering them, let
us attach names to Wiener’s own defini-
tions:
The Principle of Freedom: Justice
requires “the liberty of each human
being to develop in his freedom the
full measure of the human possibili-
ties embodied in him.” (1954, 105)
The Principle of Equality: Justice
requires “the equality by which what
is just for A and B remains just when
the positions of A and B are inter-
changed.” (1954, 106)
The Principle of Benevolence: Justice
requires “a good will between man and
man that knows no limits short of
those of humanity itself.” (1954, 106)
Wiener considered humans to be funda-
mentally social beings who can reach their
full potential only by active participation in
a community of similar beings. For a good
human life, therefore, society is indispensa-
ble. But it is possible for a society to be
oppressive and despotic in ways that limit
or even stifle individual freedom; so
Wiener added a fourth principle of justice,
which we can appropriately call “The Principle of Minimum Infringement of Freedom” (Wiener himself did not give it a name):
The Principle of Minimum Infringement
of Freedom: “What compulsion the
very existence of the community and
the state may demand must be exer-
cised in such a way as to produce no
unnecessary infringement of free-
dom.” (1954, 106)
According to Wiener, the overall purpose
of a human life is the same for everyone: to
realize one’s full human potential by engag-
ing in a variety of chosen actions (1954, 52).
It is not surprising, therefore, that the
Principle of Freedom would head his list,
and that the Principle of Minimum
Infringement of Freedom would limit the
power of the state to thwart freedom.
Because the general purpose of each human
life, according to Wiener, is the same, his
Principle of Equality follows logically;
while the Principle of Benevolence follows
from his belief that human freedom flour-
ishes best when everyone sympathetically
looks out for the wellbeing of all.
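
Wiener stated these principles in prose, but the Principle of Equality is, at bottom, a symmetry condition, and it can be made vivid with a toy test. The sketch below is purely illustrative; the predicate names and the example rules are hypothetical, not anything Wiener or the present author wrote.

# Toy illustration of Wiener's Principle of Equality: what is just for A and B
# must remain just when the positions of A and B are interchanged.
# The rules below are hypothetical examples, not positions taken in this article.

def satisfies_equality(is_just, pairs):
    """True if the justice judgment is invariant under swapping A and B."""
    return all(is_just(a, b) == is_just(b, a) for a, b in pairs)

def biased_rule(a, b):
    return a == "employer"        # "whatever the employer does to b is just"

def symmetric_rule(a, b):
    return True                   # e.g. "an exchange both parties accept is just"

pairs = [("employer", "worker")]
print(satisfies_equality(biased_rule, pairs))     # False: fails under interchange
print(satisfies_equality(symmetric_rule, pairs))  # True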
4. WIENER’S METHOD OF
DOING INFORMATION
ETHICS
Wiener was keen to ask questions about
“what we do and how we should react to
the new world that confronts us” (1954, 12).
He developed strategies for analyzing,
understanding, and dealing with ICT-relat-
ed social and ethical problems or opportu-
nities that could threaten or advance
human values like life, health, security,
knowledge, freedom and happiness. Today,
half a century after Wiener founded
Information Ethics as an academic
research subject, we can look back at his
writings in this field and examine the meth-
ods that he used to develop his arguments
and draw his conclusions. While Wiener
was busy creating Information Ethics as a
new area of academic research, he normal-
ly did not step back – like a metaphiloso-
pher would – and explain to his readers
what he was about to do or how he was
going to do it. Instead, he simply tackled an
ICT-related ethical problem or opportuni-
ty and began to analyze it and try to solve
the problem or benefit from the opportu-
nity.
Today, in examining Wiener’s methods and
arguments, we have the advantage of helpful
concepts and ideas developed later by semi-
nal thinkers such as Walter Maner and James
Moor. We can use their ideas to illuminate
Wiener’s methodology, examining what he
did in addition to what he said. In Chapter VI
of The Human Use of Human Beings, for exam-
ple, Wiener considers the law and his own
conception of justice as tools for identifying
and analyzing social and ethical issues associ-
ated with ICT. Combining Maner’s ideas in
his “Heuristic Methods for Computer
Ethics” (1999) with Moor’s famous account
of the nature of computer ethics in “What Is
Computer Ethics?” (1985), we can describe
Wiener’s account of Information Ethics
methodology as the following five-step
heuristic procedure:
Step One: Identify an ethical problem or posi-
tive opportunity regarding the integration
of ICT into society. (If a problem or
opportunity can be foreseen before it
occurs, we should develop ways to solve
the problem or benefit from the oppor-
tunity before we are surprised by – and
therefore unprepared for – its appear-
ance.)
Step Two: If possible, apply existing ‘policies’ [as Moor would call the principles, laws, rules, and practices that already apply in the given society], using precedent and traditional interpretations to resolve the problem or to benefit from the opportunity.
Step Three: If existing policies appear to be
ambiguous or vague when applied to the
new problem or opportunity, clarify
ambiguities and vagueness. [In Moor’s lan-
guage: identify and eliminate ‘conceptu-
al muddles’.]
Step Four: If precedent and existing inter-
pretations, including the new clarifica-
tions, are insufficient to resolve the
problem or to benefit from the oppor-
tunity, one should revise the old policies or
create new ones using ‘the great principles of
justice’ and the purpose of a human life to
guide the effort. [In Moor’s language,
one should identify ‘policy vacuums’ and
then formulate and ethically justify new
policies to fill the vacuums.]
Step Five: Apply the new or revised policies to
resolve the problem or to benefit from
the opportunity.
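
Read as a procedure, the five steps form a simple decision flow. The following sketch is one possible rendering in code; the function names, the policy objects, and the example issue are placeholders invented for illustration, not terminology from Wiener or Moor.

# A schematic rendering of the five-step heuristic described above, with trivial
# stand-in data so the flow can actually run. Every name is a hypothetical placeholder.

def apply_policies(issue, policies):
    """Return a resolution if some existing policy clearly covers the issue."""
    for policy in policies:
        if policy["covers"](issue):
            return policy["resolution"]
    return None

def information_ethics_heuristic(issue, existing_policies, revise_policies, clarify):
    # Step One has already happened: 'issue' is the identified problem or opportunity.
    # Step Two: apply existing policies via precedent and traditional interpretation.
    resolution = apply_policies(issue, existing_policies)
    if resolution is not None:
        return resolution
    # Step Three: clarify ambiguities and vagueness ('conceptual muddles').
    issue = clarify(issue)
    resolution = apply_policies(issue, existing_policies)
    if resolution is not None:
        return resolution
    # Step Four: revise old policies or create new ones, guided by the great
    # principles of justice ('policy vacuums' are filled here).
    new_policies = revise_policies(issue)
    # Step Five: apply the new or revised policies to resolve the issue.
    return apply_policies(issue, new_policies)

# Minimal usage: an issue that no existing policy covers until a new one is created.
issue = {"name": "monitoring of teleworkers"}
existing = [{"covers": lambda i: False, "resolution": None}]
make_new = lambda i: [{"covers": lambda j: True,
                       "resolution": "adopt a minimum-infringement monitoring policy"}]
print(information_ethics_heuristic(issue, existing, make_new, clarify=lambda i: i))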
It is important to note that this method of
engaging in Information Ethics need not
involve the expertise of a trained philoso-
pher (though such expertise often can be
helpful). In any society, a successfully func-
tioning adult will be familiar with the laws,
rules, customs, and practices (Moor’s ‘poli-
cies’) that normally govern one’s behavior
in that society. These policies enable a cit-
izen to tell whether a proposed action
should be considered ethical. Thus, all
those in society who must cope ethically
with the introduction of ICT – whether
they are public policy makers, ICT profes-
sionals, business people, workers, teachers,
parents, or others – can and should engage in
Information Ethics by helping to integrate ICT
into society in ways that are socially and ethical-
ly good. Information Ethics, understood in
this very broad way, is too vast and too
important to be left only to academics or to
ICT professionals. This was clear to
Wiener, who especially challenged govern-
ment officials, business leaders, and public
policy makers to wake up and begin to
address the “good and evil” implications of
the coming information society.
5. UNEMPLOYMENT AND
THE ‘AUTOMATIC
FACTORY’
After World War II, Wiener became con-
cerned about the possibility that unprece-
dented unemployment could be generated
if ‘automatic factories’ were created with
robotic machines to replace assembly-line
workers and with information processing
devices to replace middle-level managers.
Such a factory would “play no favorites
between manual labor and white-collar
labor”. (1954, 159) An automatic factory,
said Wiener, would be very much like an
animal with a computer functioning like a
central nervous system; industrial instru-
ments such as thermometers and photo-
electric cells serving as ‘sense organs’; and
‘effectors’ like valve-turning motors, elec-
tric clutches, and newly-invented robotic
tools functioning like limbs:
The all-over system will correspond
to the complete animal with sense
organs, effectors, and propriocep-
tors, not...[just] to an isolated brain.
(1954, 157)
Such a factory, said Wiener, would need far
fewer human workers, blue-collar or white-
collar, and the resulting industrial output
could nevertheless be copious and of high
quality.
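
Wiener's animal analogy is, at heart, a closed feedback loop: 'sense organs' report on the process, a computational 'nervous system' compares the report with a goal, and 'effectors' act so as to reduce the difference. A minimal sketch of such a loop is given below; the set-point, gain, and toy process model are assumptions made only to illustrate the structure, not figures from Wiener's text.

# Schematic cybernetic loop for an 'automatic factory': sense -> decide -> act,
# with the result of each action fed back through the 'sense organ'.

def run_factory_loop(set_point=150.0, steps=40, gain=0.05):
    temperature = 100.0              # reading from a 'sense organ' (a thermometer)
    valve = 0.3                      # state of an 'effector' (a steam valve, 0 to 1)
    for _ in range(steps):
        error = set_point - temperature                 # feedback: goal minus sensed value
        valve = min(1.0, max(0.0, valve + gain * error / set_point))
        temperature += 80.0 * valve - 0.2 * (temperature - 20.0)   # toy process response
    return round(temperature, 1), round(valve, 2)

print(run_factory_loop())            # temperature is driven toward the 150-degree set-point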
Wiener noted that there is at least one
good feature of ‘automatic factories’ that
speaks in favor of their creation; namely, the
safety that they could offer to humans.
Since such factories would employ few
humans, they would be ethically preferable
for the manufacture of risky items like
radioactive products or dangerous chemi-
cals. Far fewer people would be killed or
injured in cases of emergency or accident in
such a factory. Nevertheless, Wiener was
concerned that the widespread creation of
automatic factories could generate massive
unemployment:
Let us remember that the automatic
machine...is the economic equivalent
of slave labor. Any labor which com-
petes with slave labor must accept the
economic conditions of slave labor. It
is perfectly clear that this will pro-
duce an unemployment situation, in
comparison with which the present
recession and even the depression of
the thirties will seem a pleasant joke.
(1954, 162)
Thus the new industrial revolution is
a two-edged sword. It may be used for
the benefit of humanity. ...It may also
be used to destroy humanity, and if it
is not used intelligently it can go very
far in that direction. (1954, 162)
Wiener was not a mere alarmist, however;
nor was he just a theoretician. Instead, hav-
ing identified a serious threat to society
and to individual workers, he took action.
In the early 1950s, he met with corporate
managers, public policy makers, and union
leaders to whom he expressed his deep
concerns about automatic factories. By
1954, when he published the Second
Revised Edition of The Human Use of Human
Beings, Wiener had become optimistic that
his warnings were being heeded. (1954, 162)
6. LONG-DISTANCE
COMMUNICATIONS,
TELEWORKING AND
GLOBALIZATION
Besides the automatic factory, Wiener
envisioned other ways in which informa-
tion technology could affect working con-
ditions. For example, he foresaw what
today is called ‘teleworking’ or ‘telecom-
muting’ -- doing one’s job while being a
long distance from the work site. This will
be possible, he said, because of communi-
cations technologies like telephones,
‘Ultrafaxes’, telegraph, teletype, and long-
distance communications technologies that
are bound to be invented in the future.
Performing one's job at a distance – even
thousands of miles away – is possible, said
Wiener, because
where a man’s word goes, and where
his power of perception goes, to that
point his control and in a sense his
physical existence is extended. To see
and to give commands to the whole
world is almost the same as being
everywhere. (1954, 97)
As an example, Wiener imagined an archi-
tect in Europe supervising the construction
of a building in the United States. Although
an adequate building staff would be on the
construction site in America, the architect
himself would never leave Europe:
Ultrafax gives a means by which a fac-
simile of all the documents concerned
may be transmitted in a fraction of a
second, and the received copies are
quite as good working plans as the
originals. The architect may be kept
up to date with the progress of the
work by photographic records taken
every day or several times a day, and
these may be forwarded back to him
by Ultrafax. Any remarks or advice he
cares to give...may be transmitted by
telephone, Ultrafax, or teletypewriter.
(1954, 98)
Thus long-distance communications tech-
nologies which were available even in the
early 1950s made it possible for certain
kinds of ‘teleworking’ to take place.
In addition, Wiener noted that the long
reach of such communications technolo-
gies is likely to have significant impacts
upon government. “For many millennia”, he
said, the difficulty of transmitting language
restricted “the optimum size of the state to
the order of a few million people, and gen-
erally fewer.” (1954, 91) Exceptions like
the Persian and Roman Empires were made
possible by improved means of communi-
cation, such as messengers on ‘the Royal
Road’ conveying the Royal Word across
Persia, or the dramatically improved roads
of the Roman Empire conveying the
authority of the Emperor. By the early
1950s, he noted, there already were global
communications networks made possible
by airplanes and radio technology, in addi-
tion to the telecommunications technolo-
gies mentioned above. The resulting glob-
alization of communication, he suggested,
may even move the world community
toward some kind of world government:
very many of the factors which previ-
ously precluded a World State have
been abrogated. It is even possible to
maintain that modern communica-
tion...has made the World State
inevitable. (1954, 92)
By today’s standards, the long-distance
communications technologies of the early
1950s, when Wiener published The Human
Use of Human Beings, were very slow and
clumsy. Nevertheless Wiener identified,
even then, early indications of ‘contempo-
rary’ information-ethics topics like tele-
working, job outsourcing, globalization,
and the impact of ICT on government and
world affairs.
7. DISABILITIES,
PROSTHESES AND THE
MERGING OF HUMANS
AND MACHINES
Norbert Wiener’s foundational
Information Ethics works were concerned
with possible and actual impacts of infor-
mation technology upon human values,
such as life, health, security, knowledge,
resources, opportunities, and most of all free-
dom. He focused not only upon harms and
threats to such values, but also upon bene-
fits and opportunities that information
technology could make possible. Wiener
and some colleagues, for example, used
cybernetic theory to explain two medical
problems called ‘intention tremor’ and
‘Parkinsonian tremor’. The result was the
creation of two information feedback
machines, called ‘the moth’ and ‘the bed-
bug’, to prove that the cybernetic explana-
tion of Wiener and his colleagues was cor-
rect. The machines were successful and
made a positive contribution to human
health and medicine. (1954, 163–167)
A second project of Wiener and his col-
leagues was the creation of a ‘hearing glove’
that could be worn by someone who is
totally deaf. This device was designed to
use information technology to convert
human conversation into vibration patterns
in a deaf person’s hand. These tactile pat-
terns would then be used to help the deaf
person understand human speech.
Although the project was not pursued to
completion, it did lead to the creation of
other devices which enabled persons who
were blind to find their way through a maze
of streets and buildings. (1954, 167–174)
A proposed prosthesis project that
Wiener described in The Human Use of
Human Beings (1954, 174) was an iron lung
that would be electronically attached to
damaged breathing muscles in a person’s
body and would use the patient’s own brain
to control his breathing. This project
would physically merge a person's body
with an electronic machine to create a
functioning being that is part man and part
machine.
Another project like that was the cre-
ation of a mechanical hand to replace a
hand that had been amputated. Wiener and
some Russian and American colleagues
worked together to develop such hands, some of which were created in Russia
where they “permitted some hand
amputees to go back to effective work”.
(1964, 78) Electrical action potentials in
the remaining forearm were generated by
the amputee’s brain when he tried to move
his fingers. These potentials were sensed by
electronic circuits in the mechanical hand
and used to run motors which closed and
opened the mechanical fingers. Wiener
suggested that a kind of ‘feeling’ could be
added to the artificial hand by including
electronic pressure sensors that would gen-
erate vibrations in the forearm.
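
The control path Wiener describes is a simple signal-processing loop: muscle signals in, motor commands out, and fingertip pressure fed back as vibration. The sketch below illustrates that loop in schematic form; the threshold, the signal values, and the function names are hypothetical, not engineering detail from Wiener's account.

# Minimal sketch of the prosthetic-hand loop described above: forearm action
# potentials drive open/close commands, and grip pressure is returned as vibration.
# All numbers and names are hypothetical placeholders.

def hand_controller(emg_samples, close_threshold=0.6):
    """Map sensed action-potential strength (normalized 0-1) to finger commands."""
    return ["close" if level >= close_threshold else "open" for level in emg_samples]

def feedback_vibration(grip_pressure, max_pressure=10.0):
    """Convert fingertip pressure into a vibration intensity felt in the forearm."""
    return min(1.0, grip_pressure / max_pressure)

emg = [0.1, 0.4, 0.7, 0.9, 0.5]            # the amputee tries to move the fingers
print(hand_controller(emg))                # ['open', 'open', 'close', 'close', 'open']
print(feedback_vibration(grip_pressure=4)) # 0.4: a kind of artificial 'feeling'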
Besides using prostheses to help persons
with disabilities, said Wiener, people with-
out disabilities will eventually use prosthe-
ses to give themselves significant powers
that human beings never had before:
Thus there is a new engineering of
prostheses possible, and it will involve
the construction of systems of a
mixed nature, involving both human
and mechanical parts. However, this
type of engineering need not be con-
fined to the replacement of parts we
have lost. There is a prosthesis of
parts we do not have and which we
never have had. (1964, 77)
The dramatic new powers of
man/machine beings could be used for
good purposes or for bad ones, and this is
one more example of Wiener's point about
“good and evil knocking at our door”:
Render unto man the things which
are man’s and unto the computer the
things which are the computer’s...
What we now need is an independ-
ent study of systems involving both
human and mechanical elements.
(1964, 77)
In today’s language, a being who is part
human and part machine is called a
‘cyborg’. In the 1950s, when Wiener wrote
The Human Use of Human Beings, he did not
use this word, but he did see the urgent
need to consider the ethical issues that
were bound to arise when such beings are
created.
8. ROBOT ETHICS AND
MACHINES THAT LEARN
In addition to ethical concerns about
man/machine beings, Wiener also
expressed worries about decision-making
machines. The project that originally led
him and some of his colleagues to create
the new scientific field of cybernetics during
World War Two was the development of an
anti-aircraft cannon that could ‘perceive’
the presence of an airplane, calculate its
likely trajectory, aim the cannon and fire
the shell. This project made it clear to
Wiener that humans possessed the scientif-
ic and engineering knowledge to create
decision-making machines which gather
information about the world, ‘think about’
that information, reach decisions based
upon that ‘thinking’, and then carry out the
decisions they had made.
Besides the anti-aircraft cannon, Wiener
discussed other decision-making machines,
including the checker-playing (i.e.
draughts-playing) computer of A. L.
Samuel of the IBM Corporation (1964, 19)
and various chess-playing computers (1964,
Ch. II). Samuel’s checker-playing computer
was able to reprogram itself to take account
of its own past performances in checker
games. It made adjustments in its own play-
ing strategy until it began to win more fre-
quently. Although Samuel created this
game-playing computer, it learned how to
defeat him consistently by playing games
against him. Wiener also discussed chess-
playing computers. In his day, they played
chess very poorly, although some of them
were able to learn from their ‘experiences’
and improve their playing skills to some
extent. Wiener predicted, as many of his
colleagues also did, that chess-playing com-
puters would eventually become excellent
opponents, even for chess masters.
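
The mechanism behind Samuel's result can be conveyed with a much simpler toy: a player whose 'strategy' is a set of adjustable weights, nudged after each game toward whatever the better move looked like. The sketch below is a deliberately crude stand-in, not a reconstruction of Samuel's program; the game, the features, and the update rule are assumptions for illustration only.

# Toy stand-in for a program that 'reprograms itself' from its own past games.
import random

random.seed(0)
weights = [0.0, 0.0]                       # the machine's adjustable 'strategy'

def true_value(move):
    return 2 * move[0] + 1 * move[1]       # hidden rule the machine must discover

def play_game(w):
    moves = [(random.random(), random.random()) for _ in range(5)]
    chosen = max(moves, key=lambda m: m[0] * w[0] + m[1] * w[1])
    best = max(moves, key=true_value)
    return chosen == best, chosen, best

for _ in range(2000):                      # learn by playing and adjusting
    won, chosen, best = play_game(weights)
    if not won:                            # after a loss, shift the weights toward
        for i in range(2):                 # the features of the better move
            weights[i] += 0.01 * (best[i] - chosen[i])

wins = sum(play_game(weights)[0] for _ in range(500))
print(wins, "wins out of 500")             # should far exceed the one-in-five chance level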
Although machines that play checkers or
chess do not pose major ethical challenges,
they nevertheless demonstrate the fact that
computerized devices can be designed to
learn from their ‘experiences’, make deci-
sions, and act on those decisions. Wiener
noted that, in the 1950s and 1960s, both
the United States and the Soviet Union –
following John von Neumann’s view that
war can be seen as a kind of game (1954,
181) – were using computers to play war
games in order to prepare themselves for
possible nuclear war with each other. He
was most concerned that one or the other
of the two nuclear powers would come to
rely, unwisely, upon war-game machines
that learn and reprogram themselves:
[Man] will not leap in where angels
fear to tread, unless he is prepared to
accept the punishment of the fallen
angels. Neither will he calmly trans-
fer to the machine made in his own
image the responsibility for his
choice of good and evil, without con-
tinuing to accept a full responsibility
for that choice. (1954, 184)
the machine...which can learn and
can make decisions on the basis of its
learning, will in no way be obliged to
make such decisions as we should
have made, or will be acceptable to
us. For the man who is not aware of
this, to throw the problem of his
responsibility on the machine,
whether it can learn or not, is to cast
his responsibility to the winds, and to
find it coming back seated on the
whirlwind. (1954, 185)
War and business are conflicts
resembling games, and as such, they
may be so formalized as to constitute
games with definite rules. Indeed, I
have no reason to suppose that such
formalized versions of them are not
already being established as models
to determine the policies for pressing
the Great Push Button [of nuclear
war]...(1964, 31–32.)
If machines that play ‘war games’ are used
by governments to plan for war, or even to
decide when to “push the nuclear button”,
said Wiener, the human race may not sur-
vive the consequences. Woe to us humans,
if we allow machines to make our decisions
for us in situations where human judgment
and responsibility are crucial to a good
outcome. Decision-making machines must
be governed by ethical principles that
humans select. But if such machines learn
from their past activities, how can we
humans be sure that they will obey the eth-
ical principles that we would have used to
make those decisions? Even in 1950, there-
fore, it was clear to Wiener that the world
would need to develop a genuine robot
ethics -- not just science-fiction ‘laws of
robotics’ from a writer like Isaac Asimov
(1950), but genuine rules to govern the
behavior of decision-making machines
that learn. Today, Wiener would not be
surprised to hear that there exists a branch
of software engineering to deal with robot
ethics. (See Eichmann, 1994; and Floridi &
Sanders, 2001.)
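
One way to read Wiener's demand in present-day terms is that a learned decision procedure should sit inside a layer of human-chosen constraints that it cannot learn its way around. The sketch below illustrates that separation of roles; the rule names, the action format, and the veto logic are hypothetical, offered as a sketch of the idea rather than as an account of how actual robot-ethics software works.

# Sketch of a human-chosen ethical guard wrapped around a learned decision-maker.
# Rules, actions and the stand-in 'learned' policy are hypothetical placeholders.

FORBIDDEN = {"launch_weapons", "deceive_operator"}       # rules humans select
REQUIRES_HUMAN_APPROVAL = {"shut_down_power_grid"}

def ethically_governed(decide):
    """Wrap a (possibly self-modifying) policy so fixed rules always apply."""
    def guarded(state):
        action = decide(state)
        if action in FORBIDDEN:
            return "refuse"                              # hard veto, never learned away
        if action in REQUIRES_HUMAN_APPROVAL:
            return "defer_to_human"                      # responsibility stays with people
        return action
    return guarded

def learned_policy(state):                               # stand-in for a machine that 'learns'
    return state["proposed_action"]

policy = ethically_governed(learned_policy)
print(policy({"proposed_action": "reroute_shipment"}))      # reroute_shipment
print(policy({"proposed_action": "launch_weapons"}))        # refuse
print(policy({"proposed_action": "shut_down_power_grid"}))  # defer_to_human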
9. ARTIFICIAL
INTELLIGENCE AND
PERSONAL IDENTITY
Wiener’s cybernetic analyses of living
organisms – including human beings – as
well as his consideration of learning
machines, led him to comment on a variety
of ideas that, today, are associated with AI
(artificial intelligence). He did not have a
rigorously worked out theory of AI, and
many of his comments were guesses or
speculations; but, taken together, they con-
stitute a significant perspective on human
nature and intelligence; and they have pro-
found implications for the concept of per-
sonal identity.
Wiener would consider many of today’s
AI questions – like whether machines
could be ‘alive’, or ‘intelligent’, or ‘purpose-
ful’ – to be essentially semantic questions
using words that are far too vague to be
used for scientific purposes:
I want to interject the semantic
point that such words as life, pur-
pose, and the soul are grossly inade-
quate to precise scientific thinking.
These terms have gained their signif-
icance through our recognition of
the unity of a certain group of phe-
nomena, and do not in fact furnish us
with any adequate basis to character-
ize this unity. Whenever we find a
new phenomenon which partakes to
some degree of the nature of those
which we have already termed ‘living
phenomena,’ but which does not
conform to all the associated aspects
which define the term ‘life,’ we are
faced with the problem whether to
enlarge the word ‘life’ so as to include
them, or to define it in a more
restrictive way so as to exclude them.
(1954, 31)
Now that certain analogies of behav-
ior are being observed between the
machine and the living organism, the
problem as to whether the machine
is alive or not is, for our purposes,
semantic and we are at liberty to
answer it one way or the other as best
suits our convenience. (1954, 32)
Wiener thought of both human beings and
machines as physical entities whose behav-
ior and performance can be explained by
the interaction of their parts with each
other and with the outside world. He some-
times spoke of human beings as a “special
sort of machine”. (e.g., 1954, 79) In the case
of humans, the parts are atoms that are
combined in an exquisitely complex pat-
tern to form a living person. The parts of a
non-human machine, on the other hand,
are much larger and less finely structured,
being simply shaped ‘lumps’ of steel, cop-
per, plastic, silicon, and so on.
Nevertheless, according to Wiener, it is
physical structure that accounts for the
‘intellectual capacities’ of both humans and
machines:
Cybernetics takes the view that the struc-
ture of the machine or of the organism is
an index of the performance that may be
expected from it....Theoretically, if we
could build a machine whose
mechanical structure duplicated
human physiology, then we could
have a machine whose intellectual
capacities would duplicate those of
human beings. (1954, 57, italics in the
original text)
Consistent with this view, Wiener regularly
analyzed human intellectual and psycholog-
ical phenomena, both normal and patho-
logical, by applying cybernetic theory to
the various parts of a person’s body. In the
early 1960s, because of the relatively large
size of electronic components (compared
to the neurons in a person’s brain), and
because of the tendency of electronic com-
ponents to generate much heat, Wiener
expressed doubt that humans would ever
create a machine as complex and sophisti-
cated as a person's body or nervous system.
(1964, 76) Today, perhaps, he would change
his mind, given recent progress in microcir-
cuit development.
Wiener’s view that human beings are
sophisticated physical entities whose parts
are atoms enabled him to speculate, in
Chapter V of The Human Use of Human
Beings, about the possibility of creating a
complex mathematical formula that would
completely describe the intimate structure
of a person’s body. If one were able, he said,
to send this formula across telephone lines,
or over some other long-distance commu-
nications network, and if the formula
enabled someone or some device at the
other end to ‘reassemble’ the person –
atom by atom – then it would be possible
for that person to travel long distances
instantly via telephone or some other com-
munications network. Today, Wiener’s
physiological account of human nature,
including human intellectual and emotion-
al capacities, is widely shared by many sci-
entists and other thinkers, including biolo-
gists, physicians, psychologists, and
philosophers, to name but a few examples.
When this view is combined with Wiener’s
ideas about electronic ‘traveling’ over com-
munications networks, a number of chal-
lenging questions arise regarding a human
being's personal identity. Wiener himself
did not explore these questions, but they
are worth mentioning here:
1. ‘Traveling’ in this manner would require
that a person be ‘disassembled’ into
atoms at the starting point and
‘reassembled’ at the destination. Since
the original atoms themselves do not
travel across the network (only the
mathematical formula travels), new
atoms must be used at the destination.
Does this mean that the traveler is gen-
tly ‘killed’ at the starting point and then
carefully ‘resurrected’ at the destina-
tion?
2. What if the person’s identity formula
somehow gets scrambled while traveling
over the network? The ‘reassembled’
person at the destination could be sig-
nificantly different from the original
one. Who is this new person? Where
did the original person go? Could the
new person, on behalf of the original
person, sue someone for murder? –
manslaughter? – bodily harm? – breach
of contract? Would all these issues
become moot points if the original per-
son is simply ‘reassembled’ correctly at
the original starting point? Could the
‘new person’ at the other end then be
killed because he or she was a ‘mistake’?
3. What if a person’s identity formula is
sent across the network, but his or her
body is not disassembled? The ‘traveler’,
in other words, stays home and remains
alive just as he or she was? If a person is
nevertheless ‘reassembled’ at the desti-
nation, using the formula that was sent
across the network, who is that new per-
son? He or she would have all the mem-
ories, knowledge, personality traits, and
so on, of the original person. Indeed, he
or she would have a body – atom for
atom – identical to that of the original
person. This ‘new’ person would be more
than a clone of the original, since a
‘clone’ in today’s sense of the term would
start out as a baby, and not be ‘reassem-
bled’ as an adult. The ‘new’ person also
would not just be the twin sibling of the
original person, since such twins have
different memories and different past
experiences. The new person would be a
perfect copy of the original one, whose
knowledge and memories would then
begin to diverge more and more from
the original person’s as time goes on.
4. What constitutes someone’s unique per-
sonal identity? Perhaps it is the mathe-
matical formula that fully describes his or
her physiology at any given moment. But
this would mean that someone’s personal
identity changes from moment to
moment as his or her body changes. This
conflicts with our usual view that a person
keeps his or her identity over a lifetime.
5. Suppose someone stores away complete
identity formulas corresponding to my
body on my tenth birthday, my twenti-
eth birthday, and my thirtieth birthday.
Then, on my fortieth birthday, he or she
‘reassembles’ all three past versions of
me. Who is ‘the real me’? Are they all
me? Who can claim to own my proper-
ty? Who gets to go home to my wife and
live with her? Why?
6. If a ‘life insurance’ organization stores
away one of my personal identity formu-
las and always ‘reassembles’ me anew
when I die, does this mean that I have
been granted something approaching
eternal life? If the ‘resurrected’ me
always has the same original memories,
knowledge, personality, etc., does this
mean that I get to relive part of my life
many different times, taking different
paths? – marrying different partners? –
holding down different jobs?
10. CONCLUSION
Norbert Wiener was a scientist, an engi-
neer and a mathematician; but he also was
a philosopher with the vision to see the
enormous social and ethical implications of
the information and communication tech-
nologies that he and his colleagues were
inventing. His creative tour de force, The
Human Use of Human Beings (1950, 1954), was
the first book-length publication in
Information Ethics; and it instantly created
a solid foundation for that subject as a field
of academic research. Wiener’s many con-
tributions to this field – in books, articles,
lectures and interviews – not only estab-
lished him as its ‘founding father’, they
continue to provide a rich source of ideas
and issues to inspire Information Ethics
thinkers for many years to come.
REFERENCES
Asimov, I. (1950) I, Robot, Gnome Press.
Eichmann, D. (1994) Ethical Web Agents (accessed 2 April 2004) http://archive.ncsa.uiuc.edu/SDG/IT94/Proceedings/Agents/eichmann.ethical/eichmann.html
Floridi, L. and Sanders, J. W. (2001) On the Morality of Artificial Agents, CEPE 2001, Computer Ethics: Philosophical Enquiry, Lancaster, UK, 14–16 December (accessed 2 April 2004) http://www.wolfson.ox.ac.uk/~floridi/pdf/maa.pdf
Maner, W. (1999) Heuristic methods for computer ethics, Keynote Speech, AICEC99, Australian Institute of Computer Ethics Conference, Melbourne, Australia. (First published in J. Moor and T. W. Bynum (eds.) (2003), Cyberphilosophy: The Intersection of Computing and Philosophy, Blackwell.)
Moor, J. H. (1985) What is computer ethics? In: T. W. Bynum (ed.), Computers and Ethics, Blackwell, pp. 263–275. (Published as the October 1985 issue of Metaphilosophy.)
Wiener, N. (1950, 1954) The Human Use of Human Beings: Cybernetics and Society, Houghton Mifflin; 2nd edn Doubleday Anchor. (In the present essay, all quotations from this book are from the Second Revised Edition.)
Wiener, N. (1964) God and Golem, Inc., MIT Press.
CORRESPONDING AUTHOR
Terrell Ward Bynum
Research Center on Computing and Society, Southern Connecticut State University, New Haven, CT 06515, USA
Email: computerethics@earthlink.net
Maner, W. (1999) Heuristic methods for computer ethics, Keynote Speech, AICEC99, Australian Institute of Computer Ethics Conference, Melbourne, Australia. (First published in J. Moor and T. W. Bynum (eds.) (2003), Cyberphilosophy: The Intersection of Computing and Philosophy, Blackwell.)