DOI: 10.1111/meta.12583
ORIGINAL ARTICLE
Flourishing Ethics and identifying ethical values to
instill into artificially intelligent agents
Nesibe Kantar1 | Terrell Ward Bynum2
This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.
© 2022 The Authors. Metaphilosophy published by Metaphilosophy LLC and John Wiley & Sons Ltd.
1Center for Computing and Social Responsibility, De Montfort University, Leicester, United Kingdom
2Department of Philosophy, Southern Connecticut State University, New Haven, USA
Correspondence
Terrell Ward Bynum, Information Ethics Institute, 96 Glenview Terrace, New Haven, CT 06515, USA
Email: computerethics@mac.com
Abstract
The present paper uses a Flourishing Ethics analysis to address the question of which ethical values and principles should be “instilled” into artificially intelligent agents. This is an urgent question that is still being asked seven decades after philosopher/scientist Norbert Wiener first asked it. An answer is developed by assuming that human flourishing is the central ethical value, which other ethical values, and related principles, can be used to defend and advance. The upshot is that Flourishing Ethics can provide a common underlying ethical foundation for a wide diversity of cultures and communities around the globe; and the members of each specific culture or community can add their own specific cultural values—ones which they treasure, and which help them to make sense of their moral lives.
KEYWORDS
artificial intelligence, cybernetics, Flourishing Ethics, machine decisions, machine learning, Norbert Wiener

The hour is very late, and the choice of good and evil knocks at our door.
— Norbert Wiener (1954, 186)
1 | INTRODUCTION
In the past, major technological and scientific revolutions have always had significant social and ethical consequences. This is certainly true today, because information science and information technologies are rapidly—and profoundly—changing the world socially, politically, and even philosophically. For example, information science and related technologies have led to new philosophical conceptions of being, life, thinking, knowledge, consciousness, emotions, society, good and evil, and the ultimate nature of the universe—to name just a few examples. (For numerous examples that are discussed in detail, see Floridi 2016; Himma and Tavani 2008; and van den Hoven and Weckert 2008.) The present essay, however, focuses mainly upon just one of the many urgent ethical questions of the Information Age: namely, What general ethical values and principles should be instilled into artificially intelligent agents like robots, softbots, and sophisticated computer programs?
The first person known to recognize the urgency and importance of this question was MIT scientist/philosopher Norbert Wiener, who often expressed related concerns in speeches and writings from the 1940s (while he was creating the new science of cybernetics) to the early 1960s (see, e.g., Wiener 1948, 1950, 1954, 1960, and 1964). Section 2 below describes the circumstances in which Wiener first raised this key question; then, section 3 considers the challenge of ethically integrating such machines into the fabric of society. Given all that has happened in the Information Revolution in the past seventy years, it has become more and more urgent to understand how nonhuman agents can be integrated, safely and ethically, into societies and cultures worldwide. This monumental challenge is discussed below, together with a number of related challenges, because we believe that Flourishing Ethics can help to address them effectively.1 Finally, in section 4, we use key Flourishing Ethics ideas to identify general ethical values and principles that ought to be “instilled” into artificially intelligent agents.

1 For a very different and important article on the same topic, see Floridi and Cowls 2021.
2 | MACHINES THAT DECIDE AND LEARN
During World War II, as part of the American war effort, Wiener worked with colleagues to
develop a better antiaircraft cannon. Military airplanes had become so fast and maneuverable
that human eyes and muscles were less able to control antiaircraft cannons effectively. Wiener
and his colleagues decided to use radar (which was still being improved) to spot and identify
enemy airplanes quickly. And they also decided to use electronic computers (which Wiener
and others were in the process of creating) to perform the following tasks: (1) gather information about an incoming enemy plane, (2) determine the plane's likely trajectory, (3) quickly and
precisely aim the cannon, and (4) fire the cannon at exactly the right time to cause the explosive
shell and the plane to come together in midair. All these tasks were to be carried out by the
cannon itself without human intervention.
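The structure of such a system can be made concrete with a small, purely illustrative sketch in Python. This is not Wiener's design or period code: the Track record, the constant-velocity trajectory model, the shell speed, and the matching tolerance below are hypothetical simplifications, introduced only to show the closed "observe, predict, aim, fire" cycle just described.

# Purely illustrative sketch: an "observe, predict, aim, fire" cycle using
# straight-line extrapolation of two radar fixes. All names and numbers
# are hypothetical simplifications, not Wiener's actual fire-control design.

from dataclasses import dataclass


@dataclass
class Track:
    """Two successive radar fixes on a target, taken one second apart (metres)."""
    x0: float
    y0: float
    x1: float
    y1: float


def predict(track: Track, t: float) -> tuple[float, float]:
    """Extrapolate the target's position t seconds after the later fix,
    assuming constant velocity (the simplest possible trajectory model)."""
    vx = track.x1 - track.x0  # metres per second, inferred from the two fixes
    vy = track.y1 - track.y0
    return track.x1 + vx * t, track.y1 + vy * t


def aim_and_fire(track: Track, shell_speed: float) -> tuple[float, float, float]:
    """Search candidate flight times for one at which a shell fired now from
    the gun (at the origin) meets the predicted target position; return the
    aim point and the time of flight."""
    for tenths in range(1, 600):  # consider up to 60 seconds of flight
        t = tenths / 10.0
        px, py = predict(track, t)
        distance = (px ** 2 + py ** 2) ** 0.5
        if abs(distance - shell_speed * t) < shell_speed * 0.05:
            return px, py, t  # intercept found: aim here and fire now
    raise ValueError("target out of range")


# Usage: a target approaching at 200 m/s; the loop decides where and when to
# fire with no human in the loop, which is exactly the property that concerned Wiener.
track = Track(x0=8000.0, y0=3000.0, x1=7800.0, y1=3000.0)
print(aim_and_fire(track, shell_speed=850.0))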
To advance this project, Wiener developed a new applied science, which was focused especially upon “control and communication in the animal and the machine” (the subtitle of his
1948 book, Cybernetics). He decided to name his new science “cybernetics,” based upon the
Greek word for the pilot of a ship.
When the war ended, the desired new antiaircraft cannon was still incomplete; but the project nevertheless yielded—unexpectedly—technological breakthroughs that would change the world significantly in just a few decades. Even while working on that project, Wiener had realized that machines soon would be able to make decisions, carry them out, and learn from their own past activities; so in his book Cybernetics: Or Control and Communication in the Animal and the Machine, Wiener noted: “Long before Nagasaki and the public awareness of the atomic bomb, it had occurred to me that we were here in the presence of another potentiality of unheard-of importance for good and for evil” (1948, 36).
3 | INTEGRATING ETHICAL MACHINES INTO THE “FABRIC” OF SOCIETY
In Cybernetics, Wiener made several comments about future ethical impacts of information
technology, and some of his friends were intrigued by those comments. His friends urged him
to say much more, in future writings, about likely ethical impacts of the new information
technology that he and his colleagues had just created (Conway and Siegelman 2005). Quickly
taking their advice, Wiener published a book in 1950 containing a number of predictions and
examples about future social and ethical impacts of information science and information technology. He called the book The Human Use of Human Beings: Cybernetics and Society. In
chapter I, he said: “That we shall have to change many details of our mode of life in the face of
the new machines is certain; but these machines are secondary, in all matters of value that concern us, to the proper evaluation of human beings for their own sake.…The message of this book as well as its title is the human use of human beings” (Wiener 1950, 2; italics in the original). Wiener predicted in The Human Use of Human Beings that future societies would include
machines that are integrated into the social fabric: “It is the thesis of this book that society can
only be understood through a study of the messages and the communication facilities which
belong to it; and that in the future development of these messages and communications facilities, messages between man and machines, between machines and man, and between machine and machine, are destined to play an ever-increasing part” (1950, 9). In that book, and in later relevant publications (for example, Wiener 1954 and 1960), Wiener frequently expressed, quite strongly, his concern about the possibility of decision-making machines replacing human decision makers. Integrating such machines into the social fabric could become very dangerous, he
noted, because there are many ways in which machine decisions can be inaccessible, or faulty,
or otherwise inappropriate. Consider just three of his examples:
1. Because computerized machines can make decisions and carry them out, thousands
of times faster than humans can, people may be unable to watch over them as the
machines decide and “act”—and this applies even to machines that cannot learn. So,
Wiener noted, “though machines are theoretically subject to human criticism, such
criticism may be ineffective until long after it is relevant. To be effective in warding
off disastrous consequences, our understanding of our man-made machines should in
general develop pari passu [at the same rate] with the performance of the machine. By
the very slowness of our human actions, our effective control of the machines may be
nullified. By the time we are able to react to the information conveyed by our senses
and stop the car we are driving, it may already have run head on into a wall” (1960,
1355).
2. The world is very complex, and so when a person wants, or needs, to make a decision, it typically is difficult or impossible to understand fully the circumstances and possible outcomes of a decision. For this reason, someone may take a quick-and-easy way out by allowing a machine to make the decision, rather than making it himself. Wiener noted, however, that by leaving the decision to the machine such a person “will put himself sooner or later in the position of the father in W. W. Jacobs' The Monkey's Paw, who has wished for a hundred pounds, only to find at his door the agent of the company for which his son works, tendering him one hundred pounds as a consolation for his son's death at the factory” (1954, 185).
From examples like this, Wiener concluded that “[a]ny machine constructed for the purpose of making decisions, if it does not possess the power of learning, will be completely literal-minded. Woe to us if we let it decide our conduct, unless we have previously examined the laws of its action, and know fully that its conduct will be carried out on principles acceptable to us!” (1954, 185).
3. On the other hand, a machine that can learn might also make very harmful decisions, especially if it has learned things that its maker or programmer did not know about or anticipate. As Wiener explains, a machine “which can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us. For the man who is not aware of this, to throw the problem of his responsibility on the machine, whether it can learn or not, is to cast his responsibility to the winds, and to find it coming back seated on the whirlwind” (1954, 185).
Soon after the Second World War, both the United States and the Soviet Union had nuclear weapons, and Wiener heard rumors that both countries were using John von Neumann's
game theory and related computer technology to provide “war games” to human military
decision makers, for practice and educational purposes. Wiener knew that von Neumann's
game theory was ill suited for that purpose, and he recommended that anyone dealing with a
“manifestation of original power, like the splitting of the atom,” should do so “with fear and
trembling”; he should not “leap in where angels fear to tread, unless he is prepared to accept
the punishment of the fallen angels. Neither will he calmly transfer to the machine made in
his own image the responsibility for his choice of good and evil, without continuing to accept
a full responsibility for that choice” (1954, 184).
During that same time, “bureaus and vast laboratories and corporations” too were
considering the use of game-theory computers to help them win against their competitors.
Wiener's famous comment, at the time, about dangers that threatened the world was: “The
hour is very late, and the choice of good and evil knocks at our door” (1954, 186).
4 | IDENTIFYING ETHICAL PRINCIPLES FOR ARTIFICIALLY INTELLIGENT AGENTS
Today, nearly seven decades after Wiener's famous “knocks at our door” comment, desire for artificially intelligent agents is a significant worldwide phenomenon. Nations, corporations, public institutions, small businesses, and individuals are seeking AI devices to help them achieve their goals. This is happening in spite of the fact that the world still faces monumental dangers from inappropriate decisions by AI agents: for example, risks concerning the use of nuclear weapons (at least nine countries now have them), global warming, worldwide pandemics, political extremism, and a growing number of risks from information technology beyond those identified by Wiener (for example, risks from invasions of privacy, computer malware, identity theft, online bullying, and on, and on).
Millions of information technology devices, today, are making decisions and carrying them
out—for example, medical robots perform surgery, bank computers decide who qualifies for a loan, satellites in orbit perform various tasks, “rovers” on Mars send data back to Earth, cellphone apps with softbots do various jobs, and so on. Nevertheless, more than seventy years after Wiener first identified the challenge of determining which ethical principles and values should be instilled into computerized decision-making agents, tremendous challenges remain.
Even the fundamental question remains about which basic ethical principles and values ought
to be instilled into artificially intelligent agents and why. We believe that a Flourishing Ethics
approach to such questions can help to identify the best answers.
In a recent article (Kantar and Bynum 2021) we explained that Flourishing Ethics is not a single
ethical theory but rather a set of similar ethical theories with “family resemblance” relationships.
All the Flourishing Ethics theories, however, take human flourishing to be the central ethical value
that other ethical values support and defend. Of course, humans will not be flourishing if their
health is poor, or they are being harmed by other people or by damaging forces of nature (such
as floods, violent storms, wildfires, wild animals, terrible diseases, and so on); so all Flourishing
Ethics theories assume that this is true. In addition, all Flourishing Ethics theories assume that
human beings share a common nature. In Kantar and Bynum 2021 we focused upon that common nature in order to identify ethical values and principles that are needed to create and sustain human flourishing. Applying that process yielded the following results:
1. Autonomy—the ability to make significant choices and carry them out—is a necessary condition for human flourishing. For example, if someone is in prison, or enslaved, or severely
pressured and controlled by others, such a person is not flourishing.
2. To flourish, people need to be part of a supportive community. Knowledge and science,
wisdom and ethics, justice and the law are all social achievements. And in addition, psychologically, humans need each other to avoid loneliness and feelings of isolation.
3. The community should provide—as effectively as it can—security, knowledge, opportunities, and resources. Without these, a person might be able to make choices, but nearly all those
choices might be bad ones, and a person could not flourish under those conditions.
4. To maximize flourishing within a community, justice must prevail. Consider the traditional
distinction between “distributive justice” and “retributive justice”: if goods and benefits are
unjustly distributed, some people will be unfairly deprived, and flourishing will not be maximized in that community. Similarly, if punishment is unjustly meted out, flourishing, again,
will not be maximized.
5. Respect—including mutual respect between persons—plays a significant role in creating and maintaining human flourishing. Lack of respect from one's fellow human beings can generate hate, jealousy, and other very negative emotions, causing harmful conflicts between individuals—even wars within and between countries. Self-respect also is important for human flourishing in order to preserve human dignity and minimize the harmful effects of shame, self-disappointment, and feelings of worthlessness.
In Bynum 2006 and in Kantar and Bynum 2021 we argued that, given a universally shared human nature, and taking human flourishing to be the central ethical value, the considerations described above can serve as a common underlying ethical foundation for a wide diversity of cultures and communities around the globe. This is possible because each culture or community can add, to the Flourishing Ethics “foundation,” specific cultural values which they treasure, and which help them to make sense of their moral lives.
REFERENCES
Bynum, Terrell Ward. 2006. “Flourishing Ethics.” Ethics and Information Technology 8, no. 4: 157–73.
Conway, Flo, and Jim Siegelman. 2005. Dark Hero of the Information Age: In Search of Norbert Wiener, Father of Cybernetics. New York: Basic Books.
Floridi, Luciano, ed. 2016. The Routledge Handbook of Philosophy of Information. London: Routledge.
Floridi, Luciano, and Josh Cowls. 2021. “A Unified Framework of Five Principles for AI in Society.” In Ethics, Governance, and Policies in Artificial Intelligence, edited by Luciano Floridi, 5–17. Heidelberg: Springer.
Himma, Kenneth Einar, and Herman T. Tavani, eds. 2008. The Handbook of Information and Computer Ethics. New York: John Wiley and Sons.
Kantar, Nesibe, and Terrell Ward Bynum. 2021. “Global Ethics for the Digital Age—Flourishing Ethics.” Journal of Information, Communication and Ethics in Society 19, no. 3: 329–44.
van den Hoven, Jeroen, and John Weckert, eds. 2008. Information Technology and Moral Philosophy. Cambridge: Cambridge University Press.
Wiener, Norbert. 1948. Cybernetics: Or Control and Communication in the Animal and the Machine. New York: Technology Press.
Wiener, Norbert. 1950. The Human Use of Human Beings: Cybernetics and Society. Boston: Houghton Mifflin.
Wiener, Norbert. 1954. The Human Use of Human Beings: Cybernetics and Society. Second Edition Revised. New York: Doubleday Anchor.
Wiener, Norbert. 1960. “Some Moral and Technical Consequences of Automation.” Science 131: 1355–58.
Wiener, Norbert. 1964. God & Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion. Cambridge, Mass.: MIT Press.