Metaphilosophy. 2022;00:1–6. wileyonlinelibrary.com/journal/meta
The hour is very late, and the choice of good and evil knocks at our door.
Norbert Wiener (1954, 186)
DOI: 10.1111/meta.12583
ORIGINAL ARTICLE
Flourishing Ethics and identifying ethical values to
instill into artificially intelligent agents
Nesibe Kantar1 | Terrell Ward Bynum2
This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.
© 2022 The Authors. Metaphilosophy published by Metaphilosophy LLC and John Wiley & Sons Ltd.
1Center for Computing and Social Responsibility, De Montfort University, Leicester, United Kingdom
2Department of Philosophy, Southern Connecticut State University, New Haven, USA
Correspondence
Terrell Ward Bynum, Information Ethics Institute, 96 Glenview Terrace, New Haven, CT, 06515, USA
Email: computerethics@mac.com
Abstract
The present paper uses a Flourishing Ethics analysis to address the question of which ethical values and principles should be “instilled” into artificially intelligent agents. This is an urgent question that is still being asked seven decades after philosopher/scientist Norbert Wiener first asked it. An answer is developed by assuming that human flourishing is the central ethical value, which other ethical values, and related principles, can be used to defend and advance. The upshot is that Flourishing Ethics can provide a common underlying ethical foundation for a wide diversity of cultures and communities around the globe; and the members of each specific culture or community can add their own specific cultural values—ones which they treasure, and which help them to make sense of their moral lives.
KEYWORDS
artificial intelligence, cybernetics, Flourishing Ethics, machine decisions, machine learning, Norbert Wiener
1 | INTRODUCTION
In the past, major technological and scientific revolutions have always had significant social and ethical consequences. This is certainly true today, because information science and information technologies are rapidly—and profoundly—changing the world socially, politically, and even philosophically. For example, information science and related technologies have led to new philosophical conceptions of being, life, thinking, knowledge, consciousness, emotions, society, good and evil, and the ultimate nature of the universe—to name just a few examples. (For numerous examples that are discussed in detail, see Floridi 2016; Himma and Tavani 2008; and van den Hoven and Weckert 2008.) The present essay, however, focuses mainly upon just one of the many urgent ethical questions of the Information Age: namely, What general ethical values and principles should be instilled into artificially intelligent agents like robots, softbots, and sophisticated computer programs?
The first person known to recognize the urgency and importance of this question was MIT scientist/philosopher Norbert Wiener, who often expressed related concerns in speeches and writings from the 1940s (while he was creating the new science of cybernetics) to the early 1960s (see, e.g., Wiener 1948, 1950, 1954, 1960, and 1964). Section 2 below describes the circumstances in which Wiener first raised this key question; then, section 3 considers the challenge of ethically integrating such machines into the fabric of society. Given all that has happened in the Information Revolution in the past seventy years, it has become more and more urgent to understand how nonhuman agents can be integrated, safely and ethically, into societies and cultures worldwide. This monumental challenge is discussed below, together with a number of related challenges, because we believe that Flourishing Ethics can help to address them effectively.1 Finally, in section 4, we use key Flourishing Ethics ideas to identify general ethical values and principles that ought to be “instilled” into artificially intelligent agents.
2 | MACHINES THAT DECIDE AND LEARN
During World War II, as part of the American war effort, Wiener worked with colleagues to develop a better antiaircraft cannon. Military airplanes had become so fast and maneuverable that human eyes and muscles were less able to control antiaircraft cannons effectively. Wiener and his colleagues decided to use radar (which was still being improved) to spot and identify enemy airplanes quickly. And they also decided to use electronic computers (which Wiener and others were in the process of creating) to perform the following tasks: (1) gather information about an incoming enemy plane, (2) determine the plane's likely trajectory, (3) quickly and precisely aim the cannon, and (4) fire the cannon at exactly the right time to cause the explosive shell and the plane to come together in midair. All these tasks were to be carried out by the cannon itself without human intervention.
To advance this project, Wiener developed a new applied science, which was focused especially upon “control and communication in the animal and the machine” (the subtitle of his 1948 book, Cybernetics). He decided to name his new science “cybernetics,” based upon the Greek word for the pilot of a ship.
When the war ended, the desired new antiaircraft cannon was still incomplete; but the project nevertheless yielded—unexpectedly—technological breakthroughs that would change the world significantly in just a few decades. Even while working on that project, Wiener had realized that machines soon would be able to make decisions, carry them out, and learn from their own past activities; so in his book Cybernetics: Or Control and Communication in the Animal and the Machine, Wiener noted: “Long before Nagasaki and the public awareness of the atomic bomb, it had occurred to me that we were here in the presence of another potentiality of unheard-of importance for good and for evil” (1948, 36).
1 For a very different and important article on the same topic, see Floridi and Cowls 2021.
3 | INTEGRATING ETHICAL MACHINES INTO THE “FABRIC” OF SOCIETY
In Cybernetics, Wiener made several comments about future ethical impacts of information technology, and some of his friends were intrigued by those comments. His friends urged him to say much more, in future writings, about likely ethical impacts of the new information technology that he and his colleagues had just created (Conway and Siegelman 2005). Quickly taking their advice, Wiener published a book in 1950 containing a number of predictions and examples about future social and ethical impacts of information science and information technology. He called the book The Human Use of Human Beings: Cybernetics and Society. In chapter I, he said: “That we shall have to change many details of our mode of life in the face of the new machines is certain; but these machines are secondary, in all matters of value that concern us, to the proper evaluation of human beings for their own sake. The message of this book as well as its title is the human use of human beings” (Wiener 1950, 2; italics in the original). Wiener predicted in The Human Use of Human Beings that future societies would include machines that are integrated into the social fabric: “It is the thesis of this book that society can only be understood through a study of the messages and the communication facilities which belong to it; and that in the future development of these messages and communications facilities, messages between man and machines, between machines and man, and between machine and machine, are destined to play an ever-increasing part” (1950, 9). In that book, and in later relevant publications (for example, Wiener 1954 and 1960), Wiener frequently expressed, quite strongly, his concern about the possibility of decision-making machines replacing human decision makers. Integrating such machines into the social fabric could become very dangerous, he noted, because there are many ways in which machine decisions can be inaccessible, or faulty, or otherwise inappropriate. Consider just three of his examples:
1. Because computerized machines can make decisions and carry them out, thousands of times faster than humans can, people may be unable to “watch over them as the machines decide and act”—and this applies even to machines that cannot learn. So, Wiener noted, “though machines are theoretically subject to human criticism, such criticism may be ineffective until long after it is relevant. To be effective in warding off disastrous consequences, our understanding of our man-made machines should in general develop pari passu [at the same rate] with the performance of the machine. By the very slowness of our human actions, our effective control of the machines may be nullified. By the time we are able to react to the information conveyed by our senses and stop the car we are driving, it may already have run head on into a wall” (1960, 1355).
2. The world is very complex, and so when a person wants, or needs, to make a decision, it typically is difficult or impossible to understand fully the circumstances and possible outcomes of a decision. For this reason, someone may take a quick-and-easy way out by allowing a machine to make the decision, rather than making it himself. Wiener noted, however, that by leaving the decision to the machine such a person “will put himself sooner or later in the position of the father in W. W. Jacobs' The Monkey's Paw, who has wished for a hundred pounds, only to find at his door the agent of the company for which his son works, tendering him one hundred pounds as a consolation for his son's death at the factory” (1954, 185). From examples like this, Wiener concluded that “[a]ny machine constructed for the purpose of making decisions, if it does not possess the power of learning, will be completely literal-minded. Woe to us if we let it decide our conduct, unless we have previously examined the laws of its action, and know fully that its conduct will be carried out on principles acceptable to us!” (1954, 185).
3. On the other hand, a machine that can learn might also make very harmful decisions, especially if it has learned things that its maker or programmer did not know about or anticipate. As Wiener explains, a machine “which can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us. For the man who is not aware of this, to throw the problem of his responsibility on the machine, whether it can learn or not, is to cast his responsibility to the winds, and to find it coming back seated on the whirlwind” (1954, 185).
Soon after the Second World War, both the United States and the Soviet Union had nuclear weapons, and Wiener heard rumors that both countries were using John von Neumann's game theory and related computer technology to provide “war games” to human military decision makers, for practice and educational purposes. Wiener knew that von Neumann's game theory was ill suited for that purpose, and he recommended that anyone dealing with a “manifestation of original power, like the splitting of the atom,” should do so “with fear and trembling”; he should not “leap in where angels fear to tread, unless he is prepared to accept the punishment of the fallen angels. Neither will he calmly transfer to the machine made in his own image the responsibility for his choice of good and evil, without continuing to accept a full responsibility for that choice” (1954, 184).
During that same time, “bureaus and vast laboratories and corporations” too were considering the use of game-theory computers to help them win against their competitors. Wiener's famous comment, at the time, about dangers that threatened the world was: “The hour is very late, and the choice of good and evil knocks at our door” (1954, 186).
4 | IDENTIFYING ETHICAL PRINCIPLES FOR ARTIFICIALLY INTELLIGENT AGENTS
Today, nearly seven decades after Wiener's famous “knocks at our door” comment, desire for artificially intelligent agents is a significant worldwide phenomenon. Nations, corporations, public institutions, small businesses, and individuals are seeking AI devices to help them achieve their goals. This is happening in spite of the fact that the world still faces monumental dangers from inappropriate decisions by AI agents: for example, risks concerning the use of nuclear weapons (at least nine countries now have them), global warming, worldwide pandemics, political extremism, and a growing number of risks from information technology beyond those identified by Wiener (for example, risks from invasions of privacy, computer malware, identity theft, online bullying, and on, and on).
Millions of information technology devices, today, are making decisions and carrying them out—for example, medical robots perform surgery, bank computers decide who qualifies for a loan, satellites in orbit perform various tasks, “rovers” on Mars send data back to Earth, cellphone apps with softbots do various jobs, and so on. Nevertheless, more than seventy years after Wiener first identified the challenge of determining which ethical principles and values should be instilled into computerized decision-making agents, tremendous challenges remain. Even the fundamental question remains about which basic ethical principles and values ought to be instilled into artificially intelligent agents and why. We believe that a Flourishing Ethics approach to such questions can help to identify the best answers.
In a recent article (Kantar and Bynum 2021) we explained that Flourishing Ethics is not a single ethical theory but rather a set of similar ethical theories with “family resemblance” relationships. All the Flourishing Ethics theories, however, take human flourishing to be the central ethical value that other ethical values support and defend. Of course, humans will not be flourishing if their health is poor, or they are being harmed by other people or by damaging forces of nature (such as floods, violent storms, wildfires, wild animals, terrible diseases, and so on); so all Flourishing Ethics theories assume that this is true. In addition, all Flourishing Ethics theories assume that human beings share a common nature. In Kantar and Bynum 2021 we focused upon that common nature in order to identify ethical values and principles that are needed to create and sustain human flourishing. Applying that process yielded the following results:
1. Autonomy—the ability to make significant choices and carry them out—is a necessary condition for human flourishing. For example, if someone is in prison, or enslaved, or severely pressured and controlled by others, such a person is not flourishing.
2. To flourish, people need to be part of a supportive community. Knowledge and science, wisdom and ethics, justice and the law are all social achievements. And in addition, psychologically, humans need each other to avoid loneliness and feelings of isolation.
3. The community should provide—as effectively as it can—security, knowledge, opportunities, and resources. Without these, a person might be able to make choices, but nearly all those choices might be bad ones, and a person could not flourish under those conditions.
4. To maximize flourishing within a community, justice must prevail. Consider the traditional distinction between “distributive justice” and “retributive justice”: if goods and benefits are unjustly distributed, some people will be unfairly deprived, and flourishing will not be maximized in that community. Similarly, if punishment is unjustly meted out, flourishing, again, will not be maximized.
5. Respect—including mutual respect between persons—plays a significant role in creating and maintaining human flourishing. Lack of respect from one's fellow human beings can generate hate, jealousy, and other very negative emotions, causing harmful conflicts between individuals—even wars within and between countries. Self-respect also is important for human flourishing in order to preserve human dignity and minimize the harmful effects of shame, self-disappointment, and feelings of worthlessness.
In Bynum 2006 and in Kantar and Bynum 2021 we argued that, given a universally shared human nature, and taking human flourishing to be the central ethical value, the considerations described above can serve as a common underlying ethical foundation for a wide diversity of cultures and communities around the globe. This is possible because each culture or community can add, to the Flourishing Ethics “foundation,” specific cultural values which they treasure, and which help them to make sense of their moral lives.
REFERENCES
Bynum, Terrell Ward. 2006. “Flourishing Ethics.” Ethics and Information Technology 8, no. 4: 157–73.
Conway, Flo, and Jim Siegelman. 2005. Dark Hero of the Information Age: In Search of Norbert Wiener, Father of Cybernetics. New York: Basic Books.
Floridi, Luciano, ed. 2016. The Routledge Handbook of Philosophy of Information. London: Routledge.
Floridi, Luciano, and Josh Cowls. 2021. “A Unified Framework of Five Principles for AI in Society.” In Ethics, Governance, and Policies in Artificial Intelligence, edited by Luciano Floridi, 5–17. Heidelberg: Springer.
Himma, Kenneth Einar, and Herman T. Tavani, eds. 2008. The Handbook of Information and Computer Ethics. New York: John Wiley and Sons.
Kantar, Nesibe, and Terrell Ward Bynum. 2021. “Global Ethics for the Digital Age—Flourishing Ethics.” Journal of Information, Communication and Ethics in Society 19, no. 3: 329–44.
van den Hoven, Jeroen, and John Weckert, eds. 2008. Information Technology and Moral Philosophy. Cambridge: Cambridge University Press.
Wiener, Norbert. 1948. Cybernetics: Or Control and Communication in the Animal and the Machine. New York: Technology Press.
Wiener, Norbert. 1950. The Human Use of Human Beings: Cybernetics and Society. Boston: Houghton Mifflin.
Wiener, Norbert. 1954. The Human Use of Human Beings: Cybernetics and Society. Second Edition Revised. New York: Doubleday Anchor.
Wiener, Norbert. 1960. “Some Moral and Technical Consequences of Automation.” Science 131: 1355–58.
Wiener, Norbert. 1964. God & Golem, Inc.: A Comment on Certain Points Where Cybernetics Impinges on Religion. Cambridge, Mass.: MIT Press.
How to cite this article: Kantar, Nesibe, and Terrell Ward Bynum. 2022. “Flourishing
Ethics and identifying ethical values to instill into artificially intelligent agents.”
Metaphilosophy 00 (0): 1–6. https://doi.org/10.1111/meta.12583