Chapter

Employing AI for Better Understanding Our Morals


Abstract

Having addressed the prerequisite issues for a justified and contextualized computational morality, argued that the co-presence of agents of different natures raises no radically new problems, and examined the difficulties inherent in creating moral algorithms, it is time to present the research we have conducted. That research considers both programming aspects proper and the need for protocols regulating competition among companies or countries. Its aim is a benevolent AI that contributes to the fair distribution of the benefits of development and attempts to block the tendency towards the concentration of wealth and power. Our approach rejects the statistical models commonly used to solve moral dilemmas, because they are "blind" and risk perpetuating past mistakes. Instead, we adopt an approach in which counterfactual reasoning plays a fundamental role and, considering morality primarily a matter of groups, we present conclusions from studies involving the pairs egoism/altruism, collaboration/competition, and acknowledgment of error/apology. These are basic elements of most moral systems, and the studies make it possible to draw generalizable, programmable conclusions for attaining group sustainability and greater global benefit, regardless of the groups' constituents.


... Nevertheless, the scientific community is divided. On one side is a group of researchers who support the development of ethical or moral artificial agents [9], [10], [13]; on the other, a group who criticize this approach and consider the development of such agents unfeasible [14], [15]. Rather than contributing to this debate, however, the aim of this article is to offer a review of the progress achieved in this research area to date. ...
Conference Paper
Full-text available
In recent years, humans have experienced the inclusion of artificial agents in their environment: unmanned vehicles, the Internet of Things, smart cities, and even humanoid robots capable of accompanying and living with people. Artificial Intelligence researchers are proposing designs for artificial agents capable of imitating the way humans behave and perform various tasks. This has given rise to a sub-discipline of Artificial Intelligence called "machine ethics". This research area focuses on the study and development of ethical mechanisms to endow artificial agents with the capabilities needed to face the ethical problems that may arise from interaction between human and artificial agents. The aim of this article is to review the literature and present the current state of this research area.
Article
Full-text available
Before engaging in a group venture agents may seek commitments from other members in the group and, based on the level of participation (i.e. the number of actually committed participants), decide whether it is worth joining the venture. Alternatively, agents can delegate this costly process to a (beneficent or non-costly) third-party, who helps seek commitments from the agents. Using methods from Evolutionary Game Theory, this paper shows that, in the context of Public Goods Game, much higher levels of cooperation can be achieved through such centralized commitment management. It provides a more efficient mechanism for dealing with commitment free-riders, those who are not willing to bear the cost of arranging commitments whilst enjoying the benefits provided by the paying commitment proposers. We show that the participation level plays a crucial role in the decision of whether an agreement should be formed; namely, it needs to be more strict in terms of the level of participation required from players of the centralized system for the agreement to be formed; however, once it is done right, it is much more beneficial in terms of the level of cooperation and social welfare achieved. In short, our analysis provides important insights for the design of multi-agent systems that rely on commitments to monitor agents' cooperative behavior.
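The Public Goods Game payoff structure underlying this result can be sketched minimally (the parameter names and values below are our illustrative assumptions, not the paper's actual EGT model):

```python
# Minimal one-shot Public Goods Game payoff (illustrative sketch;
# parameter names and values are our assumptions, not the paper's).

def pgg_payoff(contributes: bool, n_contributors: int, group_size: int,
               c: float = 1.0, r: float = 3.0) -> float:
    """Each contribution c is multiplied by r and the pot is shared
    equally by all group members, contributors or not."""
    share = r * c * n_contributors / group_size
    return share - (c if contributes else 0.0)

# With 3 contributors in a group of 5, a defector free-rides:
cooperator = pgg_payoff(True, 3, 5)    # pot share minus own contribution
defector = pgg_payoff(False, 3, 5)     # pot share only (exactly c more)
```

The defector's fixed advantage of c per interaction is what a centralized commitment manager counteracts, by charging an arrangement fee and letting the venture proceed only with enough committed members.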
Chapter
Full-text available
Machine ethics is a sprouting interdisciplinary field of enquiry arising from the need of imbuing autonomous agents with some capacity for moral decision-making. Its overall results are not only important for equipping agents with a capacity for moral judgment, but also for helping better understand morality, through the creation and testing of computational models of ethics theories. Computer models have become well defined, eminently observable in their dynamics, and can be transformed incrementally in expeditious ways. We address, in work reported and surveyed here, the emergence and evolution of cooperation in the collective realm. We discuss how our own research with Evolutionary Game Theory (EGT) modelling and experimentation leads to important insights for machine ethics, such as the design of moral machines, multi-agent systems, and contractual algorithms, plus their potential application in human settings too.
Article
Full-text available
Agreements and commitments have provided a novel mechanism to promote cooperation in social dilemmas in both one-shot and repeated games. Individuals requesting others to commit to cooperate (proposers) incur a cost, while their co-players are not necessarily required to pay any, allowing them to free-ride on the proposal investment cost (acceptors). Although there is a clear complementarity in these behaviours, no dynamic evidence is currently available that proves that they coexist in different forms of commitment creation. Using a stochastic evolutionary model allowing for mixed population states, we identify non-trivial roles of acceptors as well as the importance of intention recognition in commitments. In the one-shot prisoner's dilemma, alliances between proposers and acceptors are necessary to isolate defectors when proposers do not know the acceptance intentions of the others. However, when the intentions are clear beforehand, the proposers can emerge by themselves. In repeated games with noise, the incapacity of proposers and acceptors to set up alliances makes the emergence of the former harder whenever the latter are present. As a result, acceptors will exploit proposers and take over the population when an apology-forgiveness mechanism with too low an apology cost is introduced, and hence reduce the overall cooperation level.
Conference Paper
Full-text available
Before engaging in a group venture agents may seek commitments from other members in the group and, based on the level of participation (i.e. the number of actually committed participants), decide whether it is worth joining the venture. Alternatively, agents can delegate this costly process to a (beneficent or non-costly) third-party, who helps seek commitments from the agents. Using methods from Evolutionary Game Theory, this paper shows that, in the context of Public Goods Game, much higher levels of cooperation can be achieved through such centralized commitment management. It provides a more efficient mechanism for dealing with commitment free-riders, those who are not willing to bear the cost of arranging commitments whilst enjoying the benefits provided by the paying commitment proposers. We show also that the participation level plays a crucial role in the decision of whether an agreement should be formed; namely, it needs to be more strict in the centralized system for the agreement to be formed; however, once it is done right, it is much more beneficial in terms of the level of cooperation as well as the attainable social welfare. In short, our analysis provides important insights for the design of multi-agent systems that rely on commitments to monitor agents' cooperative behavior.
Article
Full-text available
Before engaging in a group venture agents may require commitments from other members in the group, and based on the level of acceptance (participation) they can then decide whether it is worthwhile joining the group effort. Here, we show in the context of Public Goods Games and using stochastic evolutionary game theory modelling, which implies imitation and mutation dynamics, that arranging prior commitments while imposing a minimal participation when interacting in groups induces agents to behave cooperatively. Our analytical and numerical results show that if the cost of arranging the commitment is sufficiently small compared to the cost of cooperation, commitment arranging behavior is frequent, leading to a high level of cooperation in the population. Moreover, an optimal participation level emerges depending both on the dilemma at stake and on the cost of arranging the commitment. Namely, the harsher the common good dilemma is, and the costlier it becomes to arrange the commitment, the more participants should explicitly commit to the agreement to ensure the success of the joint venture. Furthermore, considering that commitment deals may last for more than one encounter, we show that commitment proposers can be lenient in case of short-term agreements, yet should be strict in case of long-term interactions.
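The participation-threshold rule can be made concrete with a small sketch (the names F and eps and all payoff values are our illustrative labels, not the paper's notation):

```python
# Commitment with a minimal participation threshold (illustrative
# sketch; F, eps and all parameter values are our assumptions).

def proposer_payoff(n_committed: int, group_size: int, F: int,
                    c: float = 1.0, r: float = 3.0,
                    eps: float = 0.25) -> float:
    """The proposer pays eps to arrange the commitment. If fewer
    than F members commit, the venture is abandoned and only eps
    is lost; otherwise the Public Goods Game is played."""
    if n_committed < F:
        return -eps
    return r * c * n_committed / group_size - c - eps

# Abandoning a weakly supported venture caps the proposer's loss:
low = proposer_payoff(2, 5, F=3)     # venture abandoned, lose eps only
high = proposer_payoff(4, 5, F=3)    # venture played with 4 contributors
```

The optimal F then trades off the risk of playing an under-subscribed game against the risk of abandoning ventures that would have paid off.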
Conference Paper
Full-text available
Social punishment, whereby cooperators punish defectors, has been suggested as an important mechanism that promotes the emergence of cooperation or the maintenance of social norms in the context of one-shot (i.e. non-repeated) interactions. However, whenever antisocial punishment, whereby defectors punish cooperators, is available, this antisocial behavior outperforms social punishment, leading to the destruction of cooperation. In this paper, we use evolutionary game theory to show that this antisocial behavior can be efficiently restrained by relying on prior commitments, wherein agents can arrange, prior to an interaction, agreements regarding posterior compensation by those who dishonor the agreements. We show that, although the commitment mechanism by itself can guarantee a notable level of cooperation, a significantly higher level is achieved when both mechanisms, those of proposing prior commitments and of punishment, are available in co-presence. Interestingly, social punishment prevails and dominates in this system, as it can take advantage of the commitment mechanism to cope with antisocial behaviors. That is, establishment of a commitment system helps pave the way for the evolution of social punishment and abundant cooperation, even in the presence of antisocial punishment.
Article
Full-text available
Making agreements on how to behave has been shown to be an evolutionarily viable strategy in one-shot social dilemmas. However, in many situations agreements aim to establish long-term mutually beneficial interactions. Our analytical and numerical results reveal for the first time under which conditions revenge, apology and forgiveness can evolve and deal with mistakes within ongoing agreements in the context of the Iterated Prisoner's Dilemma. We show that, when the agreement fails, participants prefer to take revenge by defecting in the subsisting encounters. Incorporating costly apology and forgiveness reveals that, even when mistakes are frequent, there exists a sincerity threshold for which mistakes will not lead to the destruction of the agreement, inducing even higher levels of cooperation. In short, even when to err is human, revenge, apology and forgiveness are evolutionarily viable strategies which play an important role in inducing cooperation in repeated dilemmas.
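A toy simulation conveys the sincerity-threshold effect (the grudge strategy, the noise level, and all payoff and cost values below are our simplifications, not the paper's analytical model):

```python
import random

# Toy Iterated Prisoner's Dilemma with execution noise and an
# apology-forgiveness rule. The grudge strategy and all parameter
# values are our simplifying assumptions, for illustration only.

R, S, T, P = 3.0, 0.0, 5.0, 1.0    # standard PD payoffs
ERROR = 0.05                        # chance an intended C comes out D
PAYOFF = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}

def play(rounds, apology_cost, sincerity_threshold, seed=0):
    """Two intending cooperators; accidental defections are either
    forgiven (apology_cost >= sincerity_threshold) or answered
    with permanent revenge by the wronged co-player."""
    rng = random.Random(seed)
    total = [0.0, 0.0]
    angry = [False, False]           # taking revenge by defecting
    for _ in range(rounds):
        moves = []
        for i in (0, 1):
            m = 'D' if angry[i] else 'C'
            if m == 'C' and rng.random() < ERROR:
                m = 'D'              # execution mistake
            moves.append(m)
        for i, j in ((0, 1), (1, 0)):
            total[i] += PAYOFF[(moves[i], moves[j])]
        for i, j in ((0, 1), (1, 0)):
            if moves[i] == 'D' and not angry[i]:
                if apology_cost >= sincerity_threshold:
                    total[i] -= apology_cost   # sincere apology paid...
                    total[j] += apology_cost   # ...and accepted
                else:
                    angry[j] = True  # too cheap: co-player takes revenge
    return total

# Sincere apologies keep the agreement alive after mistakes, yielding
# a higher joint payoff than cheap apologies that trigger revenge.
```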
Article
Full-text available
When creating a public good, strategies or mechanisms are required to handle defectors. We first show mathematically and numerically that prior agreements with posterior compensations provide a strategic solution that leads to substantial levels of cooperation in the context of public goods games, results that are corroborated by available experimental data. Notwithstanding this success, one cannot, as with other approaches, fully exclude the presence of defectors, raising the question of how they can be dealt with to avoid the demise of the common good. We show that both avoiding creation of the common good whenever full agreement is not reached, and limiting the benefit that disagreeing defectors can acquire using costly restriction mechanisms, are relevant choices. Nonetheless, restriction mechanisms are found to be the more favourable, especially in larger group interactions. Given decreasing restriction costs, introducing restraining measures to cope with public-goods free-riding is ultimately the advantageous solution for all participants, rather than avoiding the creation of the common good.
Article
Full-text available
Deciding what the scrutinized intentions of others are, usually called intention reading or intention recognition, is an elementary decision-making process required as a basis for other, higher-level decision making, such as the intention-based decision making we have set forth in previous work. We present herein a recognition method possessing several features desirable of an elementary process: (i) the method is context-dependent and incremental, enabling progressive construction of a three-layer Bayesian network model as more actions are observed, in a context-situated manner that relies on a logic programming knowledge base concerning the context; (ii) the Bayesian network is structured from a specific knowledge base of readily specified and readily maintained Bayesian network fragments with simple structures, thereby enabling the efficient acquisition of that knowledge base (engineered either by domain experts or automatically from a plan corpus); and (iii) the method addresses the issue of intention change and abandonment, and can appropriately resolve the recognition of multiple intentions. The several aspects of the method have been experimentally evaluated in applications, with definite success, using the Linux plan corpus and the so-called IPD plan corpora, i.e. playing sequences generated by game-playing strategies that need to be recognized in the iterated Prisoner's Dilemma. Another application concerns variations of Elder Care in the context of Ambient Intelligence.
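The incremental, context-dependent flavour of such recognition can be conveyed with a single-layer Bayesian toy (the intentions, actions, and likelihoods below are invented for illustration; the actual method builds a three-layer Bayesian network over a logic-programming knowledge base):

```python
# Toy incremental intention recognition: each observed action
# updates a posterior over candidate intentions via Bayes' rule.
# Intentions, actions and likelihoods here are invented examples.

def update(prior, likelihoods, action):
    """One Bayesian update: P(I | a) is proportional to P(a | I) P(I)."""
    post = {i: prior[i] * likelihoods[i].get(action, 1e-6) for i in prior}
    z = sum(post.values())
    return {i: p / z for i, p in post.items()}

prior = {'cooperate': 0.5, 'defect': 0.5}
like = {'cooperate': {'propose_deal': 0.7, 'renege': 0.05},
        'defect':    {'propose_deal': 0.2, 'renege': 0.6}}

belief = prior
for act in ['propose_deal', 'propose_deal']:
    belief = update(belief, like, act)
# after two cooperative signals, 'cooperate' dominates the posterior
```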
Chapter
Full-text available
In this chapter, the authors present an intention-based decision-making system. They exhibit a coherent combination of two Logic Programming-based implemented systems, Evolution Prospection and Intention Recognition. The Evolution Prospection system has proven to be a powerful system for decision-making, designing, and implementing several kinds of preferences and useful environment-triggering constructs. It is here enhanced with an ability to recognize intentions of other agents—an important aspect not well explored so far. The usage and usefulness of the combined system are illustrated with several extended examples in different application domains, including Moral Reasoning, Ambient Intelligence, Elder Care, and Game Theory.
Article
Full-text available
A model is presented to account for the natural selection of what is termed reciprocally altruistic behavior. The model shows how selection can operate against the cheater (non-reciprocator) in the system. Three instances of altruistic behavior are discussed, the evolution of which the model can explain: (1) behavior involved in cleaning symbioses; (2) warning cries in birds; and (3) human reciprocal altruism. Regarding human reciprocal altruism, it is shown that the details of the psychological system that regulates this altruism can be explained by the model. Specifically, friendship, dislike, moralistic aggression, gratitude, sympathy, trust, suspicion, trustworthiness, aspects of guilt, and some forms of dishonesty and hypocrisy can be explained as important adaptations to regulate the altruistic system. Each individual human is seen as possessing altruistic and cheating tendencies, the expression of which is sensitive to developmental variables that were selected to set the tendencies at a balance ap...
Article
Full-text available
Cooperation is needed for evolution to construct new levels of organization. Genomes, cells, multicellular organisms, social insects, and human society are all based on cooperation. Cooperation means that selfish replicators forgo some of their reproductive potential to help one another. But natural selection implies competition and therefore opposes cooperation unless a specific mechanism is at work. Here I discuss five mechanisms for the evolution of cooperation: kin selection, direct reciprocity, indirect reciprocity, network reciprocity, and group selection. For each mechanism, a simple rule is derived that specifies whether natural selection can lead to cooperation.
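Each of the five rules reduces to a simple benefit-to-cost condition; as commonly stated, with b the benefit and c the cost of the cooperative act, they can be written down directly:

```python
# The five simple rules from Nowak (2006): each gives the condition
# under which natural selection can favour cooperation.

def kin_selection(b, c, relatedness):      # Hamilton's rule, r = relatedness
    return b / c > 1 / relatedness

def direct_reciprocity(b, c, w):           # w: probability of another round
    return b / c > 1 / w

def indirect_reciprocity(b, c, q):         # q: probability reputation is known
    return b / c > 1 / q

def network_reciprocity(b, c, k):          # k: average number of neighbours
    return b / c > k

def group_selection(b, c, n, m):           # n: group size, m: number of groups
    return b / c > 1 + n / m
```

For example, with b = 3 and c = 1, cooperation passes the kin-selection test at relatedness 0.5 but fails network reciprocity on a lattice with 4 neighbours.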
Chapter
The mechanisms of emergence and evolution of cooperation in populations of abstract individuals, with diverse behavioral strategies in co-presence, have been undergoing mathematical study via Evolutionary Game Theory, inspired in part by Evolutionary Psychology. Their systematic study also resorts to implementation and simulation techniques on parallel computers, thus enabling the study of the aforesaid mechanisms under a variety of conditions, parameters, and alternative virtual games. The theoretical and experimental results have continually been surprising, rewarding and promising. Recently, in our own work, we have initiated the introduction, into such groups of individuals, of cognitive abilities inspired by techniques and theories of Artificial Intelligence, namely those pertaining to Intention Recognition, encompassing the modeling and implementation of tolerance/intolerance to errors in others, whether deliberate or not, and tolerance/intolerance to possible communication noise. As a result, both the emergence and the stability of cooperation in said groups of distinct abstract individuals become reinforced compared to the absence of such cognitive abilities. The present paper aims to sensitize the reader to these Evolutionary Game Theory based studies and issues, which are accruing in importance for the modeling of minds with machines, and to draw attention to our own newly published results, which for the first time introduce the use of Intention Recognition in this context, with impact on mutual tolerance.
Article
How does cooperation emerge among selfish individuals? When do people share resources, punish those they consider unfair, and engage in joint enterprises? These questions fascinate philosophers, biologists, and economists alike, for the "invisible hand" that should turn selfish efforts into public benefit is not always at work. The Calculus of Selfishness looks at social dilemmas where cooperative motivations are subverted and self-interest becomes self-defeating. Karl Sigmund, a pioneer in evolutionary game theory, uses simple and well-known game theory models to examine the foundations of collective action and the effects of reciprocity and reputation. Focusing on some of the best-known social and economic experiments, including games such as the Prisoner's Dilemma, Trust, Ultimatum, Snowdrift, and Public Good, Sigmund explores the conditions leading to cooperative strategies. His approach is based on evolutionary game dynamics, applied to deterministic and probabilistic models of economic interactions. Exploring basic strategic interactions among individuals guided by self-interest and caught in social traps, The Calculus of Selfishness analyzes to what extent one key facet of human nature, selfishness, can lead to cooperation.
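The evolutionary game dynamics at the heart of the book is the replicator equation; a minimal Euler-step sketch for the Prisoner's Dilemma (payoff values are the textbook defaults, the discretization is ours) shows defection taking over when no supporting mechanism is present:

```python
# Replicator dynamics for a two-strategy Prisoner's Dilemma:
# dx/dt = x(1-x)(f_C - f_D), where x is the cooperator fraction.
# Payoffs are the standard R, S, T, P; the Euler step is our sketch.

def step(x, dt=0.01, R=3.0, S=0.0, T=5.0, P=1.0):
    f_c = R * x + S * (1 - x)    # expected payoff of a cooperator
    f_d = T * x + P * (1 - x)    # expected payoff of a defector
    return x + dt * x * (1 - x) * (f_c - f_d)

x = 0.9                          # start with 90% cooperators
for _ in range(5000):
    x = step(x)
# x decays towards 0: unassisted cooperation is not stable here
```

With T > R and P > S, f_D exceeds f_C at every mixture, so the cooperator share shrinks monotonically; mechanisms such as reciprocity or commitments change the payoffs rather than the dynamic.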
Article
Turing's present-day and all-time relevance arises from the timelessness of the issues he tackled, and the innovative light he shed upon them. Turing first defined the algorithmic limits of computability, when determined via effective mechanism, and showed the generality of his definition by proving its equivalence to other general, but less algorithmic, non-mechanical, more abstract formulations of computability. In truth, his originality much impressed Gödel, for the simplicity of the mechanism invoked—what we nowadays call a Turing Machine (or program)—and for the proof of existence of a Universal Turing Machine (what we call digital computer)—which can demonstrably mimic any other Turing Machine, i.e. execute any program. Indeed, Turing Machines simply rely on having a finite-state automaton (like a vending machine), and an unbound paper tape made of discrete squares (like a paper roll), with at most one rewritable symbol on each square. Turing also first implicitly introduced the perspective of ‘functionalism'—though he did not use the word, it was introduced later by Putnam, inspired by Turing's work—by showing that what counts is the realizability of functions, independently of the hardware that embodies them. And that realizability is afforded by the very simplicity of his devised mechanism, what he then called A-machines (but now bear his name), which rely solely on the manipulation of symbols—as discrete as the fingers of one hand—wherein both data and instructions are represented with symbols, both being subject to manipulation. The twain, data as well as instructions, are stored in memory, where instructions double as data and as rules for acting—the stored program idea. No one to this day has invented a computational mechanical process with such general properties, which cannot be theoretically approximated with arbitrary precision by some Turing Machine, wherein interactions are to be captured by Turing's innovative concept of oracle. 
In these days of discrete-time quantization, computational biological processes, and proof of ever expanding universe—the automata and the tape—the Turing Machine reigns supreme. Moreover, universal functionalism—another Turing essence—is what enables the inevitable bringing together of the ghosts in the several embodied machines (silicon-based, biological, extra-terrestrial or otherwise) to promote their symbiotic epistemic co-evolution, since they partake of the same theoretic functionalism. Turing is truly and forever among us.
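The mechanism described, a finite-state control reading and writing an unbounded tape of discrete symbols, is small enough to sketch directly (the interpreter and the sample program below are our illustration, not material from the article):

```python
# A minimal Turing Machine interpreter: finite-state control plus an
# unbounded tape, as described above. The sample program computes the
# unary successor (appends one stroke to a unary number).

def run(program, tape, state='start', blank=' ', max_steps=10_000):
    """program maps (state, symbol) -> (new_state, write, move),
    with move in {-1, +1}; reaching state 'halt' stops the machine."""
    cells = dict(enumerate(tape))    # sparse tape, default blank
    head = 0
    for _ in range(max_steps):
        if state == 'halt':
            break
        symbol = cells.get(head, blank)
        state, write, move = program[(state, symbol)]
        cells[head] = write
        head += move
    return ''.join(cells[i] for i in sorted(cells)).strip()

# Unary successor: skip over the 1s, write a 1 on the first blank.
succ = {
    ('start', '1'): ('start', '1', +1),
    ('start', ' '): ('halt',  '1', +1),
}
result = run(succ, '111')   # '1111'
```

The same interpreter is itself an ordinary program, which is the point of the universality argument: one machine that reads another machine's table as data.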
Article
This paper uses laboratory experiments to evaluate the performance of a deposit-refund mechanism used to enforce compliance with voluntary public-good commitments made in the absence of strong regulatory institutions. With this mechanism agents decide whether to join an agreement and pay a deposit prior to making their contribution decisions. If an agreement receives sufficient membership to form, members then make their contribution decisions and compliant members are refunded their deposits. If an agreement does not form, then deposits are immediately refunded and a standard voluntary contribution game is played. We find that the deposit-refund mechanism achieves nearly full efficiency when agreements require full participation, but is far less effective, and in some cases disruptive, when agreements require only partial participation. As the mechanism does not require the existence of strong sanctioning institutions, it is particularly suited for enforcing compliance with international environmental agreements.
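The incentive structure of the deposit-refund mechanism can be sketched as follows (the deposit and contribution values are our illustrative assumptions, not the experimental parameters):

```python
# Deposit-refund mechanism sketch: deposits are refunded on
# compliance, or immediately if no agreement forms. Values of
# `deposit` and `c` are our illustrative assumptions.

def settle(joined: bool, complied: bool, agreement_formed: bool,
           deposit: float = 2.0, c: float = 1.0) -> float:
    """Net private cost to one agent under the mechanism."""
    if not joined:
        return 0.0          # outsider: no deposit, no obligation
    if not agreement_formed:
        return 0.0          # deposit refunded immediately
    if complied:
        return c            # contributes; deposit refunded
    return deposit          # non-compliance: deposit forfeited

# Whenever deposit > c, complying is strictly cheaper than defecting,
# so the mechanism enforces commitments without an external sanctioner.
```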
Book
"This is the classic work upon which modern-day game theory is based. What began more than sixty years ago as a modest proposal that a mathematician and an economist write a short paper together blossomed, in 1944, when Princeton University Press published Theory of Games and Economic Behavior. In it, John von Neumann and Oskar Morgenstern conceived a groundbreaking mathematical theory of economic and social organization, based on a theory of games of strategy. Not only would this revolutionize economics, but the entirely new field of scientific inquiry it yielded--game theory--has since been widely used to analyze a host of real-world phenomena from arms races to optimal policy choices of presidential candidates, from vaccination policy to major league baseball salary negotiations. And it is today established throughout both the social sciences and a wide range of other sciences. This sixtieth anniversary edition includes not only the original text but also an introduction by Harold Kuhn, an afterword by Ariel Rubinstein, and reviews and articles on the book that appeared at the time of its original publication in the New York Times, the American Economic Review, and a variety of other publications. Together, these writings provide readers a matchless opportunity to more fully appreciate a work whose influence will yet resound for generations to come."
Article
Scientists from various disciplines have begun to focus attention on the psychology and biology of human morality. One research program that has recently gained attention is universal moral grammar (UMG). UMG seeks to describe the nature and origin of moral knowledge by using concepts and models similar to those used in Chomsky's program in linguistics. This approach is thought to provide a fruitful perspective from which to investigate moral competence from computational, ontogenetic, behavioral, physiological and phylogenetic perspectives. In this article, I outline a framework for UMG and describe some of the evidence that supports it. I also propose a novel computational analysis of moral intuitions and argue that future research on this topic should draw more directly on legal theory.
Nowak, M. A. (2006). Five rules for the evolution of cooperation. Science, 314(5805), 1560-1563.
Han, T. A., & Pereira, L. M. (2013b). Intention-based decision making via intention recognition and its applications. Human behavior recognition technologies: Intelligent applications for monitoring and security (pp. 174-211). IGI Global: Hershey, PA.
Han, T. A., & Pereira, L. M. (2013c). State-of-the-art of intention recognition and its use in decision making. AI Communications, 26(2), 237-246.
Pereira, L. M., & Saptawijaya, A. (2016). Programming machine ethics. SAPERE series (Vol. 26). Berlin: Springer.
Han, T. A., Pereira, L. M., Martinez-Vaquero, L. A., & Lenaerts, T. (2017b).