Editorial
Homo sapiens 2.0: why we should build the better robots of our
nature
ERIC DIETRICH
Philosophy Department, Binghamton University, Binghamton, NY 13902-6000, USA
This species could have been great, and now everybody has settled for sneakers with lights in them (George Carlin).
Sometimes I think the surest sign that intelligent life exists elsewhere in the universe is that none
of it has tried to contact us (Calvin).
1. What’s wrong with us?
It is possible to survey humankind and be proud, even to smile, for we accomplish great things. Art and science are two notably worthy human accomplishments. Consonant with art and science are some of the ways we treat each other. Sacrifice and heroism are two admirable human qualities that pervade human interaction. But, as everyone knows, all this goodness is more than balanced by human depravity. Moral corruption infests our being. Why?
Throughout history, distinguished philosophers, theologians and psychologists
have wrestled with this question. Why are we so bad? How does one explain the
Timothy McVeighs of the world? The Jeffrey Dahmers, the Ted Bundys? The Pol Pots, the Hitlers? The WTC terrorists? How are we to understand Charles Whitman, and Eric Harris and Dylan Klebold (the University of Texas clock tower sniper and the two Columbine killers)? All of these cases are baffling to the point of stupefaction. Moreover, we are powerless to prevent future monsters from killing us.
Immoralities that are less focused, that do not, as it were, have a pointman, are
equally bad. Sexism and racism, pervasive and damaging in the extreme, plague our
lives. Of course, reportable cases of sexism and racism are done by individual people, and these are usually quite awful, but milder versions of sexism and racism probably inhabit each of us to some extent.
War is a horrible evil. Very few wars throughout history were what we might call 'just' wars. Wars are fought for greedy reasons, at least that is often why they start. War is also a persistent and common evil. About the recent terrorist attacks in the USA, President Bush said: 'This is the beginning of the first war of the 21st century'. As if it was inevitable there would be a first war of this century, and surely he was correct in that belief.
Then there are the horrors we live with each day: rape, murder, theft, assault and the various new 'rages': road rage, air rage, referee rage (admittedly not usually lethal, but damaging nevertheless: whoever said 'Sticks and stones may break my bones, but words will never hurt me' must have lived a solitary life on Mars).
So we humans live out our lives suffering harms great and small, eking out some measure of happiness via our art, our science, our loves, and our passions. Life is nasty, brutish, beautiful and long or short, depending on which part of it you happen to be experiencing. Can anything be done about this sobering and perhaps depressing state of affairs? I think so. Here, I offer my solution to you. It is expensive, but, I will argue, worth it.
2. The evolutionary basis of some immorality
I shall be concerned with the badness or evil that ordinary humans create while behaving more or less normally. By 'normally', I mean that the behaviours I will consider are statistically common, that they fall within the bump of the bell curve of human behaviours. I include in this set behaviours such as lying, cheating, stealing, raping, murdering, assaulting, mugging, child abuse, as well as such things as ruining the careers of, and discriminating against on the basis of, sex, race, religion, sexual preference and national origin. Not all of us have raped or murdered. But many of us have thought about it. And virtually all of us have lied, cheated or stolen at some time in our lives. I intend to exclude war from my discussion, as well as such humans as Hitler, Pol Pot, Timothy McVeigh, the Columbine murderers, the recent hijacking terrorists etc. Beings such as these are capable of extraordinary evil; evil that, even if in some sense provoked (if only in the mind of the perpetrator), far outstrips the provocation. Beings such as these commit gargantuan evil. I have no idea how to explain such beings, nor such evil. Like you, I can only shrug my shoulders and point vaguely in the direction of broken minds working in collusion with random circumstances.
How could ordinary humans have normal behaviour that includes such things as rape, child abuse, murder, sexism and racism? One standard answer is that such behaviours arise due to our innate selfishness, which can be overcome, at least in principle, by learning or by a correct, happy upbringing (in all of the cases of bad behaviour we will consider below, this standard answer is behind the scenes, working to supply energy to the folk explanation of the behaviours). This answer is wrong, at least for many of our immoral behaviours. The reasoning is simple. Selfishness alone cannot explain why we rape or kill our children: if we are all selfish but few of us murder or rape, then something else must be going on. The standard reply to this is that such bad behaviours are either learned or that the perpetrators have not learned ways of coping with the frustrations and aggravated selfishness that cause or lead to the bad behaviour. Unfortunately, this answer is not falsifiable and it does not explain some rather striking facts. The correct answer is that many ordinary humans' worst behaviour has an evolutionary explanation, arising because we are animals that evolved, that have an evolutionary history dating back, through our immediate ancestors, almost a dozen million years, and of course a continuous lineage dating back 3.5 billion years, when life started on planet Earth. Let us explore in some detail the hypothesis that we are bad in part because of our evolutionary history. Let us consider four cases: child abuse, sexism, rape and racism.
2.1. Child abuse
Here is a surprising statistic: the best predictor of whether or not a child will be
abused or killed is whether or not he or she has a step-father. (The data suggest that
abuse is meted out to older children; young children may be killed.) Why should this
be the case? Learning or lack of learning does not seem to be a plausible explanation
here. Evolutionary theory, however, seems to succeed where the folk theory cannot.
In some male-dominated primate species (e.g. langurs), when a new alpha male takes
over the troop, he kills all the infants fathered by the previous alpha male. He then
mates with the females in his new harem, inseminating many of them, and now they
will bear his children. The langur pattern is just one extreme case of a nearly ubiquitous mammalian phenomenon: males kill or refuse to care for infants that they conclude are unlikely to be their offspring, basing their conclusion on proximate cues. We carry this evolutionary baggage around with us.
2.2. Sexism
Our sexism is explained the same way. First, though, here is an interesting fact: every human culture is male-dominated, and females are discriminated against in every culture. There are matrilineal cultures, but not female-dominated ones (the Amazons were a myth). What would explain this ubiquity of sexism? It obviously cannot be learned behaviour, because the behaviours that we are certain are learned are not ubiquitous (e.g. driving on the left). Learned behaviours always vary substantially around the globe. Certainly, how men and women implement their inherent sexism is learned (e.g. always hold a door open for a woman, never let a woman vote), but discriminating against the female sex is not learned; it is part of our evolutionary heritage, our evolutionary baggage. Why? Because we evolved from a male-dominated primate species (not all primate species are male-dominated, however; some (vervets, many lemurs) are female-dominated). In our cousin male-dominated species, it is males that typically get first helpings of the food, have the best locations for shelter, are groomed the most, etc. Females in these species frequently get seconds and the second-best in everything. Evolving from a species like this, human males naturally tend to think of human females as second-class members of the culture. (This explanation, by the way, is a case of inference to the best explanation. We of course do not have access (or enough access) to the behaviours of the species we evolved from to say with complete conviction that we evolved from a male-dominated species. Nevertheless, this explanation is compelling in part because it best explains the ubiquity of sexism and it coheres best with what we know about other primate species.)
2.3. Rape
The common explanation of rape is that it is principally about violence against women. The main consequence of this view is that rape is not sex. Many embrace this explanation simply because, emotionally, it seems right. However, it is wrong. Most rape victims around the world are females between the ages of 16 and 22, among the prime reproductive years for females (the best reproductive years are 19-24 or so; the overlap is not exact). Most rapists are teens through to early twenties, the age of maximum male sexual motivation. Few rape victims experience severe, lasting physical injuries. On the available evidence, young women tend to resist rape more than older women. Rape is also ubiquitous in human cultures; there are no societies where rape is non-existent (interpretations of Turnbull's and Mead's anthropological findings are incorrect). Rape exists in other animals: in insects, birds, reptiles, amphibians, marine mammals and non-human primates. All of these facts cry out for an evolutionary explanation of rape: rape is either an adaptation or a by-product of adaptations for mating. Either way, rape is part of the human blueprint.
2.4. Racism
Though it is still somewhat disputatious, it is now reasonably clear that part of the engine of human evolution was group selection. Standard evolutionary theory posits that the unit of selection is the individual of a species. But selection pressures exist at many levels of life, from the gene level all the way up to whole populations, communities and even ecosystems, maybe even to memes (roughly: culturally transmitted ideas). One such level is the group level, the level at which the traits of one member of a population affect the success of other members. It is known that group selection can produce species with properties that are not evolvable by individual selection alone (e.g. altruism). Group selection works by encouraging co-operation between members of the group and, often, discouraging co-operation between members of different groups. Group selection, therefore, has a dark side. Not only does it encourage within-group co-operation but, where groups overtly compete, it tends to produce between-group animosity. So, from our evolutionary past, humans tend to belong to groups, bond with the members of their own group and tend to fight with members of outlying groups. Which particular groups you feel compelled to hate (or dislike) is a matter of historical accident and bad luck. But that you tend to hate (or dislike) members of other groups is part of your genetic make-up.
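The two-level logic of that mechanism can be made vivid with a toy simulation. The sketch below is mine, not the editorial's, and every name and parameter is invented for illustration: within any one group, selfish individuals out-reproduce altruists, yet groups with more altruists grow and seed new groups faster, so an altruistic trait can still spread; and the very between-group competition that rewards in-group co-operation is what sets groups against one another.

import random

# Toy multilevel-selection model (illustrative only; all parameters invented).
# True = altruist: pays a small individual cost toward the group's shared benefit.
def run(generations=200, n_groups=20, group_size=20, benefit=0.5, cost=0.1, seed=0):
    rng = random.Random(seed)
    groups = [[rng.random() < 0.5 for _ in range(group_size)] for _ in range(n_groups)]
    for _ in range(generations):
        # Between-group selection: groups with more altruists are fitter overall.
        fitnesses = [group_size + (benefit - cost) * sum(g) for g in groups]
        new_groups = []
        for _ in range(n_groups):
            parent = rng.choices(groups, weights=fitnesses)[0]
            # Within-group selection: selfish members are slightly likelier
            # than altruists to be copied into the offspring group.
            member_weights = [1.0 - cost if altruist else 1.0 for altruist in parent]
            new_groups.append([rng.choices(parent, weights=member_weights)[0]
                               for _ in range(group_size)])
        groups = new_groups
    return sum(sum(g) for g in groups) / (n_groups * group_size)

if __name__ == "__main__":
    print(f"final share of altruists: {run():.2f}")

The point is only qualitative: the same selective pressure that builds within-group solidarity also builds the out-group animosity discussed above.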
To conclude, on the best available theory we have got, four very serious social ills (child abuse, sexism, rape and racism) are due to our evolutionary heritage. It is a sad fact that much of our human psychology is built by evolution (and not by socialization, as many believe, though, of course, humans are profoundly susceptible to socialization, hence our run-time psychology is a function of learning). These innate psychological capacities of ours are principally responsible for many of humanity's darkest ills. In short, we abuse, discriminate and rape because we are human. If we add on top of this that we also almost certainly lie, cheat, steal and murder because we are human, we arrive at the idea that our humanity is the source of much anguish and suffering.
3. A modest proposal
The question naturally presents itself: 'What can we do about the immorality humans perpetrate on each other?' The standard line taken by social scientists, teachers, educators and parents is: teach our children to behave better. However, if the current evolutionary theories about some of our darkest behaviours are correct, such teaching either will not work or will require draconian social measures. Yet, for those who think that producing better humans through teaching is a live option, I say great, give it a try, what have you got to lose? But I believe this path will not work. I offer instead another path: let us build a race of robots that implement only what is beautiful about humanity, that do not feel any evolutionary tug to commit certain evils, and then let us, the humans, exit stage left, leaving behind a planet populated with robots that, while not perfect angels, will nevertheless be a vast improvement over us.
Another way to look at this project is to consider implementing in robots our best moral theories. These are the theories that see morality as comprising universal truths, applying fairly to all sentient beings. One such truth is that it is wrong to harm another being, normally. (I say 'normally' because, as I will discuss below, even in a better robot society, it is likely there will be bad robots and these must be dealt with. Also, care must be taken here not to define 'harm' too narrowly. Dental work
hurts, but it is not harming the individual.) Many of us, and many religions (but not all), aspire to such a morality. For example, Christians say 'Love thy neighbour', and, on their best days, they define everyone as their neighbour.
Robots implemented with such a morality would not murder or engage in the
robot equivalent of rape. Why? Because such acts harm. Robots of one group or
type, however constituted, would not discriminate against robots of another group,
because such discrimination harms. (The robots could quite easily come in types and
hence could have the equivalent of race.) War would be eliminated and, along with
it, greed, envy, jealousy and a host of other dangerous causes of behaviour. Indeed,
we could probably eliminate garden-variety rudeness. Doing that would make this
planet very much happier.
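As a purely illustrative sketch, assuming nothing about how a real robot architecture would be organized (the names, flags and helper functions below are mine, not a proposal from this editorial), the rule 'it is wrong to harm another being, normally' can be read as a filter standing between whatever a robot's planner proposes and what the robot is permitted to do.

from dataclasses import dataclass
from typing import List

# Illustrative only: a candidate act, tagged with whether it would harm another
# sentient being and whether it falls under a recognized exception (the
# "normally" clause, e.g. restraining a robot that is attacking others).
@dataclass
class Action:
    description: str
    harms_someone: bool
    justified_exception: bool = False

def morally_permitted(action: Action) -> bool:
    # The core rule: harmful acts are forbidden unless explicitly excepted.
    return (not action.harms_someone) or action.justified_exception

def screen(proposals: List[Action]) -> List[Action]:
    # Filter the planner's proposals down to the morally permitted ones.
    return [a for a in proposals if morally_permitted(a)]

if __name__ == "__main__":
    proposals = [
        Action("share surplus power with a stranded robot", harms_someone=False),
        Action("seize another robot's resources", harms_someone=True),
        Action("restrain a robot attacking others", harms_someone=True,
               justified_exception=True),
    ]
    for act in screen(proposals):
        print("permitted:", act.description)

The 'normally' clause is carried here by an explicit exception flag; in any genuine system that flag would have to be earned by reasoning about the situation rather than hand-annotated, and defining 'harm' neither too narrowly nor too broadly (recall the dental-work example above) is where the real difficulty lies.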
A couple of quick caveats. It is a virtual certainty that robots will not have sexes, nor mate as we do. This, the cynic might say, already makes them way ahead of us in terms of morality. However, a human might reply that this is a kind of cheat. It is easy not to lie to your spouse if you do not have one; coercive sexual acts are easy to avoid if there are no such things as sexual acts. The same is true with sexism. It is easy to avoid sexism if there is no such thing as sex. Still, it cannot be a moral failing of the robots that they avoid many of our moral failings simply by not having the relevant, requisite desires. There is some sentiment to the contrary in western culture. A moral agent is seen as one who avoids temptation. But this is erroneous. The only reason we believe this is that we are all so tempted to do various bad things. Remove the temptations, and then, as long as you still have agents, you still have morality. Indeed, perhaps the most moral being would be the one who never thought about right and wrong, because it never occurred to it to do wrong. And note, whether or not one regards the robots as morally superior in light of their fewer temptations, the world of the robots is obviously a much better place than our world: their world is devoid of racism, sexism, rape, etc. True, some of these improvements are got cheaply, e.g. they have no sex, but this is part of why their world is a better place than ours. Finally, as I discuss below, the robots will be autonomous and have desires, hence they will almost certainly have conflicting desires. Therefore, they will have temptations of their own to deal with. Hence, they will have to make recognizably moral decisions. And they will also make mistakes. Still, they will behave much better than we do.
Let us assume that our technological society will not self-destruct in the next couple of centuries (a huge assumption, in my opinion, which in itself is another argument for my proposal). Then, what are the prospects for building such a race of robots? They seem modestly good to me. We are babes in the woods when it comes to artificial intelligence and robotics, but we are making decent advances and there is every reason to be optimistic. The theories and technologies for building a human-level robot seriously elude us at the present time, but we have, I think, the correct foundational theory: computationalism (I have argued for this many times in various papers, so I will spare you the arguments here). Assuming that computationalism is correct, then it is only a matter of time before we figure out what the algorithms for being human are and how to implement them in machines. When this happens, if we merely cut out the algorithms we have for behaving abominably, implementing only those that tend to produce the grandeur of humanity, we will have produced the better robots of our nature and made the world a better place. After that, we will be at best anachronistic, otiose; our presence will be at best unnecessary. But after building such a race of robots, perhaps we could exit with some dignity, and with the thought that we had finally done the best we could do.
4. Objections
The most common objection to my proposal is that the robots will have their own evil behaviour. For one thing, we will have to program in self-preservation. So, for example, it seems likely that a robot or group of robots will one day erroneously conclude that their lives are somehow in some sort of danger and react accordingly, harming innocent robots.
Yes, probably this would happen. Probably the robots I'm advocating would have their own suite of bad behaviours. But even if we could not eliminate all evil and harm, we should still eliminate what we can. And eliminating everything from abuse through murder to discrimination and rudeness is eliminating quite a lot.
Another objection is that we cannot eliminate emotions like envy, jealousy and rage without also eliminating all the good emotions like love, caring and sympathy. I think this is a worrisome objection, because I think good and evil might be two sides of the same coin, or different arcs of the same circle. However, we are ignorant enough of how emotions work and why they evolved to take seriously the idea that it is quite possible to have only good emotions. After all, many conceive of Heaven as just such a place: a place where there are no negative emotions, not even sadness. (I am not imagining that our robots won't be sad.) All I am suggesting is that we plausibly have the power to implement a kind of Heaven on Earth by implementing very moral robots.
Am I suggesting that we eliminate emotions altogether? I am not. But it is not obvious that this is a bad idea, assuming, of course, that it is even possible, for certain cognitive activity may, for all we know now, require certain emotions. Here, I am not just referring to the cognitive activity of ours of thinking about our emotions. It may be that one cannot do science without loving knowledge, or curiosity, or something of the sort.
A third objection is not that we should build the robots, but that we should change humans instead via genetic engineering, so that they commit either no evil or much less evil. To which I say, 'whatever'. Humanoid creatures who did not discriminate, did not rape, did not murder, would not be human. The fact that such creatures would be made out of carbon and not silicon does not really matter that much; the fundamental nature of my proposal remains intact: replace humans with better beings.
What's in it for us? As I say: virtually nothing, for we will become worse than useless; we will be like a disease. There is this, though: we will have the satisfaction of knowing that we have eliminated a lot of evil from planet Earth and increased the amount of good significantly. That would remain a unique, and uniquely beautiful, legacy.
5. Conclusion
In his first inaugural address, President Abraham Lincoln said:
We must not be enemies. The mystic chords of memory, stretching from every battlefield to every living heart, will yet swell the chorus of the Union, when again touched by the better angels of our nature.
It will not happen, ever. The mystic chords of memory will never swell the chorus of the Union, and certainly not of the World, because, for evolutionary reasons, we hate, and we are mean. But we aren't mean through and through. The better angels of our nature can be implemented as better robots for the future.