DOI: 10.4018/JDM.2020040105

Volume 31 • Issue 2 • April-June 2020
Copyright © 2020, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

Artificial Intelligence (AI) Ethics: Ethics of AI and Ethical AI

Keng Siau, Missouri University of Science and Technology, Rolla, USA
Weiyu Wang, Missouri University of Science and Technology, Rolla, USA

ABSTRACT

Artificial intelligence (AI)-based technology has achieved many great things, such as facial recognition,
medical diagnosis, and self-driving cars. AI promises enormous benefits for economic growth,
social development, as well as human well-being and safety improvement. However, the low level of
explainability, data biases, data security, data privacy, and ethical problems of AI-based technology
pose significant risks for users, developers, humanity, and societies. As AI advances, one critical
issue is how to address the ethical and moral challenges associated with AI. Even though the concept
of “machine ethics” was proposed around 2006, AI ethics is still in the infancy stage. AI ethics is
the field related to the study of ethical issues in AI. To address AI ethics, one needs to consider the
ethics of AI and how to build ethical AI. Ethics of AI studies the ethical principles, rules, guidelines,
policies, and regulations that are related to AI. Ethical AI is an AI that performs and behaves ethically.
One must recognize and understand the potential ethical and moral issues that may be caused by AI
to formulate the necessary ethical principles, rules, guidelines, policies, and regulations for AI (i.e.,
Ethics of AI). With the appropriate ethics of AI, one can then build AI that exhibits ethical behavior
(i.e., Ethical AI). This paper will discuss AI ethics by looking at the ethics of AI and ethical AI.
What are the perceived ethical and moral issues with AI? What are the general and common ethical
principles, rules, guidelines, policies, and regulations that can resolve or at least attenuate these ethical
and moral issues with AI? What are some of the necessary features and characteristics of an ethical
AI? How can we adhere to the ethics of AI to build ethical AI?

KEYWORDS

AI Ethics, Artificial Intelligence, Ethical AI, Ethics, Ethics of AI, Machine Ethics, Roboethics

INTRODUCTION

Some researchers and practitioners believe that artificial intelligence (AI) is still a long way from
having consciousness and being comparable to humans, and consequently, there is no rush to consider
ethical issues. But AI, combined with other smart technologies such as robotics, has already shown
its potential in business, healthcare, transportation, and many other domains. Further, AI applications
are already impacting humanity and society. Autonomous vehicles can replace a large number of
jobs and transform transportation and its associated industries. For example, short-haul flights and
hospitality services along highways will be impacted if driverless cars enable passengers to sleep
and work during the journey. AI recruiters are known to exhibit human biases because the training
data inherit the same biases we have as humans. The wealth gap created by the widening difference
between return on capital and return on labor is poised to create social unrest and upheavals. The
future of work and the future of humanity will be affected by AI, and plans need to be formulated
and put in place. Building AI ethically and having ethical AI are urgent and critical. Unfortunately, building
ethical AI is an enormously complex and challenging task.

ETHICS

Ethics is a complex, complicated, and convoluted concept. Ethics can be defined as the moral principles
governing the behaviors or actions of an individual or a group of individuals (Nalini, 2019). In other
words, ethics is a system of principles, rules, or guidelines that helps determine what is good or
right. Broadly speaking, ethics can be defined as the discipline dealing with right versus wrong, and
the moral obligations and duties of entities (e.g., humans, intelligent robots, etc.).
Ethics has been studied by many researchers from different disciplines. Most humans are familiar
with virtue ethics from a very young age because it is a behavior guide instilled by parents and teachers
to help children practice good conduct. Aristotle (Yu, 1998) believed that when a person acts in
accordance with virtue, this person will do well and be content. Virtue ethics is part of normative ethics, which
studies what makes actions right or wrong. It can be viewed as overarching moral principles that help
people resolve difficult moral decisions. As the interaction between humans, between humans and
animals, between humans and machines, and even between machines is increasing, ethical theories
have been applied to real-life situations, such as business ethics, animal ethics, military ethics,
bioethics, and machine ethics. The study of ethics and ethical principles is constantly evolving and
developing. Table 1 lists several ethics definitions given by researchers.
In the context of AI, the ethics of AI specifies the moral obligations and duties of an AI and its
creators. Researchers have done much work studying human ethical issues. Many ethical frameworks
can be used to direct human behaviors, such as actions and activities related to respect for individuals,
beneficence, justice, privacy, accuracy, ownership/property, accessibility, fairness, accountability,
and transparency (Wang & Siau, 2018).

Table 1. Definitions of ethics

Normative ethics:
- "Ethics is the capacity to think critically about moral values and direct our actions in terms of such values." (Churchill, 1999)
- "Ethics is a set of concepts and principles that guide us in determining what behavior helps or harms sentient creatures." (Paul & Elder, 2006)
- "Ethics is the norm for conduct that distinguishes between acceptable and unacceptable behavior." (Resnik, 2011)
- "Ethics is the discipline that studies standards of conduct, such as philosophy, theology, law, psychology, or sociology." (Resnik, 2011)
- "Ethics is a method, procedure, or perspective for deciding how to act and for analyzing complex problems and issues." (Resnik, 2011)

Applied ethics:
- "Computer ethics is the analysis of the nature and social impact of computer technology and the corresponding formulation and justification of policies for the ethical use of such technology." (Moor, 1985, p. 266)
- "Machine ethics is concerned with giving machines ethical principles or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making." (Anderson & Anderson, 2011, p. 1)
One of the best-known ethical frameworks was developed by Ken Blanchard and Norman Vincent
Peale (Blanchard & Peale, 2011). The framework consists of three main questions: Is it legal? Is
it fair? How does it make me feel? Another framework is the Markkula Center Framework, which
identifies five approaches to dealing with ethical issues, including the utilitarianism approach, rights
approach, fairness or justice approach, common good approach, and virtue approach (Markkula
Center for Applied Ethics, 2015).
AI ethics, however, is a relatively new field, and the subsequent parts of this paper will discuss
AI ethics in terms of the ethics of AI and ethical AI.

The ethics of AI is part of the ethics of advanced technology that focuses on robots and other artificially
intelligent agents. It can be divided into roboethics (robot ethics) and machine ethics.
Roboethics is concerned with the moral behaviors of humans as they design, construct, use, and
interact with AI agents, and with the associated impacts of robots on humanity and society. In this paper,
we consider it as the ethics of AI, which deals with ethical issues related to AI, including ethical issues
that may arise when designing and developing AI (e.g., human biases in data, data privacy,
and transparency) and ethical issues caused by AI (e.g., unemployment and wealth distribution).
Further, as machines become more intelligent and may one day gain consciousness, we should consider
robot rights: the concept that people should have moral obligations towards intelligent machines,
similar to human rights and animal rights. For instance, is it ethical to deploy intelligent military
robots to dangerous battlefields or to assign robots to dirty environments? The rights to liberty,
freedom of expression, equality, and thought and emotion belong to this category.
Machine ethics is the field of research that addresses the design of Artificial Moral Agents
(AMAs) and their moral behaviors. As technology advances and
robots become more intelligent, robots or artificially intelligent agents should behave morally and
exhibit moral values. We consider the ethical behaviors of AI agents as ethical AI. Currently, the
best-known proposed rules for governing AI agents are the Three Laws of Robotics put forth by
Isaac Asimov (Asimov, 1950). First Law: a robot may not injure a human being or, through
inaction, allow a human being to come to harm. Second Law: a robot must obey the orders given to
it by human beings except where such orders would conflict with the First Law. Third Law: a robot
must protect its own existence as long as such protection does not conflict with the First or Second Law.
Table 2 depicts the two dimensions of AI ethics (i.e., ethics of AI and ethical AI) and how the
two dimensions relate to AI, humans, and society. The ethical interaction between AIs is a dimension
newly considered in this paper, and it is especially important for AIs with consciousness. Not only
should AIs do no harm to humans and preserve themselves, but they should also do no harm to other
intelligent agents. Thus, the Three Laws of Robotics may need to be extended to take into account
the interaction between intelligent AIs (a toy sketch of such prioritized rules follows).
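To make the prioritized structure of such rules concrete, the following is a minimal, purely illustrative sketch, not a proposal from the literature. Laws are checked in priority order, and a hypothetical fourth law covering harm to other intelligent agents is added, as suggested above; all names are invented for illustration.

```python
def permissible(action, laws):
    """Return the verdict of the highest-priority law that has an opinion."""
    for law in laws:                    # laws are ordered by priority
        verdict = law(action)
        if verdict is not None:         # this law has an opinion...
            return verdict              # ...and it overrides all later laws
    return True                         # no law objects: action is allowed

# Each "law" inspects an action description and returns False (forbidden),
# True (permitted), or None (no opinion).
first_law  = lambda a: False if a.get("harms_human") else None
second_law = lambda a: True if a.get("ordered_by_human") else None
third_law  = lambda a: False if a.get("self_destructive") else None
# Hypothetical extension covering AI-to-AI interaction, as discussed above:
fourth_law = lambda a: False if a.get("harms_other_agent") else None

laws = [first_law, fourth_law, second_law, third_law]

print(permissible({"ordered_by_human": True, "harms_human": True}, laws))  # False
print(permissible({"ordered_by_human": True}, laws))                       # True
```

Even this toy version shows the design question raised above: where a rule about other intelligent agents should sit in the priority order is itself an ethical choice.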
Understanding the ethics of AI will help to establish a framework for building ethical AI. Figure
1 shows the initial framework for building ethical AI.

Figure 1. Initial framework for building ethical AI

Table 2. AI ethics

Ethics of AI:
- AI: principles for developing AI to interact with other AIs ethically
- Human: principles for developing AI to interact with humans ethically
- Society: principles for developing AI to function ethically in society

Ethical AI:
- AI: how should AI interact with other AIs ethically?
- Human: how should AI interact with humans ethically?
- Society: how should AI operate ethically in society?

Recently, criminals used AI-based voice technology to impersonate a chief executive’s voice and
demand a fraudulent transfer of $243,000 (Stupp, 2019). This is not an isolated incident. PINDROP
reported a 350% rise in voice fraud between 2013 and 2017 (Livni, 2019). AI voice impersonation
being used for fraud is not the only concern. Deepfake, which is an approach to superimpose and
synthesize existing images and videos onto source images or videos using Machine Learning (ML), is
also becoming common. With deepfake, human faces can be superimposed on pornographic video
content and political leaders can be portrayed in videos to incite violence and panic. Deepfake may
also be used in the election cycle to influence and bias the American electorate (Libby, 2019). In 2017,
researchers from the University of Washington created a synthetic Obama, using a neural network
AI to model the shape of Obama’s mouth (BBC News, 2017). Although there was no security threat
from the University of Washington experiment, the demonstration illustrates what is possible with
AI-altered videos. Fake news is another concern. For example, an AI fake text generator was deemed
too dangerous to be released by its creator, OpenAI, for fear of misuse. Undoubtedly, advanced AI
agents could put individuals, companies, and societies at increased risk.
Human rights, such as privacy, freedom of association, freedom of speech, the right to work, non-
discrimination, and access to public services, should always come first. However, the
growing use of AI in the criminal justice system raises discrimination concerns. Recidivism
risk-scoring software used across the criminal justice system has shown incidents of discrimination
based on race, gender, and ethnicity. For instance, some defendants are falsely labeled as high risk
because of their ethnicity.
The right to privacy, which is essential to human dignity, can also be affected by AI. As big data
technology develops, large-scale data collection increasingly interferes with the rights to privacy and
data security. For instance, ML models can synthesize cell phone location data to accurately estimate
personal characteristics, such as gender, age, marital status, and occupation. Another example is
government surveillance. In the U.S., half of all adults are already in law enforcement facial recognition
databases (Teicher, 2018), which threatens to end anonymity. The rights to freedom of expression,
assembly, and association may accordingly be affected. Last but not least, the right to work and to an
adequate standard of living (Access Now, 2018) would be affected. Automation has resulted in job loss
and job displacement in certain industries, and the rapid advancement of AI would accelerate this trend.

Advanced AI will spark unprecedented business gains, but along the way, government and industry
leaders will have to grapple with a smorgasbord of ethical dilemmas, such as data privacy issues,
machine learning bias, public safety concerns, and job displacement and unemployment.
To guide their strategies in developing, adopting, and embracing AI technologies,
organizations should consider establishing AI ethics frameworks and guidelines. Several institutions
have started work on this issue and published guidelines. Table 3 lists eight institutions that work
on AI ethical frameworks or principles, together with their objectives. Table 4 shows the content of
those ethical frameworks and principles.
Table 5 summarizes the frequency of each factor across the frameworks shown in Table 4.
Different frameworks often include the same or similar factors, but each also includes distinct
considerations. The study of the ethical issues of AI is still a new area, and more discussion is
needed to establish a definitive framework for building ethical AI. In the next section, we discuss
each ethical issue in detail.
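The tallies reported in Table 5 can be reproduced with a simple frequency count over the coded frameworks. The sketch below is illustrative only; the dictionary is abbreviated to three institutions rather than the full coding behind Tables 4 and 5.

```python
from collections import Counter

# Abbreviated, illustrative coding of factors per framework
# (three of the eight institutions shown).
frameworks = {
    "FLI":  {"responsibility/accountability", "privacy", "transparency", "safety"},
    "IAPP": {"responsibility/accountability", "privacy", "transparency", "bias/fairness"},
    "IEEE": {"responsibility/accountability", "transparency", "human well-being"},
}

# Tally how many frameworks mention each factor, as in Table 5.
counts = Counter(factor for factors in frameworks.values() for factor in factors)
for factor, n in counts.most_common():
    print(f"{factor}: {n}")
```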

AI, at the present stage, is referred to as narrow AI or weak AI. Weak AI can do well in a narrow
and specialized domain. The performance of narrow AI depends heavily on its training data and
programming, which are closely tied to big data and to humans. The ethical issues of narrow AI,
thus, involve human factors.
“A different set of ethical issues arises when we contemplate the possibility that some future
AI systems might be candidates for having moral status” (Bostrom & Yudkowsky, 2014, p. 5).
They adopt the definition of moral status that “X has moral status = because X counts morally in its
own right, it is permissible/impermissible to do things to it for its own sake.” From this perspective,
once AI has moral status, we should treat it not as a machine or system, but as an entity with rights
comparable to those of humans. The technological singularity, the hypothesized point at which
technological growth becomes uncontrollable and irreversible, may arrive as AI advances. If it
happens, human civilization would be affected, and robot rights and consciousness would need to
be considered. These issues, however, are beyond the scope of this paper. The following discussion
mainly focuses on ethical issues related to narrow AI.

Table 3. Institutions’ work on AI ethics and their objectives

- Future of Life Institute (2017): This report emphasizes “do no harm”. It requires the development of AI to benefit society, foster trust and cooperation, and avoid competitive racing.
- International Association of Privacy Professionals (IAPP, 2018): The proposed framework explores risks to privacy, fairness, transparency, equality, and many other issues that can be amplified by big data and artificial intelligence. It provides an overview of how organizations can operationalize data ethics and reflect ethical considerations in decision making.
- Institute of Electrical and Electronics Engineers (IEEE, 2019): The proposed design lays out practices for setting up an AI governance structure, including pragmatic treatment of data management, affective computing, economics, legal affairs, and other areas. One key priority is to increase human well-being as a metric for AI progress. In addition, the IEEE principles require that everyone involved in the design and development of AI be educated to prioritize ethical considerations.
- The Public Voice (2018): The proposed guidelines aim to improve the design and use of AI, maximize the benefits of AI, protect human rights, and minimize risks and threats associated with AI. They claim that the guidelines should be incorporated into ethical standards, adopted in national law and international agreements, and built into the design of systems.
- European Commission’s High-Level Expert Group on AI (European Commission, 2019): The guidelines are designed to guide the AI community in the development and use of “trustworthy AI” (i.e., AI that is lawful, ethical, and robust). They emphasize four principles: respect for human autonomy, prevention of harm, fairness, and explicability.
- AI4People (Floridi et al., 2018): This framework introduces the core opportunities and risks of AI for society; presents a synthesis of five ethical principles that should undergird its development and adoption; and offers 20 concrete recommendations (to assess, to develop, to incentivize, and to support good AI) which in some cases may be undertaken directly by national or supranational policymakers.
- United Nations Educational, Scientific, and Cultural Organization (UNESCO, 2017): The proposed ethical principles aim to provide decision-makers with criteria that extend beyond purely economic considerations.
- Australia’s Ethics Framework (Dawson et al., 2019): This framework highlights the ethical issues that are emerging or likely to emerge in Australia from AI technologies and outlines initial steps toward mitigating them. Its goal is to provide a pragmatic assessment of key issues to help foster ethical AI development in Australia.
Research on ethical issues of AI falls into three categories: features of AI that may give rise to
ethical problems (Timmermans et al., 2010), human factors that cause ethical risks (Larson, 2017),
and social impact of ethical AI issues.

3.1. Features of AI That May Give Rise to Ethical Problems

3.1.1. Transparency
Machine learning is a powerful tool, but its inner processing is hard to explain and is usually
called a “black box”. The black box makes the algorithms mysterious even to their creators. This
limits people’s ability to understand the technology, leads to significant information asymmetries
between AI experts and users, and hinders human trust in the technology and in AI agents. Trust is
crucial in all kinds of relationships and is a prerequisite for acceptance (Siau & Wang, 2018).
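As an illustration of one partial, widely used response to the black-box problem, the following sketch applies a model-agnostic, post-hoc explanation technique (permutation feature importance) to an opaque model. It assumes scikit-learn is available and uses synthetic data; it is not a technique prescribed by this paper, and such explanations only approximate what the model relies on.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data and an opaque (hard-to-interpret) model.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure how much accuracy drops:
# a large drop means the black-box model relies heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```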
Table 4. Ethical frameworks and principles from eight institutions

- Future of Life Institute (2017): Safety; Failure Transparency; Judicial Transparency; Responsibility; Value Alignment; Human Values; Personal Privacy; Liberty and Privacy; Shared Benefit; Shared Prosperity; Human Control; Non-subversion; AI Arms Race
- International Association of Privacy Professionals (IAPP, 2018): Data Ethics; Privacy; Bias; Accountability; Transparency; Human Rights
- Institute of Electrical and Electronics Engineers (IEEE, 2019): Human Rights; Well-being; Data Agency; Effectiveness; Transparency; Accountability; Awareness of Misuse; Competence
- The Public Voice (2018): Right to Transparency; Right to Human Determination; Identification Obligation; Fairness Obligation; Assessment and Accountability Obligation; Accuracy, Reliability, and Validity Obligation; Data Quality Obligation; Public Safety Obligation; Cybersecurity Obligation; Prohibition on Secret Profiling; Prohibition on Unitary Scoring; Termination Obligation
- European Commission’s High-Level Expert Group on AI (European Commission, 2019): Human Agency and Oversight; Technical Robustness and Safety; Privacy and Data Governance; Transparency; Diversity; Societal and Environmental Well-being; Accountability
- AI4People (Floridi et al., 2018): Beneficence (promoting well-being, preserving dignity, sustaining the planet); Non-maleficence (privacy, security, monitoring AI advancement/capability); Autonomy (the power to decide); Justice (promoting prosperity, preserving solidarity); Explicability (enabling the other principles through intelligibility and accountability)
- United Nations Educational, Scientific, and Cultural Organization (UNESCO, 2017): Human Dignity; Value of Autonomy; Value of Privacy; “Do No Harm” Principle; Principle of Responsibility; Value of Beneficence; Value of Justice
- Australia’s Ethics Framework (Dawson et al., 2019): Generates Net Benefits; Regulatory and Legal Compliance; Fairness; Contestability; Do No Harm; Privacy Protection; Transparency and Explainability; Accountability

Further, because humans cannot interpret the black box, AI may evolve without
human monitoring and guidance. For example, in 2017, Facebook shut down an AI engine after
finding that the AI had created its own unique language that humans could not understand
(Bradley, 2017). Whether humans can control AI agents is a major concern. Humans prefer
AI agents to always do exactly what we want them to do. For instance, if a passenger asks a self-driving
taxi to drive to the airport as fast as possible, the taxi may ignore traffic rules to reach the
airport at the highest possible speed. This is not what the customer wants, but it is what the customer
literally asked for (see the sketch below). However, considering this problem from another perspective,
if we treat AI agents ethically, is it ethical that we control what actions they take and how they make decisions?
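The following toy sketch, with purely hypothetical route data, makes the taxi example concrete: a literal objective (minimize travel time) picks an unlawful plan, while a constrained objective first filters out plans that violate explicit rules.

```python
# Hypothetical routes a self-driving taxi could plan.
routes = [
    {"name": "speeding shortcut", "minutes": 18, "breaks_traffic_law": True},
    {"name": "highway",           "minutes": 25, "breaks_traffic_law": False},
    {"name": "back roads",        "minutes": 34, "breaks_traffic_law": False},
]

# Literal reading of "as fast as possible": picks the unlawful shortcut.
fastest = min(routes, key=lambda r: r["minutes"])

# Constrained reading: only lawful routes are candidates at all.
lawful = [r for r in routes if not r["breaks_traffic_law"]]
fastest_lawful = min(lawful, key=lambda r: r["minutes"])

print(fastest["name"])         # speeding shortcut
print(fastest_lawful["name"])  # highway
```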
3.1.2. Data Security and Privacy
The development of AI agents relies heavily on huge amounts of data, including personal and
private data. Almost all of the application domains in which deep learning is successful, such as Apple
Siri and Google Home, have access to mountains of data. As more data are generated in societies and
businesses, there are more opportunities to misuse those data. For instance, health records contain
sensitive information; if they are not adequately protected, a rogue institution could gain access to
them and harm patients personally and financially. Thus, data must be managed properly to prevent
misuse and malicious use (Timmermans et al., 2010). To keep data safe, each action taken on the
data should be recorded in detail. Both the data per se and the transaction records may create
privacy-related risks. It is, therefore, important to consider what should be recorded, who should
take charge of the recording, and who can have access to the data and the records.
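As a concrete illustration of this record-keeping idea, the sketch below logs every access to a sensitive record in an append-only audit trail. The schema (who, what, when, purpose) is hypothetical rather than a prescribed standard, and, as noted above, the audit trail itself becomes sensitive data that must be protected and access-controlled.

```python
import json
import time

AUDIT_LOG = "data_access_audit.jsonl"  # hypothetical log location

def log_access(user_id: str, record_id: str, action: str, purpose: str) -> None:
    """Append one audit entry describing who did what to which record, and why."""
    entry = {
        "timestamp": time.time(),  # when the access happened
        "user": user_id,           # who touched the data
        "record": record_id,       # which record was touched
        "action": action,          # e.g., read / update / delete / export
        "purpose": purpose,        # stated justification for the access
    }
    with open(AUDIT_LOG, "a") as f:  # append-only by convention
        f.write(json.dumps(entry) + "\n")

log_access("clinician-17", "patient-0042", "read", "treatment review")
```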
Table 5. Summary of factors across the eight ethical frameworks (count = number of frameworks including the factor)

- Responsibility/Accountability: 8
- Privacy: 7
- Transparency: 6
- Human Values/Do No Harm: 6
- Human Well-Being/Beneficence: 4
- Safety: 3
- Liberty/Autonomy: 3
- Human Control: 3
- Bias/Fairness: 3
- Shared Benefit: 2
- AI Arms Race: 2
- Justice: 2
- Prosperity: 1
- Effectiveness: 1
- Accuracy: 1
- Reliability: 1
- Diversity: 1
- Human Dignity: 1
- Regulatory and Legal Compliance: 1

3.1.3. Autonomy, Intentionality, and Responsibility
Whether robots are regarded as moral agents affects the interactions (Sullins, 2011). To be seen
as real moral agents, robots have to meet three criteria: autonomy, intentionality, and responsibility
(Sullins, 2011). Autonomy means that the machines are not under the direct control of any other
agents. Intentionality means that machines “act in a way that is morally harmful or beneficial and
the actions are seemingly deliberate and calculated.” Responsibility means that the machines fulfill
some social role that carries with it some assumed responsibilities. In the classic trolley problem, the
one who controls the trolley is the ethical producer (Allen et al., 2006). To continue on the current
track and kill five workers, or to turn onto another track and kill a lone worker, is a hard ethical choice
for humans. What choice should or would AI make? Who should be responsible for the AI’s choice?
The military robots that take charge of bomb disposal are ethical recipients. Is it ethical that humans
decide the destiny of these robots? Human ethics and morality today may not be seen as perfect by
future civilizations (Bostrom & Yudkowsky, 2014). One reason is that humans cannot solve all the
recognized ethical problems; another is that humans cannot recognize all the ethical problems.

3.2. Human Factors That Cause Ethical Risks

Human biases, such as gender bias (Larson, 2017) and race bias (Koolen & van Cranenburgh, 2017),
may be inherited by AI. AI agents are only as good as the data humans put into them.
Because AI agents are trained by humans on datasets made by humans, existing biases may be
learned by AI agents and exhibited in real applications. Once biased data are used by an AI agent,
the bias becomes an ongoing problem. For instance, software used to predict future criminals showed
bias against a certain race (Bossmann, 2016). The bias came from training data that contained
human biases. Thus, figuring out how to program and train AI agents without human biases is critical.
3.2.1. Accountability
When an AI agent fails at an assigned task, who should be responsible? This may lead to what
is referred to as “the problem of many hands” (Timmermans et al., 2010). When using an AI agent,
an undesirable consequence may be caused by the programming code, the input data, improper
operation, or other factors. Who should be the responsible entity for the undesirable consequence:
the programmer, the data owner, or the end user?
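One practical aid for the problem of many hands is to attach a provenance record to every deployed model, naming the party accountable for each artifact so that failures can be traced. The fields and values below are hypothetical and purely illustrative.

```python
from dataclasses import dataclass, asdict

@dataclass
class ModelProvenance:
    """Who is accountable for which artifact of a deployed model (illustrative)."""
    model_version: str
    training_data_version: str
    data_owner: str   # accountable for data quality and consent
    developer: str    # accountable for code and training choices
    operator: str     # accountable for deployment and monitoring

record = ModelProvenance(
    model_version="risk-scorer-2.3",
    training_data_version="court-records-2019-09",
    data_owner="records-office",
    developer="vendor-ml-team",
    operator="county-justice-dept",
)
print(asdict(record))
```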
3.2.2. Ethical Standards
“The ultimate goal of machine ethics is to create a machine that itself follows an ideal ethical principle
or set of principles” (Anderson & Anderson, 2007, p. 15). It is theoretically easy but practically hard
to formulate ethical principles for AI agents. Without comprehensive and unbiased ethical standards,
how can humans train a machine to be ethical? Further, how can we make certain that intelligent
machines understand ethical standards in the same way that we do (Wang & Siau, 2019a)? For
instance, if we program robots to always do no harm, we must first make sure that the robots
understand what harm is. This leads to another problem: what should be the ethical standard for
harm? A global or universal level of ethics is needed. To put such ethics into machines, it is necessary
to reduce the information asymmetries between AI programmers and the creators of ethical standards.
While attempting to formulate ethical standards for AI and intelligent machines, researchers and
practitioners should try to better understand existing ethical principles so that they will be able to
apply the ethical principles to research activities and help train developers to build ethical AIs (Wang
and Siau, 2018).
3.2.3. Human Rights Laws
Without training in human rights laws, software engineers may unknowingly write code that
violates key human rights. It is crucial to teach human rights laws to software
engineers. Ensuring privacy-by-design is important and more cost-efficient than the alternatives.
A better knowledge of human rights laws can help AI designers and engineers eliminate or at least
alleviate the discrimination and invasion of privacy issues in AI.

3.3. Social Impact of AI Ethical Issues

3.3.1. Automation and Job Replacement
The debate on whether the AI age and Industry 4.0 (Siau, Xi, & Zou, 2019; Wang & Siau, 2019b)
will create more jobs than they eliminate is still heated. Stories of factory workers being
replaced by automated systems and robots abound. Some argue that AI will also create millions
of new jobs, many of which do not exist today. Nevertheless, concerns remain
about workforce disruptions in the age of AI, such as how humans and AI agents will cooperate.
The labor market will be disrupted and transformed by AI;
what is not entirely clear is the speed and scope of the change. The term “useless class” has been
suggested by Harari (2016). Universal Basic Income (UBI) has been piloted in some countries,
and the Freedom Dividend, a universal basic income for all American adults with no strings
attached, was the campaign platform of 2020 U.S. presidential candidate Andrew Yang. The
original intention of technology development is to assist humans and improve human lives. If
automation and AI cause massive job replacement and unemployment, should we keep up the rapid
pace of technology development? Also, how can we protect human rights and human well-being
while keeping up with the rapid evolutions and revolutions of technology?
3.3.2. Accessibility
Accessibility, as an ethical principle, refers to whether systems, products, and services are available,
accessible, and suitable for all people, including the elderly and people with disabilities.
Considering the complexity of new technology and high-tech products, as well as the aging
populations in some countries, the accessibility of new technology will directly affect human well-
being. Technology development should benefit humans. But if only a portion of people benefit, is it
ethical and fair? Consideration must be given to developing systems, products, and services that are
accessible to all, and the benefits of advanced technology should be fairly distributed to all (Wang
and Siau, 2019b).
3.3.3. Democracy and Civil Rights
Unethical AI results in the fragmentation of truth, an eventual loss of trust, and a loss of societal
support for AI technology. The loss of informed and trusting communities dents the strength of
democracies. As democracies suffer and structural biases are amplified, the free exercise of civil rights
no longer remains uniformly available to all. AI ethics needs to take democracy and civil rights
into consideration.
Figure 2 summarizes the above ethical issues in AI. Solving these issues properly will help to
establish a framework for building ethical AI.

Figure 2. AI Ethics: Framework of building ethical AI


Figure 2 establishes the framework for AI ethics, listing the factors that need to be considered in
defining the ethics of AI in order to build ethical AI. Even though defining the ethics of AI is
multifaceted and convoluted, putting the ethics of AI into practice to build ethical AI is no easy feat
either. What should ethical AI look like? In the simplest form, we may say that ethical AI should do
no harm to humans. But what is harm? What constitutes human rights? Many questions need to be
answered before we can design and build ethical AI. Ethical sensitivity training is required to make
good ethical decisions. In theory, AI should be able to recognize ethical issues. If AI is capable of
making decisions, how can we design and develop an AI that is sensitive to ethical issues?
Unfortunately, this is not easy to implement and realize in practice. Long-term and sustained efforts
are needed. Nonetheless, understanding the importance of developing ethical AI and starting to work
on it step by step are positive steps forward.
Many institutions, such as Google, IBM, Accenture, Microsoft, and Atomium-EISMD, have
started working on building ethical principles to guide the development of AI. In November 2018,
the Monetary Authority of Singapore (MAS), together with Microsoft and Amazon Web Services,
launched the FEAT principles (i.e., fairness, ethics, accountability, and transparency) for the use of
AI. Academics, practitioners, and policymakers should work together to widen the engagement to
establish ethical principles for AI design, development, and use.
Beyond frameworks and principles, protective guardrails are needed to ensure ethical behavior.
Good governance is necessary to enforce the implementation of, and adherence to, these ethical
principles, and a legal void is waiting to be filled by regulatory authorities (Hanna, 2019). Whether
based on case law or established via legislative and regulatory obligations, such legal and regulatory
instruments will be critical to the good governance of AI, which in turn helps to implement and
enforce the ethics of AI and enable the development of ethical AI.
To protect the public, the U.S. has long enacted regulatory instruments, such as rules against
discrimination and for equal employment opportunity, and HIPAA Title II, and has proposed others,
such as the Commercial Facial Recognition Privacy Act and the Algorithmic Accountability Act.
All these instruments can be useful in guiding the development of legal and regulatory policies and
frameworks for AI ethics.
In addition to legal and governmental rules, self-regulation plays an important role as well.
Communication and information disclosure can help society as a whole ensure the development
and deployment of ethical AI. Discussion forums and ethical guidelines published by companies,
industries, and policymakers can help educate the public about the benefits of AI and dispel myths
and misconceptions about AI. In addition, a better knowledge of legal frameworks on human rights,
a strengthened sense of security, and an understanding of the ethical issues related to AI can foster
trust in AI and enable the development of ethical AI more efficiently and effectively.

Moor (2006) indicates three potential ways to transform AI into ethical agents: to train AI into
“implicit ethical agents”, “explicit ethical agents”, or “full ethical agents”. Implicit ethical agents mean
constraining the machine’s actions to avoid unethical outcomes. Explicit ethical agents mean stating
explicitly what action is allowed and what is forbidden. Full ethical agents mean machines that, like
humans, have consciousness, intentionality, and free will. An implicit ethical agent can restrict the
development of AI. An explicit ethical agent currently receives the most attention and is considered
more practical (Anderson & Anderson, 2007). A full ethical agent is still a research aspiration, and
no one is sure when it will become a reality.
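The contrast between the first two agent types can be caricatured in a few lines of code. In this deliberately simplistic sketch (the actions and rules are hypothetical), the implicit ethical agent is hard-constrained to a safe whitelist of actions, while the explicit ethical agent consults explicitly stated prohibitions and may act whenever no rule objects.

```python
# Hypothetical action vocabulary for a care robot.
FORBIDDEN = {"disclose_patient_data", "administer_unapproved_drug"}
ALLOWED = {"remind_medication", "call_nurse", "fetch_water"}

def implicit_agent(action: str) -> bool:
    # Implicit ethical agent: ethics is baked in as hard constraints;
    # the machine simply cannot select actions outside a safe whitelist.
    return action in ALLOWED

def explicit_agent(action: str) -> bool:
    # Explicit ethical agent: the machine consults explicitly stated
    # rules about what is forbidden, and may act when no rule objects.
    return action not in FORBIDDEN

print(implicit_agent("call_nurse"), explicit_agent("call_nurse"))    # True True
print(implicit_agent("open_window"), explicit_agent("open_window"))  # False True
```

The second example shows why an implicit agent can restrict the development of AI, as noted above: any action not anticipated by the whitelist is blocked, even if harmless.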
When a full ethical agent is realized, how to treat an AI agent that has consciousness, moral
sense, emotion, and feelings will be another critical consideration. For instance, is it ethical to “kill”
(shut down) an AI agent if it replaces human jobs or even endangers human lives? Is it ethical to
deploy robots into a dangerous environment? These questions are intertwined with human ethics
and moral values.

The president-elect of the European Commission made clear in her recently unveiled policy agenda that
the cornerstone of the European AI plan will be to ensure that “AI made in Europe” is more ethical
than AI made anywhere else in the world. The European Commission is not the only one that is
concerned about AI ethics. Many countries are also working on AI ethics. U.S. agencies such as
the Department of Defense and the Department of Transportation have launched their initiatives
to ensure the ethical use of AI within their respective domains. In China, the government-backed
Beijing Academy of Artificial Intelligence has developed the Beijing AI Principles that rival those
of other countries, and the Chinese Association for Artificial Intelligence has also developed its own
ethics guidelines. Many non-European countries, including the United States, have signed on to the
Organization for Economic Co-operation and Development’s (OECD) AI Principles focusing on
“responsible stewardship of trustworthy AI.”
However, the makers and researchers of AI at this time are likely to pay more attention to hard
performance metrics, such as safety and reliability, or softer performance metrics, such as usability
and customer satisfaction. More nebulous concepts like ethics are not yet the most urgent consideration,
especially given the intense competition between companies and between nations.
Further, while some consumers may pay lip service to ethical design, their words do not match
their actions. For example, among consumers who said they distrust the Internet, only 12% report using
technological tools, such as virtual private networks (VPNs), to protect their data, according to a worldwide
Ipsos survey (CIGI-Ipsos, 2019). Instead, the most important factors influencing consumers’ purchasing
decisions remain price and quality. Right now, consumers care more about what AI can do than about
whether everything AI does is ethical.
This situation may put companies and institutions that are developing AI in a tradeoff situation:
whether to focus on AI advancement to maximize profit, or to focus on AI ethics to ensure that
society benefits from AI innovations.

CONCLUSION

Understanding and addressing ethical and moral issues related to AI is still in the infancy stage. AI
ethics is not simply about “right or wrong”, “good or bad”, and “virtue and vice”. It is not even a
problem that can be solved by a small group of people. However, ethical and moral issues related to
AI are critical and need to be discussed now. This research aims to highlight the urgent need
for various stakeholders to attend to the ethics and morality of AI agents. While attempting to
formulate the ethics of AI to enable the development of ethical AI, we will also understand human
ethics better, improve the existing ethical principles, and enhance our interactions with AI agents in this
AI age. AI ethics should be the central consideration in developing AI agents and not an afterthought.
The future of humanity may depend on the correct development of AI ethics!


REFERENCES

Allen, C., Wallach, W., & Smit, I. (2006). Why machine ethics? IEEE Intelligent Systems, 21(4), 12–17.
doi:10.1109/MIS.2006.83
Anderson, M., & Anderson, S. L. (2007). Machine ethics: Creating an ethical intelligent agent. AI Magazine,
28(4), 15–26.
Anderson, M., & Anderson, S. L. (2011). Machine ethics. Cambridge University Press. doi:10.1017/
CBO9780511978036
Asimov, I. (1950). Runaround. In I, Robot (The Isaac Asimov Collection Ed.). New York City: Doubleday.
Blanchard, K., & Peale, N. V. (2011). The power of ethical management. Random House.
Bossmann, J. (2016). Top 9 ethical issues in artificial intelligence. World Economic Forum. Retrieved from
https://www.weforum.org/ethical-issues-in-AI
Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In The Cambridge handbook of
artificial intelligence (pp. 316-334). Cambridge Press. doi:10.1017/CBO9781139046855.020
Bradley, T. (2017). Facebook AI creates its own language in creepy preview of our potential future. Forbes.
Retrieved from https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-
in-creepy-preview-of-our-potential-future/#45c65554292c
Churchill, L. R. (1999). Are We Professionals? A Critical Look at the Social Role of Bioethicists. Daedalus,
253–274. PMID:11645877
CIGI-Ipsos. (2019). 2019 CIGI-Ipsos Global Survey on Internet Security and Trust. Retrieved from http://www.
cigionline.org/internet-survey-2019
Dawson, D., Schleiger, E., Horton, J., McLaughlin, J., Robinson, C., Quezada, G., . . . Hajkowicz, S. (2019).
Artificial Intelligence: Australia’s Ethics Framework. Data61 CSIRO, Australia.
European Commission. (2019). Ethics guidelines for trustworthy AI. Retrieved from https://ec.europa.eu/digital-
single-market/en/news/ethics-guidelines-trustworthy-ai
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... Vayena, E. (2018).
AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations.
Minds and Machines, 28(4), 689–707. doi:10.1007/s11023-018-9482-5 PMID:30930541
Future of Life Institute. (2017). Asilomar AI Principles. Retrieved from https://futureoflife.org/ai-principles/?cn-
reloaded=1
Hanna, M. (2019). We don’t need more guidelines or frameworks on ethical AI use. It’s time for regulatory
action. Brink the Edge of Risk. Retrieved from https://www.brinknews.com/we-dont-need-more-guidelines-or-
frameworks-on-ethical-ai-use-its-time-for-regulatory-action/
Harari, Y. N. (2016). Homo Deus: a brief history of tomorrow. Random House.
IAPP. (2018). White Paper -- Building Ethics into Privacy Frameworks for Big Data and AI. Retrieved from
https://iapp.org/resources/article/building-ethics-into-privacy-frameworks-for-big-data-and-ai/
IEEE. (2019). Ethically aligned Design. Retrieved from https://ethicsinaction.ieee.org/
Koolen, C., & van Cranenburgh, A. (2017). These are not the stereotypes you are looking for: Bias and fairness
in authorial gender attribution. In Proceedings of the First ACL Workshop on Ethics in Natural Language
Processing (pp. 12–22). doi:10.18653/v1/W17-1602
Larson, B. N. (2017). Gender as a variable in natural-language processing: Ethical considerations. In Proceedings
of the First ACL Workshop on Ethics in Natural Language Processing.
Libby, K. (2019). This Bill Hader Deepfake video is Amazing. It’s also Terrifying for our Future. Popular
Mechanics. Retrieved from https://www.popularmechanics.com/technology/security/a28691128/deepfake-
technology/
Livni, E. (2019). A new kind of cybercrime uses AI and your voice against you. Quartz. Retrieved from https://
qz.com/1699819/a-new-kind-of-cybercrime-uses-ai-and-your-voice-against-you/
Markkula Center for Applied Ethics. (2015). A framework for ethical decision making. Santa Clara University.
Retrieved from https://www.scu.edu/ethics/ethics-resources/ethical-decision-making/a-framework-for-ethical-
decision-making/
Moor, J. H. (1985). What is computer ethics? Metaphilosophy, 16(4), 266–275. doi:10.1111/j.1467-9973.1985.
tb00173.x
Moor, J. H. (2006). The nature, importance, and difficulty of machine ethics. IEEE Intelligent Systems, 21(4),
18–21. doi:10.1109/MIS.2006.80
Nalini, B. (2019). The Hitchhiker’s Guide to AI Ethics. Medium. Retrieved from https://towardsdatascience.
com/ethics-of-ai-a-comprehensive-primer-1bfd039124b0
BBC News. (2017). Fake Obama created using AI video tool [YouTube video]. BBC News. Retrieved from
https://www.youtube.com/watch?v=AmUC4m6w1wo
Access Now. (2018). Human rights in the age of artificial intelligence. Retrieved from https://www.accessnow.
org/cms/assets/uploads/2018/11/AI-and-Human-Rights.pdf
Paul, R., & Elder, L. (2006). The miniature guide to understanding the foundations of ethical reasoning.
Foundation for Critical Thinking.
Resnik, D. B. (2011). What is ethics in research and why is it important? National Institute of Environmental
Health Sciences, 1(10), 49–70.
Siau, K., & Wang, W. (2018). Building trust in artificial intelligence, machine learning, and robotics. Cutter
Business Technology Journal, 31(2), 47–53.
Siau, K., Xi, Y., & Zou, C. (2019). Industry 4.0: Challenges and opportunities in different countries. Cutter
Business Technology Journal, 32(6), 6–14.
Stupp, C. (2019). Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case. The Wall Street
Journal. Retrieved from https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-
cybercrime-case-11567157402
Sullins, J. P. (2011). When is a robot a moral agent? In Machine ethics (pp. 151–160). Cambridge University Press.
Teicher, J. G. (2018). What do facial recognition technologies mean for our privacy? The New York Times.
Retrieved from https://www.nytimes.com/2018/07/18/lens/what-do-facial-recognition-technologies-mean-for-
our-privacy.html
The Public Voice. (2018). Universal Guidelines for Artificial Intelligence. Retrieved from https://thepublicvoice.
org/ai-universal-guidelines/
Timmermans, J., Stahl, B. C., Ikonen, V., & Bozdag, E. (2010). The ethics of cloud computing: A conceptual
review. Proceedings of the IEEE Second International Conference Cloud Computing Technology and Science
(pp. 614-620). IEEE Press. doi:10.1109/CloudCom.2010.59
UNESCO. (2017). Report of World Commission on the Ethics of Scientific Knowledge and Technology on
Robotics Ethics. Retrieved from https://unesdoc.unesco.org/ark:/48223/pf0000253952
Wang, W., & Siau, K. (2018). Ethical and moral issues with AI: A case study on healthcare robots. In AMCIS
2018 Proceedings.
Wang, W., & Siau, K. (2019a). Artificial Intelligence, Machine Learning, Automation, Robotics, Future of Work
and Future of Humanity: A Review and Research Agenda. Journal of Database Management, 30(1), 61–79.
Wang, W., & Siau, K. (2019b). Industry 4.0: Ethical and moral predicaments. Cutter Business Technology
Journal, 32(6), 36–45.
Yu, J. (1998). Virtue: Confucius and Aristotle. Philosophy East & West, 48(2), 323–347. doi:10.2307/1399830

Keng Siau is Chair of the Department of Business and Information Technology at the Missouri University of Science
and Technology. Previously, he was the Edwin J. Faulkner Chair Professor and Full Professor of Management at
the University of Nebraska-Lincoln (UNL), where he was Director of the UNL-IBM Global Innovation Hub. Dr. Siau
also served as VP of Education for the Association for Information Systems. He has written more than 300 academic
publications, and is consistently ranked as one of the top information systems researchers in the world based on
the h-index and productivity rate. Dr. Siau’s research has been funded by the U.S. National Science Foundation,
IBM, and other IT organizations. He has received numerous teaching, research, service, and leadership awards,
including from the International Federation for Information Processing Outstanding Service Award, the IBM Faculty
Award, and the IBM Faculty Innovation Award. Dr. Siau received his Ph.D. in Business Administration from the
University of British Columbia. He can be reached at siauk@mst.edu.
Weiyu Wang holds a Master of Science degree in Information Science and Technology and an MBA from the
Missouri University of Science and Technology. Her research focuses on the impact of artificial intelligence (AI)
on economy, society, and mental well-being. She is also interested in the governance, ethical, and trust issues
related to AI. She can be reached at wwpmc@mst.edu.
... The process of attributing moral values and ethical principles to machines to resolve ethical issues they encounter, and enabling them to operate ethically is a form of applied ethics [3]. 'AI ethics' refers to "the principles of developing AI to interact with other AIs and humans ethically and function ethically in society" [4]. ...
... AI practitioners and researchers seem to have mixed perspectives about AI ethics. Some believe that there is no rush to consider AI related ethical issues as AI has a long way from being comparable to human capabilities and behaviours [4]. While others conclude that AI systems must be developed by considering ethics as they can have enormous societal impact [5], [22]. ...
... Critically, there are copious numbers of guidelines on AI ethics, making it challenging for AI developers to decide which guidelines to follow. Unsurprisingly, studies have been conducted to analyse the ever-growing list of specific AI principles [G10], [G17], [4]. For example, Jobin et al. [24] reviewed 84 ethical AI principles and guidelines and concluded that only five AI ethical principles -transparency, fairness, nonmaleficence, responsibility and privacy -are mostly discussed and followed. ...
Preprint
Full-text available
The term 'ethics' is widely used, explored, and debated in the context of developing Artificial Intelligence (AI) systems. In recent years, there have been numerous incidents that have raised the profile of ethical issues in AI development and led to public concerns about the proliferation of AI technology in our everyday lives. But what do we know about the views and experiences of those who develop these systems - the AI developers? We conducted a Socio-Technical Grounded Theory Literature Review (ST-GTLR) of 30 primary empirical studies that included AI developers' views on ethics in AI to derive five categories that discuss AI developers' views on AI ethics: developer's awareness, perception, needs, challenges, and approach. These are underpinned by multiple codes and concepts that we explain with evidence from the included studies. Through the steps of advanced theory development, we also derived a set of relationships between these categories and presented them as five hypotheses, leading to the 'theory of ethics in AI through the developer's prism' which explains that developers' awareness of AI ethics directly leads to their perception about AI ethics and its implementation as well as to identifying their needs, and indirectly leads to identifying their challenges and coming up with approaches (applied and potential strategies) to overcome them. The theory provides a landscape view of the key aspects that concern AI developers when it comes to ethics in AI. We also share an agenda for future research studies and recommendations for developers, managers, and organisations to help in their efforts to better implement ethics in AI.
... Thus, ethics can be defined as the discipline that is concerned with understanding what is correct versus what is incorrect, and establishes moral obligations and duties of the agents it studies (e.g. humans, robots, etc.) (Siau and Wang 2020). But, at the root of ethics as a human practice, it is important to consider that it does not offer absolute solutions to specific dilemmas and that is why the emphasis should be placed on being concerned with understanding, what ethics does is to initiate the necessary debates to guide decision making, it should not have the obligation to close or conclude the discussion (Savater 1991). ...
... humans, robots, etc.) (Siau and Wang 2020). But, at the root of ethics as a human practice, it is important to consider that it does not offer absolute solutions to specific dilemmas and that is why the emphasis should be placed on being concerned with understanding, what ethics does is to initiate the necessary debates to guide decision making, it should not have the obligation to close or conclude the discussion (Savater 1991). But, how to instruct creators and developers to have the ability to open these discussions? ...
Article
Full-text available
Theories related to cognitive sciences, Human-in-the-loop Cyber-physical systems, data analysis for decision-making, and computational ethics make clear the need to create transdisciplinary learning, research, and application strategies to bring coherence to the paradigm of a truly human-oriented technology. Autonomous objects assume more responsibilities for individual and collective phenomena, they have gradually filtered into routines and require the incorporation of ethical practice into the professions related to the development, modeling, and design of algorithms. To make this possible, it is pertinent and urgent to bring them closer to the problems and approaches of the humanities. Increasingly transdisciplinary research must be part of the construction of systems that provide developers and scientists with the necessary bases to understand the ethical debate and therefore their commitment to society. This article considers two theories as articulating axes: Blu-menberg's, coming from the field of philosophy, for whom the process of technification and especially the implementation of mathematical models in their algorithmic form leads to an emptying of meaning and therefore makes programmers who implement their functions to be alien to the concerns that gave them origin; Daston's, belonging to the field of the history of science and according to which the division of labor in the processes of technification of the calculation implies a kind of subordination in which those who implement the inventions of a small group of privileged mathematicians ignore the procedures that put them into operation. Given these two theories, the black box models prevalent in AI development, and the urgency of establishing explanatory frameworks for the development of computational ethics, this article exposes the need to give a voice to the cultural history of consciousness for promoting the discussions around the implementation of mathematical algorithms. The paper takes as a reference the different points of view that have emerged around the study of technological ethics, its applicability, management, and design. It criticizes the current state of studies from a humanistic perspective and explains how the historical perspective allows promoting the training of software engineers, developers and creators so that they assume intuitions and moral values in the development of their work. Specifically, it aims to expose how cultural history, applied to the study of consciousness and its phenomena, makes those involved in this technological revolution aware of the effect that they, through their algorithms, have on society in general and on human beings in particular.
... Artificial intelligence (AI) applications in health care, education, finance, mining, communications, and arts have brought about rapid and dramatic advances in these fields (Hyder et al., 2019). The rapidly expanding potential of AI in the economy and society has raised a set of legal and ethical issues (Wang and Siau, 2019;Siau and Wang, 2020). In-depth research into the ethical and legal aspects of AI to enable policymakers to introduce effective legislation and regulate AI development and applications is needed. ...
... Ethical regulation requires that the ethical and moral norms AI must follow be incorporated into its programming from the very beginning of AI research and development, solving problems at the source. Legal regulation necessitates that the entire process of AI research, development, and application, as well as all relevant individuals involved, be constrained on the basis of solid legislation and effective law enforcement, which is an effective measure to avoid the social problems created by AI [16]. In general, foreign AI development strategies mainly include consolidating the theoretical foundation of AI, building an AI security system, accelerating supporting legislation, and promoting international cooperation in AI research and risk regulation [17]. ...
Article
Full-text available
With the continuous deepening of artificial intelligence (AI) in the medical field, the social risks brought by the development and application of medical AI products have become increasingly prominent, raising hidden concerns for the protection of civil rights, social stability, and healthy development. Our country's existing risk-regulation theories face many new problems when dealing with such risks. Introducing the theory of risk administrative law, this paper analyzes the social risks of medical AI, organically combines the principle of risk prevention with benefit measurement, and systematically and flexibly reconstructs the theoretical system of medical AI social risk assessment. The paper completed the following work: (1) reviewed the works and papers related to medical AI ethics, medical AI risk, and similar topics, and surveyed the current state of medical AI social risk regulation at home and abroad to support the follow-up research; (2) introduced the related technologies of artificial neural networks (ANNs) and constructed a risk assessment index system for medical AI; and (3) trained a neural network model on a self-designed dataset and used it to assess risk. The experimental results reveal that the error of the resulting backpropagation neural network (BPNN) model is relatively small, indicating that the model developed in this research is worth popularizing and applying.
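To make the kind of pipeline described in this abstract concrete, the sketch below trains a small backpropagation-style network to map risk indicators to an overall risk score. It is a minimal illustration under stated assumptions only: the indicator names, the synthetic data, and the use of scikit-learn's MLPRegressor as a stand-in for the study's BPNN are all assumptions, not details taken from the cited work.

```python
# Minimal sketch: a backpropagation-trained network for risk scoring.
# All indicator names and data below are hypothetical, not from the study.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)

# Hypothetical risk-assessment indicators, e.g. scores for data privacy,
# diagnostic error rate, and liability ambiguity (each scaled 0-1).
X = rng.random((200, 3))
# Hypothetical expert-assigned overall risk score used as the target.
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.02, 200)

X_scaled = MinMaxScaler().fit_transform(X)

# A small multilayer perceptron trained with backpropagation,
# standing in for the BPNN described in the abstract.
model = MLPRegressor(hidden_layer_sizes=(16,), activation="logistic",
                     max_iter=2000, random_state=0)
model.fit(X_scaled, y)

print("mean absolute error:", np.mean(np.abs(model.predict(X_scaled) - y)))
```

In a real risk-assessment setting the targets would come from expert ratings tied to the index system rather than a synthetic formula, and the model's error would be checked on held-out data.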
... This is where AI ethics has a solid role to play. Broadly speaking, the AI ethics literature can be clustered into the ethics of AI (i.e., ethical issues related to or caused by AI) and ethical AI (i.e., machine ethics, or the ethical and moral behaviour of AI) (Siau and Wang 2020). Essentially, this boils down to the What and How of AI. ...
Article
Full-text available
As artificial intelligence (AI) thrives and propagates through modern life, a key question to ask is how to include humans in future AI. Despite human involvement at every stage of the production process, from conception and design through to implementation, modern AI is still often criticized for its “black box” characteristics. Sometimes, we do not know what really goes on inside or how and why certain conclusions are reached. Future AI will face many dilemmas and ethical issues unforeseen by their creators, beyond those commonly discussed (e.g., trolley problems and their variants), to which solutions cannot be hard-coded and are often still up for debate. Given the sensitivity of such social and ethical dilemmas and their implications for human society at large, when and if our AI makes the “wrong” choice, we need to understand how it got there in order to make corrections and prevent recurrences. This is particularly true in situations where human livelihoods are at stake (e.g., health, well-being, finance, law) or when major individual or household decisions are taken. Doing so requires opening up the “black box” of AI, especially as AI acts, interacts, and adapts in a human world and interacts with other AI in that world. In this article, we argue for the application of cognitive architectures for ethical AI, in particular for their potential contributions to AI transparency, explainability, and accountability. We need to understand how our AI get to the solutions they do, and we should seek to do this on a deeper level in terms of the machine equivalents of motivations, attitudes, values, and so on. The path to future AI is long and winding, but it could arrive faster than we think. In order to harness the positive potential outcomes of AI for humans and society (and avoid the negatives), we first need to understand AI more fully, and we expect this will simultaneously contribute to a greater understanding of its human counterparts.
... Other key themes included privacy, safety and security, transparency and explainability, and accountability. Siau and Wang (2020) analyzed eight institutionally published ethical frameworks. Their findings indicated that the most frequently found themes were responsibility/accountability, privacy, transparency, and no harm (non-maleficence). ...
Article
Full-text available
As artificial intelligence (AI) becomes more prevalent, so does the interest in AI ethics. To address issues related to AI ethics, many government agencies, non-governmental organizations (NGOs), and corporations have published AI ethics guidelines. However, only a few test instruments have been developed to assess students’ attitudes toward AI ethics. Such an instrument is required to effectively prepare lecture curricula and materials on AI ethics, as well as to quantitatively evaluate the learning effect on students. In this study, we developed and validated an instrument (AT-EAI) to assess undergraduate students’ attitudes toward AI ethics. The instrument’s reliability, content validity, and construct validity were evaluated following its development and its application to a sample of 1,076 undergraduate students. Initially, the instrument comprised five dimensions totaling 42 items, while the final version had 17 items. Content validity was established with the involvement of experts (n = 8). Exploratory factor analysis identified five dimensions, and confirmatory factor analysis found the model to be well-fitting. The reliability analyses using Cronbach’s alpha and the corrected item-total correlation were both satisfactory. Considering all the results, the developed instrument possesses the psychometric properties necessary to be considered a valid and reliable measure of undergraduate students’ attitudes toward AI ethics. This study also found gender differences in the fairness, privacy, and non-maleficence dimensions. Furthermore, it revealed differences in students’ attitudes toward fairness based on their prior experience with AI education.
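The reliability statistics named in this abstract are straightforward to compute. The sketch below shows Cronbach's alpha and corrected item-total correlations on a synthetic Likert-type response matrix; the item data are invented for illustration, and the AT-EAI items themselves are not reproduced.

```python
# Minimal sketch of instrument-reliability statistics: Cronbach's alpha
# and corrected item-total correlations. The response data are synthetic.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    totals = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], totals - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 1))  # shared attitude factor across items
# Hypothetical 5-point Likert responses to a 5-item dimension.
scores = np.clip(np.round(3 + latent + rng.normal(0, 0.7, (300, 5))), 1, 5)

print("alpha:", round(cronbach_alpha(scores), 3))
print("item-total r:", np.round(corrected_item_total(scores), 3))
```

A common rule of thumb treats alpha above roughly 0.7 as acceptable, which is the kind of threshold an instrument-validation study such as this one would report against.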
... The benefit of improving users’ performance in the learning environment, however, serves the subprinciple of provable beneficence (Ong, 2021). Several ethical guidelines emphasize the benefit of the system in the development of affective AI (Floridi et al., 2018; Jobin et al., 2019; Siau and Wang, 2020). Although employing affect-adaptive tutoring systems entails some limitations regarding emotion recognition and the adaptation to emotional user states, the benefits of promoting learning and maintaining high performance outweigh the costs. ...
Article
Full-text available
Affect-adaptive tutoring systems detect the current emotional state of the learner and are capable of responding adequately by adapting the learning experience. Adaptations could be employed to steer the emotional state in a direction favorable to the learning process; for example, contextual help can be offered to mitigate frustration, or lesson plans can be accelerated to avoid boredom. Safety-critical situations, in which wrong decisions and behaviors can have fatal consequences, may particularly benefit from affect-adaptive tutoring systems, because accounting for affective responses during training may help develop coping strategies and improve resilience. Effective adaptation, however, can only be accomplished when it is known which emotions benefit learning performance in such systems. The results of preliminary studies indicate interindividual differences in the relationship between emotion and performance that require consideration by an affect-adaptive system. To that end, this article introduces the concept of Affective Response Categories (ARCs) that can be used to categorize learners based on their emotion-performance relationship. In an experimental study, N = 50 subjects (33% female, 19–57 years, M = 32.75, SD = 9.8) performed a simulated airspace surveillance task. Emotional valence was detected using facial expression analysis, and pupil diameters were used to indicate emotional arousal. A cluster analysis was performed to group subjects into ARCs based on their individual correlations of valence and performance as well as arousal and performance. Three different clusters were identified, one of which showed no correlations between emotion and performance. The performance of subjects in the other two clusters benefited from negative arousal and differed only in the valence-performance correlation, which was positive in one cluster and negative in the other. Based on the identified clusters, the initial ARC model was revised. We then discuss the resulting model, outline future research, and derive implications for the larger context of adaptive tutoring systems. Furthermore, potential benefits of the proposed concept are discussed, and ethical issues are identified and addressed.
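The clustering step this abstract describes can be sketched as follows: represent each learner by two per-subject correlations (valence vs. performance and arousal vs. performance) and group learners into Affective Response Categories. The data below are synthetic, and the use of k-means with three clusters is an assumption chosen to mirror the three clusters reported, not necessarily the study's actual method.

```python
# Minimal sketch of grouping learners into Affective Response Categories
# (ARCs) from per-subject emotion-performance correlations. Synthetic data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_subjects, n_trials = 50, 40

features = []
for _ in range(n_subjects):
    valence = rng.normal(size=n_trials)
    arousal = rng.normal(size=n_trials)
    # Hypothetical performance with a subject-specific emotional influence.
    performance = (rng.uniform(-1, 1) * valence
                   + rng.uniform(-1, 1) * arousal
                   + rng.normal(0, 0.5, n_trials))
    # Each learner is reduced to two correlation coefficients.
    features.append([np.corrcoef(valence, performance)[0, 1],
                     np.corrcoef(arousal, performance)[0, 1]])

# Three clusters, matching the number of ARCs reported in the study.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print("cluster sizes:", np.bincount(labels))
```

An affect-adaptive tutor could then select its adaptation strategy (e.g., whether to dampen or encourage arousal) based on the cluster a learner falls into.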
... The breakthrough in artificial intelligence has enabled the development of biometric facial recognition systems to control access to establishments. However, a challenge arises for technology companies: using the stored data ethically so that it is not perceived as a loss of privacy (Siau and Wang, 2020). This paper contributes to the literature by showing a practical, low-cost use case of applying CNNs to biometric check-in and check-out at a company while guaranteeing the hygienic safety measures needed to prevent COVID-19. ...
Conference Paper
In recent years, biometric systems have been used to provide access control security and to help identify and recognize people. However, with the spread of the pandemic, some biometric systems are now considered sources of COVID-19 transmission and contagion. This situation motivated us to develop a facial recognition access control system based on convolutional neural networks (CNNs) that, by requiring no physical contact with the devices, safeguards against COVID-19. A pre-trained VGG16 model was used, and the new CNN model was trained via transfer learning. The recognition system was tested on a public database, CelebA, obtaining an accuracy of 84%; this is not the best possible accuracy, as some authors report better results. However, our objective was not to provide the best accuracy on random data but to achieve good accuracy on the company's controlled data: our CNN model achieves an accuracy of 98% under controlled conditions with an average identification time of 80 milliseconds. It has a low implementation cost, which allows it to be competitive in low-income countries, like Ecuador, compared with the international costs of state-of-the-art systems.
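A minimal sketch of the transfer-learning setup this abstract describes: a pre-trained VGG16 backbone is frozen and a new classification head is trained to identify employees. The number of employees, the image size, and the data pipeline are illustrative assumptions; the authors' exact head architecture and training details are not given here.

```python
# Minimal sketch: VGG16 transfer learning for face-based access control.
# Employee count, image size, and data pipeline are hypothetical.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_EMPLOYEES = 30  # hypothetical size of the company's face gallery

# Load VGG16 trained on ImageNet and freeze its convolutional base.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# New classification head: one output class per enrolled employee.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_EMPLOYEES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds would be a tf.data.Dataset of (face_image, employee_id) pairs
# built from enrollment photos, e.g. via
# tf.keras.utils.image_dataset_from_directory("faces/", image_size=(224, 224)).
# model.fit(train_ds, epochs=10)
```

Freezing the backbone keeps training cheap, which is consistent with the paper's emphasis on a low-cost deployment: only the small head is learned from the company's controlled data.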
Article
Full-text available
Purpose The presented research explored artificial intelligence (AI) application in the learning and development (L&D) function. Although a few studies have reported on AI in people-management processes, a systematic and structured study that evaluates the integration of AI with L&D, focusing on scope, adoption, and affecting factors, is largely absent. This study aims to explore L&D-related AI innovations, AI’s role in L&D processes, the advantages of AI adoption, and the factors leading to effective AI-based learning, following the analyse, design, develop, implement and evaluate (ADDIE) approach. Design/methodology/approach The presented research adopted a systematic literature review method to critically analyse, synthesise and map the extant research by identifying the broad themes involved. The review approach includes determining a time horizon, database selection, article selection and article classification. Databases from Emerald, Sage, Taylor and Francis, etc. were used, and 81 research articles published between 1996 and 2022 were identified for analysis. Findings The results show that AI innovations such as natural language processing, artificial neural networks, interactive voice response and text-to-speech, speech-to-text, technology-enhanced learning and robots can improve L&D process efficiency. One can achieve this by facilitating the articulation of learning modules, identifying learners through face recognition and speech recognition systems, completing coursework, etc. Further, the results also show that AI can be adopted to evaluate learning aptitude, test learners’ memory, track learning progress, measure learning effectiveness, help learners identify mistakes and suggest corrections. Finally, L&D professionals can use AI to facilitate a quicker, more accurate and cheaper learning process that is suitable for a large learning audience at a time and is flexible, efficient, convenient and less expensive for learners. Originality/value In the absence of any systematic research on AI in the L&D function, the results of this study may provide useful insights to researchers and practitioners.
Article
Full-text available
The advancements in software technology and data science are enabling Industry 4.0, aka the Fourth Industrial Revolution or the Industrial Internet of Things (IIoT). While the first three industrial revolutions have brought about immense change, the impact of Industry 4.0 will be much wider and far greater, especially with regard to the easily overlooked ethical and moral aspects. Widening wealth gaps between countries and among classes of people within countries, a potential growing unemployment rate, data privacy and accessibility issues, and the treatment of intelligent agents (e.g., military robots) present new and complex ethical and moral dilemmas. In this article, we discuss Industry 4.0 ethical and moral predicaments from the perspective of different business and technical forces. We present ethical and moral issues related to data privacy, data ownership, system accessibility, cybersecurity, the future of work, and the future of humanity. Our aim is to present various challenges and discuss ethical and moral considerations from different perspectives. We hope this discussion will give business executives and technical designers/developers a better understanding and appreciation of the ethical and moral challenges Industry 4.0 presents.
Article
Full-text available
Along with the rapid development of artificial intelligence (AI), cyber-physical systems (CPSs), big data analytics, and cloud computing, Industry 4.0 — a subset of the fourth Industrial Revolution — has started to emerge and take root in many countries. Many expect that Industry 4.0 will be transformative and revolutionary for multiple industries and countries. Its impact will be much more significant than those of Industry 1.0, 2.0, and 3.0. Most studies and papers on Industry 4.0 have examined its impact on various industries, jobs, and organizations. In this article, we investigate the impact of Industry 4.0 on countries and groups of countries (e.g., developed countries, developing countries). Business executives will find this article informative as they contemplate whether to invest in and outsource to other countries. In addition, policymakers will learn about the challenges and opportunities of Industry 4.0 and how governments in various groups of countries should strategize to sidestep the dangers and realize the potential opportunities to transform and enhance their economies.
Article
Full-text available
The exponential advancements in artificial intelligence (AI), machine learning, robotics, and automation are rapidly transforming industries and societies across the world. The way we work, the way we live, and the way we interact with others are expected to be transformed at a speed and scale beyond anything we have observed in human history. This new industrial revolution is expected, on one hand, to enhance and improve our lives and societies. On the other hand, it has the potential to cause major upheavals in our way of life and our societal norms. The window of opportunity to understand the impact of these technologies and to preempt their negative effects is closing rapidly. Humanity needs to be proactive, rather than reactive, in managing this new industrial revolution. This article looks at the promises, challenges, and future research directions of these transformative technologies. Not only are the technological aspects investigated, but behavioral, societal, policy, and governance issues are reviewed as well. This research contributes to the ongoing discussions and debates about AI, automation, machine learning, and robotics. It is hoped that this article will heighten awareness of the importance of understanding these disruptive technologies as a basis for formulating policies and regulations that can maximize the benefits of these advancements for humanity and, at the same time, curtail potential dangers and negative impacts.
Article
Full-text available
This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.
Article
Full-text available
In this article, we look at trust in artificial intelligence, machine learning (ML), and robotics. We first review the concept of trust in AI and examine how trust in AI may be different from trust in other technologies. We then discuss the differences between interpersonal trust and trust in technology and suggest factors that are crucial in building initial trust and developing continuous trust in artificial intelligence.
Book
The new field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making. Developing ethics for machines, in contrast to developing ethics for human beings who use machines, is by its nature an interdisciplinary endeavor. The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ethical dimension to machines that function autonomously, what is required in order to add this dimension, philosophical and practical challenges to the machine ethics project, various approaches that could be considered in attempting to add an ethical dimension to machines, work that has been done to date in implementing these approaches, and visions of the future of machine ethics research.