Proceedings of the 16th Annual Conference of Global Communication Association, 2024, Marmara University, Istanbul, Türkiye: “Future(s) of Communication: Promises and Predicaments”
Editors: Süheyla Nil MUSTAFA, Alaattin ASLAN, Safa Görkem AKTAŞ, Atakan GÖKTEPE
ISBN: 978-625-6103-97-9
First Edition: Kriter Yayınevi, 2024, Istanbul
Publisher Certificate No: 45353
This publication has been peer-reviewed.
DIGITAL ETHICS: HUMAN DIGNITY VS ARTIFICIAL
INTELLIGENCE
Alaattin ASLAN*, Muhammed Akif ALBAYRAK**

* Asst. Prof., Marmara University, Faculty of Communication, Department of Journalism, İstanbul, Türkiye. E-mail: alaattin.aslan@marmara.edu.tr, ORCID: 0000-0001-5053-9256
** Asst. Prof., Marmara University, Faculty of Communication, Department of Journalism, İstanbul, Türkiye. E-mail: muhammed.albayrak@marmara.edu.tr, ORCID: 0000-0002-1946-1638
Abstract
Ethics, the study of the moral principles governing human behaviour, addresses fundamental questions such as ‘How should people behave?’ and ‘What is the good life?’ Digitalisation and the development of information and communication technologies (ICTs) have created new challenges for ethics. These technologies raise distinct problem areas, especially in the fields of digital communication, cyberspace and artificial intelligence: cybersecurity issues, disinformation and manipulation in digital communication, and situations that harm human dignity in the context of artificial intelligence technologies. In the face of these challenges, digital ethics can make a systematic contribution to the problems posed by the digital ecosystem by providing values and norms grouped under the headings of beneficence, non-maleficence, explicability, justice and autonomy.
The aim of this study is to evaluate the level of risk posed to human dignity by the design, use and social effects of artificial intelligence within the aforementioned categories. To this end, it analyses the interaction between human dignity and artificial intelligence through a series of cases in which the risks to human dignity are identified as surveillance and privacy concerns, discrimination and bias, injustice and due process, devaluation of human life, and erosion of trust and authenticity. The research thus seeks to answer the question ‘How and to what extent do the design, use and social impacts of artificial intelligence pose risks to human dignity?’ Case analysis, one of the qualitative
research methods, was used in the research. The research population consists of news items on the violation of human dignity by artificial intelligence. The research
sample was formed by selecting one case from each theme through purposive sampling. The relevant news items were examined in detail, summarised, and tabulated according to the ethical violations caused by artificial intelligence technologies. That only one case is found and analysed as an example for each theme also constitutes the limitation of our study.
As a result, it is possible to say that with the development of artificial intelligence, humanity may face many unforeseen risks. The research concludes that artificial intelligence technologies constantly collect data inside homes, heightening privacy concerns; that discrimination is reproduced at the algorithmic level; that technological innovations create injustice in academic work; that overtrust in technology devalues human life; and, finally, that artificial intelligence technologies are used for social manipulation. In the light of these findings, the risks created by artificial intelligence's focus on utility and pragmatism must be reduced, and for ethical use human honour and dignity must not be sacrificed to any interest or benefit.
Keywords: Ethics, digital ethics, digital values, artificial intelligence,
human dignity.
Introduction
Digitalisation continues to affect daily life practices deeply and to transform everything about human beings. We are learning to live in a world dominated by data, the raw material of information-processing systems. We conduct our lives in coordination with mobile phones, computers, smart wristbands and watches, security cameras, autonomous vehicles, devices that track health records,
robots that help clean houses, and the like. The technology with the most profound effects in this transformation is undoubtedly the inclusion of Artificial Intelligence (AI) and intelligent systems in human life. With AI technologies, the autonomy of machines, the acceleration of data-driven decision-making and the augmentation of human abilities through automation point to many areas of use for AI. As this usage area expands, AI's effects on daily life give rise to many different problems. It is therefore necessary to draw up a framework for the ethical use of AI in order to prevent possible problems in the relations between AI technologies and human beings. In determining the areas of human rights and responsibilities through ethical reasoning, two approaches are basically used: the first is the teleological approach, which focuses on the consequences of behaviour; the second is the deontological approach, which emphasises processes.
Ethical approaches undoubtedly provide guidance for engineers and technology designers. However, philosophical approaches that try to move from theory to practice have difficulty keeping up with the speed of rapidly developing technology. For this reason, it is important to understand digitalisation and to consider ethical approaches to the digital field as a whole.
In addition to ethical approaches, this research takes as its basis human dignity and honour, whose protection is essential under the Universal Declaration of Human Rights. The sentence ‘All human beings are born free and equal in dignity and rights’ in the first article of the Declaration indicates that human beings deserve a dignified life from birth. However, technological developments confront us with many problems regarding the potential harm of systems that compete with human beings. The main problematic arises from the fact that, for the first time, human beings face a decision-making mechanism comparable to themselves and a technology with the ability to learn. Among the problems caused by AI, we observe processes that instrumentalise human beings and reduce them to data, attributing no further importance to them. Such approaches can only be prevented by introducing the human being to AI and AI technologies as a being with dignity and honour, and by programming with attention to human dignity and honour.
This study first discusses the development of AI technologies along the axis of digitalisation. It then focuses, both conceptually and theoretically, on the relationship between ethics and digitalisation, whose speed philosophy and the social sciences struggle to keep up with. In particular, the status of the notion of human dignity in the face of AI technologies is problematised. Accordingly, human dignity as a human right and the problem areas where artificial intelligence affects human dignity constitute the following sections of the study. The study continues with the research section, which reveals the practical conditions and justifications of the theoretical approach, and closes with a conclusion in which both the theoretical and the practical situation are discussed.
Digitalisation and the Rise of Artificial Intelligence Technologies
In the literature, the concepts of digitalisation and digitisation are used
interchangeably and are often confused (Brennen & Kreiss, 2014; Lash, 2002; Negroponte, 1996). The concept of digitisation represents the conversion of analogue data into the binary (0-1) language that computers can understand. In a sense,
the entire process of making whatever is in our lives understandable and
transferable to a computing machine can be called digitisation. On the other hand,
digitalisation refers to cultural structures as a social phenomenon and can be
understood as the adoption and increase in the use of digital technologies or, in its
simplest form, computer technology. Digitalisation is the concept that represents
transformation with the help of computer technologies. Following the distinction
between digitisation and digitalisation, digitisation is defined as the material
process of converting analogue information streams into individual digital bits.
Digitalisation, on the other hand, is understood as the restructuring of many areas
of social life around digital communication and media structures.
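To make the narrow, technical sense of digitisation concrete, the following minimal Python sketch (our own illustration, not drawn from the cited literature) samples a continuous signal and quantises each sample into an 8-bit word, that is, into the (0-1) language referred to above.

```python
import math

def digitise(signal, sample_points, levels=256):
    """Sample a continuous signal and quantise each sample into bits."""
    bits = []
    for t in sample_points:
        value = signal(t)                             # analogue amplitude in [-1, 1]
        step = round((value + 1) / 2 * (levels - 1))  # map onto 0..levels-1
        bits.append(format(step, '08b'))              # encode as an 8-bit word
    return bits

# A 1 Hz sine wave 'measured' at 8 evenly spaced instants:
analogue = lambda t: math.sin(2 * math.pi * t)
print(digitise(analogue, [i / 8 for i in range(8)]))
# ['10000000', '11011010', '11111111', '11011010', '10000000', '00100101', ...]
```

Digitalisation, in the broader sense used below, begins only once such bit streams start restructuring social practices.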
Technological development and social transformation provide a simple
contextual map for reading human history. We are trying to make sense of the flow
of history from the use of iron to steam-powered machines, then to the discovery
of electricity and the use of internal combustion engines, to the integration of
electronic devices in daily life with the impact of petrochemicals, and to the
concepts of globalisation and global village (McLuhan & Fiore, 2012, p. 67)
thanks to digital networks in the last 30 years. In the last five years, it is artificial intelligence and intelligent systems in particular that stand out among technological developments in terms of societal impact.
Figure 1. History of Innovation Cycles (Neufeld & Ma, 2021)
In the wave theory put forward by Joseph Schumpeter, the interval between the waves has shortened with digitalisation, and it is easy to predict that it will shorten even further (Hobikoğlu, 2011, p. 301). When the factors in the last
two waves are taken into consideration, the traces of technological innovations
including computers and the Internet are more apparent. Considering the areas of
use of visual and digital technologies above, digital communication, cyberspace and AI/intelligent systems emerge as the three areas where digital transformation is experienced most intensely, because virtually all transactions in digital environments pass through these three channels. Interpersonal communication and communication on social media constitute the digital communication dimension. The conduct of all kinds of everyday transactions by companies, public bodies and individuals over the internet constitutes cyberspace. Beyond the digitalisation of human data, AI and intelligent systems, which come into existence digitally, represent the last link of digitalisation: as life moves from the analogue universe to the world of zeros and ones, sets of algorithms produced in the digital world begin, in turn, to act upon the analogue world.
Many studies indicate that AI technologies, which constitute this last link of digitalisation, cause privacy and confidentiality violations in fulfilling the ‘decision-making’ tasks assigned to them, as if they were humans with a will (Shahrıar, Allana, Hazratıfard, & Dara, 2023; Mühlhoff, 2023, p. 11). Although every technology opened to social use provides many benefits to humanity, it also opens the door to different problem areas. Beyond the absence of guidelines underlying these problems, there are problem areas overlooked by manufacturers and developers, as well as problems caused by end users and consumers. Problems inevitably arise when the boundaries of rights and responsibilities are unclear, and the regulation of responsibilities and rights is possible through ethical reasoning.
Digitalisation and Ethics
The digital revolution is truly historical… So much is changing so
dramatically that a lot of the conceptual tools that we have need to be completely
rethought and others need to be designed. (Floridi, 2024).
The changing social structure of today's world points to a period in which all debates on the necessities of coexistence need to be rethought. The speed and boundlessness of technological development transform every structure they touch. How moral action, one of the foundations of social coexistence, should look in this changing world emerges as a problematic. In drawing the boundaries, ethics, the philosophy of morality, must gain a new dimension, and a new system of action must be determined for digital environments.
When the place of the concept of ethics in the literature is examined, it is seen that ethics is conflated with morality, religion and law, and is used interchangeably with morality most of all. The reason is that the questions to which ethics seeks answers also find answers in these three fields. Ethics is seen as a systematic answer to the questions ‘How should man behave?’ and ‘What is a good life for man?’ (Türkeri, 2014, p. 11). Answered in terms of law, human beings should obey the laws, and the good life will be provided by laws. From the point of view of religion, people should obey the commands of religion, and a good life can only be achieved through a religious life. In terms of morality, the answer is to comply with social norms and to ensure the continuation of social life. Ethics is confused with morality above all because law and religion are themselves social structures: just as there cannot be a religion without a society, there cannot be a legal structure that does not draw on the traditions and customs of the society to which it belongs.
In this context, the origin of the word ethics highlights its relationship with morality. The word for morality in Turkish, ahlak, comes from the Arabic hulk, meaning custom, tradition and temperament. Similarly, the English morality, French moralité and German Moralität, which derive from the Latin mos-moris, correspond to the Turkish ahlak, as does the Greek ethos. The concept of ethics, which derives from the Greek ethos, is intelligible through morality and is confused with it in everyday use because both refer to the same world of meaning. Alain Badiou, defining ethics from the context of life, explains it as an endeavour to create consensual laws concerning people in general, their needs, their life and death, and, by extension, to draw a clear, universal limit to evil, to things incompatible with the human essence (2013). In one sense, references to the norms of sociality in the practical field of life constitute the field of morality, while the systematisation of these norms is the subject of ethics. Given this distinction, morality is the totality of behavioural structures based on custom, whereas ethics describes behaviour according to moral philosophy, reason, and reflection on the foundations and principles of behaviour.
Ethics is often thought of as a rational process that applies established principles when two obligations conflict. The most difficult ethical dilemmas arise when two correct moral obligations clash; ethics then provides a balancing of competing rights, often when there is no ‘right’ answer (Day, 2006, p. 3). Ethics is the questioning of the situations that reveal the morality of an action. While morality deals with concepts such as the good, duty, necessity and permission, ethics is a philosophical discipline that deals with human actions. It is not, however, a theory of action that issues direct instructions such as ‘when situation A occurs, do B’, for ethics is an inquiry into whether human activities, actions and behaviours of every kind are moral (Pieper, 2012, pp. 17-18). To transfer these theoretical approaches to digital environments and determine the ethical codes of digitalisation, it is necessary to establish the scope of digital ethics as a field of applied ethics.
The speed of technological development and its capacity to transform humanity require the creation of new forms of behaviour both in the physical world and in the virtual world. In this sense, it is essential to determine ethical codes so as to aim for the best, the most beautiful and the most correct in any situation. The main commodity on which this technological development rests is data: what matters is the processing of data and, beyond that, the datafication of the physical world. How the ethical codes of this situation, expressed as digitalisation, will be formed is an area that needs careful consideration. The literature describes the behavioural form of sociality that we conceptualise as digital ethics as follows.
Digital ethics refers to the examination of the technological impact on the social, political and moral spheres (Marghalani & AlQahtani, 2019, p. 3). Digital ethics is concerned with the question of what norms and values we want to achieve in a digital world in order to shape society positively through technological innovations (Mackert, 2020, p. 4). Digital ethics is about complex adaptive systems, chaotic phenomena and a world where technology affects society. One way to deal with this complexity is to be honest and open about what we can know about what we create; we cannot be held responsible for what is inherently epistemically inaccessible (Hoven, 2015, p. 59). Referring to the responsibility side of technological development, Hoven states that everyone suffers from the multiplicity of unknowns in this world of 1s and 0s that digitalisation creates outside physical life. In line with sociologist Ulrich Beck's notion of the risk society, the growth of uncontrollable risks creates new, unpredictable risks, much as uncertainty grows in the way we construct our understanding of society and the questions related to it (1992, pp. 20-23). The notion of a risk society suits the study of digital ethics, as we try to predict what effects different technologies will have on human existence.
Digitalisation is by its very nature a step into an unknown world where it is
not possible to predict what society and the individual will do. Rafael Capurro, a
pioneer in the study of digital ethics, emphasises that digital ethics, or more
broadly information ethics, is concerned with the impact of information and
communication technologies on our societies and our living space in general
(2009). At a basic level, digital ethics can be seen as ‘principles’ and ‘concepts’
that can be used to manage technology and data, including factors such as risk
management and individual rights. In fact, these can be used to understand and
resolve moral issues related to the development and application of different
approaches to solutions about data issues for different technologies in the face of
various ethical challenges (Barker & Ferguson, 2022). Digital ethics is the field
of study concerned with how technology shapes and will shape our political, social
and moral existence (Henshall, 2018). In order to analyse digital ethics as an applied ethics proposal, it is very important to determine its scope and the concepts to which it relates. Since an applied ethical theory consists of discussing the problems arising in a field of activity and presenting its results, the scope of digital ethics' activity and the concepts related to that field must be determined (Aslan, 2022, p. 83).
The definitions given above point to both personal and social dimensions of digital ethics. To avoid this ambiguity and clarify the definition, the following scope scheme for digital ethics is explanatory: design, use and societal impact (Barker & Ferguson, 2022).
i. Design: It focuses on the design phase of digital technologies and data tools. It refers to the step of solving problems likely to arise at the design stage through an ethics (duty ethics, etc.) established in the knowledge of engineers, software developers, programme developers and the like. Indeed, algorithms can reproduce human prejudices, create new distinctions (or reproduce existing ones on a larger scale) and lead to injustices. The way to avoid this is to act ethically at the design stage.
ii. Usage: It aims to examine how service users and employees, as well as
an organisation's managers and partners, use emerging technology and
data. This requires an ethical assessment of how people use the
technological resources at their disposal.
iii. Societal Impact: It examines the impact of digital technology and data analytics on wider society. It is therefore concerned with the acceptability of digital innovations and solutions, human rights and representation, the environmental and energy footprint of digital tools, and the inclusion of the wider social environment.
Placed in a cluster diagram, digital ethics spans all three scopes and sits at their intersection.
Figure 2. The Context of Digital Ethics
It is understood that a deontological duty ethics, guided by general ethical theories, should steer the design phase of digital ethics. Although designing with ethical values in mind during product development may appear to be a solution for companies, profitability concerns cause many procedures to be ignored. Companies should therefore be obliged to provide their employees with ethical norms to guide every step from the beginning of algorithm and product development, and to ensure the responsible application of digital ethics throughout.
How do we embed ethics in the machines, systems, software, platforms,
etc. we develop? People making decisions and taking responsibility for their
decisions is an important part of social life. But machines and computer systems
have no morality, only decision-making patterns that they are programmed or
trained to follow. This means that a machine cannot be held responsible. It is
therefore important to incorporate ethical principles into the design and
development of digital technologies. Key aspects of human-centred design and
data protection should guide the development process from the outset. For example, it is unacceptable for bias to be built into machines before their algorithms are even put to use. Commitment to digital ethics matters from the data selection and design stages onwards.
Adhering to ethical principles at the design stage allows us to prevent risks
at an early stage, rather than reacting to risks as they arise. The Ethically Aligned
Design (EAD1e) report prepared by the Institute of Electrical and Electronics
Engineers (IEEE) for its global initiative is particularly noteworthy. This report,
prepared by experts in the field, comprehensively addresses the value-driven
development of autonomous and intelligent systems. A values-based product
design process should be supported by experts who are involved early in the
development process and check products for social acceptability and compliance
with established standards (e.g. human rights, ecological sustainability) (PWC,
2020, p. 26). Technological developments should incorporate certain rules from the design stage onwards, regardless of the benefits they will generate. A deontological approach should be adopted in production and development processes, and design should follow ethical rules.
The basic rule in the use dimension of digital ethics is the understanding
that ‘just because you can does not mean you should’. When considered with
general ethical theories, the rules of virtue ethics should operate in the dimension
of use. As a user, humans are expected to exhibit virtuous behaviour in digital
environments. Robots and autonomous systems are increasingly entering daily life, and the unlimited data-acquisition algorithms of the learning structure underlying AI give the users of these systems great superiority, namely the power of information. Controlling this power or subjecting it to legal limitation seems very difficult given the decentralised nature of digitalisation. The principle every user should remember can therefore be summarised as follows:
‘Ethics is all about asking yourself the question: just because we can, should we?’ (Andersen, 2022)
The last dimension, societal impact, is important for understanding the
function of digital ethics. Guided by general theories of ethics, what is socially
good is questioned and evaluations in the context of utilitarianism come to the
fore. The biggest challenge in digital ethics is the investigation of elements that
are invisible or non-existent to the naked eye, which have varying effects and
consequences on social morality and established traditions. Uncertainty created
by new technology and uncontrollable risks due to questions about new
233
technology are inherent in the nature of technology. The theoretical nature of the
perceived consequences makes it difficult to predict the effects of new
technologies on society. This leads to uncontrolled possibilities and consequences
in digital ethics (Floridi & Taddeo , 2016, s. 2). For example, the development of
artificial intelligence-supported computers and interactive robots with human-like
abilities is becoming a reality that requires new ethical standards. In today's
society, digital technology more closely resembles the technology previously
found in science fiction literature, with technological applications in the social,
political and even moral spheres of life. Due to new technology products such as
smartphones, the social sphere is being disrupted with the emphasis on online
social relationships instead of real-life interactions (Ashok, Madan , Joha, &
Sivarajah, 2021, s. 2). In order to identify the ethical challenges posed by digital
technology, the construction of today's digital ethical environment on the basis of
total benefit should be seen as the main goal of the social scope.
Determining the fields of activity for digital ethics requires analysing the
digital technologies used under main headings. Digital ethics can be considered
as a guiding conceptual framework for the problems arising in the following three
main areas:
Digital Communication: It covers many interactive communication
channels such as social media, internet news sites, blogs, mobile communication
applications, etc. It is an area where interactive communication brought about by
digitalisation comes to the fore and deals with problems such as protection of
personal data, disinformation and hate speech.
Cyberspace: It covers the practices of doing business in digital
environments established in parallel with daily life. It is a field where structures
such as business, education, citizenship and finance come together and issues such
as cyber security, privacy and equality gain importance.
AI and Intelligent Systems: Ethical issues arise in an environment where humans and machines resemble one another and algorithms interact with physical structures. Issues such as bias, transparency and responsibility must be addressed in this field, which covers the autonomous functioning of machines and robots and the control of computational work systems by artificial intelligence and machine learning.
Figure 3. Digital Ethics Activity Areas
In this sense, digital ethics should be seen both as a social phenomenon,
pointing to a common intersection area in the context of design, use and impact,
and as an umbrella concept in terms of the structures contained in technologies.
On the other hand, it does not seem possible for digital ethics to have a definitional
boundary. This is due to the multidisciplinary nature of digital ethics and the speed
and capacity of technological development. As stated by Hoven above, ‘perhaps the only thing we can do in a man-made world is to be honest, because it would be a logical error to be held responsible for the many things it is not epistemically possible to know’ may be the most accurate approach. To illustrate, consider how much the technologies of even the last ten years have changed people's behavioural patterns. The active use of facial recognition systems, for example, carries many risks that people may be unaware of: what appears to be merely unlocking our phone by recognising our face in fact hands private information (the human face and a wealth of biometric data expressed in 0s and 1s) to international companies, raising many questions about the legal consequences of such use and about how this data will be stored and used. For the purposes of this study, digital ethics can be defined as the inquiry into what kind of moral behaviour should follow from technological development, what values that behaviour will rest on, and how it will become a norm.
Digital ethics is a field of applied ethics. In their study, Floridi and Cowls
suggested five main values in total for this field and stated that especially AI
studies should be carried out taking into account Beneficence, Non-Maleficence,
Explicability, Justice, Autonomy (2021).
These five main values, and the twenty-four values and norms beneath them, indicate the values to be applied through ethical reasoning to the problems encountered within digital ethics' fields of scope and activity. Considering context and values together, digital ethics can be expressed with the following model.
Figure 4. Digital Ethics Model
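By way of illustration only (the checklist questions below are our own hypothetical examples, not Floridi and Cowls's wording), these five principles can be operationalised as a simple structure against which a project is audited:

```python
# Hypothetical checklist pairing the five principles with audit questions.
PRINCIPLES = {
    "beneficence":     "Does the system demonstrably promote well-being?",
    "non-maleficence": "Are privacy and safety harms identified and mitigated?",
    "explicability":   "Can its decisions be explained to those affected?",
    "justice":         "Are outcomes audited for discriminatory effects?",
    "autonomy":        "Can users understand and override automated decisions?",
}

def unaddressed(answers):
    """Return the principles a project has not yet answered affirmatively."""
    return [p for p in PRINCIPLES if not answers.get(p, False)]

# Example: a project that has addressed everything except explicability.
print(unaddressed({"beneficence": True, "non-maleficence": True,
                   "justice": True, "autonomy": True}))   # ['explicability']
```

Real assessment frameworks are, of course, far richer than a five-entry checklist; the sketch only shows how the values can anchor a concrete audit.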
As a Human Right: Human Dignity
Although there is no clear definition of the concept of human dignity (Donnelly, 2015; O’Mahony, 2012; Rodriguez, 2015), it has an important place in many human rights documents (Barak, 2015; McCrudden, 2008; Waldron, 2015), although its use causes some confusion (Feldman, 1999).
According to Sue Anne Teo (2023), although the origins of the concept date back
to Ancient Rome, it became philosophically important with the works of
Immanuel Kant, and as a result of the deep suffering caused by the Second World
War, it was included in the preamble of the 1945 United Nations Charter:
‘... to reaffirm faith in fundamental human rights, in the dignity and worth
of the human person, in the equal rights of men and women and of nations
large and small, and...’
In the first article of the 1948 Universal Declaration of Human Rights (UDHR), which constitutes the motivation of our study, human dignity appears again: ‘All human beings are born free and equal in dignity and rights’. It states, in essence, that the value of human beings derives as a right solely from being born human, and that a dignified life does not depend on anyone's grace. Similarly, the 1966 International Covenant on Civil and Political Rights states in its preamble that recognition of human dignity is the foundation of freedom, justice and peace, as follows:
‘Considering that, in accordance with the principles proclaimed in the
Charter of the United Nations, recognition of the inherent dignity and of the
equal and inalienable rights of all members of the human family is the
foundation of freedom, justice and peace in the world...’
Article 1 of the EU Charter of Fundamental Rights (2000) also enshrines human dignity: ‘Human dignity is inviolable. It must be respected and protected.’ The European Data Protection Supervisor (2015, p. 12) says that human dignity should be at the centre of digital ethics as a counterweight to the surveillance and power asymmetry that individuals commonly face. The European Union Agency for Fundamental Rights (FRA) (2020, p. 60), which emphasises the need to focus on people rather than technology, stresses respect for human dignity in the processing of personal data: AI-driven processing of personal data must be carried out in a manner that respects human dignity. This puts the human being at the centre of all discussions and actions related to AI.
AI technologies continuously collect data with their learning algorithms and draw inferences from this data. They treat every state of the human being as a data store and thereby accelerate the instrumentalisation of the human being. The biometric data collected from human beings carry serious risks: malicious actors can use such data to steal people's identities, and privacy risks arise when governments use them to spy on their citizens. On the other hand, we use facial recognition or fingerprint unlocking on our smartphones, which is far more secure than a password, and human data can even be used to prevent and respond to terrorist attacks (Floridi, 2024). The Universal Declaration of Human Rights states in its first article that man is a being with dignity and honour and that this is a right. Ethics assumes that there is a right and a responsibility based on this right. Today, AI exhibits human-like behaviours and is, in some cases, in a decision-making position. Yet while this attributes human-like rights to artificial intelligence, it assigns it no responsibility: the responsible party is assumed to be the user, the software developer, the manufacturer or the public authority. This is precisely why an approach that takes human honour and dignity into account is required when considering the dimensions of design, use and societal impact.
Identification of Problem Areas where Artificial Intelligence Affects
Human Dignity
Today, the problems created by AI systems on individuals and society cause
us to question how well these technologies are designed in accordance with ethical
principles. For example, an AI system following a teleological approach may neglect individual rights by focusing on optimising outcomes (Fumagalli & Ferrario, 2019, pp. 6-7). A deontological approach, by contrast, places more emphasis on the protection of human rights and dignity, since AI systems must follow certain ethical rules (Ozone, 2019). However, both approaches may be insufficient to solve the ethical problems associated with the technology; these problems require careful regulation of the design and use of AI in its societal context.
After evaluating the relationship of AI with ethical theories, it is necessary
to focus on the concrete ethical problems caused by technology in daily life.
Floridi argues that the risks are overstated, since humans have always used propaganda and disinformation techniques to manipulate public opinion. This may be true, but the quantity, quality and cheapness of AI-generated misinformation are alarming: it is both easy and highly cost-effective to produce industrially (Floridi, 2024).
The ethical test of AI cannot be limited to theoretical discussions; on the
contrary, as these systems transform our daily lives, ethical issues become more
concrete and urgent. Technological development causes new problems that affect
all segments of society. For example, while the use of AI in surveillance
technologies leads to privacy issues, the involvement of algorithms in decision-
making processes can trigger issues such as discrimination and prejudice. In this
context, it becomes even more important to evaluate the effects of AI on human
dignity in the light of ethical theories.
Surveillance and Privacy Concerns
One of the biggest risks of AI technologies is the threat to the privacy of individuals. Facial recognition systems and big data analyses in particular lead to the continuous monitoring of people. This violates individuals' right to privacy and damages human dignity (McStay, 2020, p. 3). In the context of digital ethics, the use of surveillance technologies in a way that respects human dignity and protects personal data is vital.
This problem may arise as a natural consequence of a teleological approach to technological development: although the goal of AI may be to increase security in society, the privacy rights of individuals can be ignored in result-oriented decision-making processes (Fox, Clohessy, Werff, Rosati, & Lynn, 2021, p. 12). A deontological approach can ensure that ethical rules are developed so that such systems do not violate the right to privacy.
Discrimination and Bias
The potential of AI systems to make discriminatory judgements poses a serious challenge to human dignity. Algorithmic biases disadvantage minority groups in particular and lead to unfair outcomes (Varona & Juan, 2022, p. 8). In a world where people are judged solely on the basis of statistical data, these biases produce violations in areas concerning human dignity. AI systems should therefore be carefully designed and operated in accordance with the principle of fairness.
The problem of algorithmic bias also arises in the teleological design of AI: individual differences may be ignored when the aim is to optimise outcomes (Kordzadeh & Ghasemaghaei, 2022, p. 404). A deontological perspective can contribute to the solution of this problem by insisting that each individual be evaluated equally and without discrimination.
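The mechanism can be demonstrated with a small synthetic simulation of our own (all numbers are invented for illustration): when one decision threshold is applied to groups whose scores a skewed training set has shifted, the disadvantaged group suffers a much higher false-positive rate.

```python
import random

random.seed(0)
THRESHOLD = 0.6  # the same cut-off is applied to everyone

def risk_score(group_shift):
    """Synthetic 'risk' score; one group's scores are shifted upward,
    mimicking bias inherited from skewed training data."""
    return min(1.0, max(0.0, random.gauss(0.4 + group_shift, 0.15)))

def false_positive_rate(group_shift, n=10_000):
    # Everyone simulated here is a true negative (not actually 'risky'),
    # so every score above the threshold is a false positive.
    return sum(risk_score(group_shift) > THRESHOLD for _ in range(n)) / n

print("Group A FPR:", false_positive_rate(0.00))  # roughly 0.09
print("Group B FPR:", false_positive_rate(0.15))  # roughly 0.37
```

The disparity arises without any explicit reference to group membership in the decision rule, which is precisely why fairness must be audited rather than assumed.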
Injustice and Due Process
In legal processes where AI is used, there is a risk that justice will not be fully achieved and that individuals will not be represented equitably. AI tools used in judicial systems may violate the rights of individuals and prevent the fair operation of legal processes (Ali, 2023; Wang, 2020, pp. 61-62). This damages human dignity and undermines the credibility of the legal system.
Devaluation of Human Life
The widespread use of AI in certain sectors, especially health and employment, may end up reducing human life to mere data. For example, automation and AI-driven layoffs can lead to people being treated as mere economic agents and to the devaluation of human life (Floridi & Taddeo, 2016, p. 3). This jeopardises human dignity and must be addressed within the framework of digital ethics. Particularly in critical areas such as health, an AI that makes decisions based solely on statistical truths may see people as mere data assets and disregard their individual value (Gillies & Smith, 2022, pp. 45-46), which would mean surrendering the meaning and value of human life to technology. A deontological ethical perspective can help prevent such problems, because it emphasises that each individual has an intrinsic value that must never be ignored: every human life is intrinsically valuable, regardless of the consequences.
Erosion of Trust and Authenticity
AI and digital technologies profoundly affect not only human life but also social ties. Fake content, misleading information and artificial realities created by AI erode social trust. People may find it difficult to trust the information they encounter in the digital environment, which can weaken relationships between individuals. Authenticity and trust are the cornerstones of a healthy society, and their erosion can have serious long-run consequences for human dignity (Choung, David, & Ross, 2023, p. 734). Deontological approaches hold that AI systems should be transparent and developed in accordance with ethical rules in order to maintain trust.
Teleological ethical theory creates a more complex situation here, because short-term damage to trust can be overlooked for the sake of better social outcomes. The long-term effects of such an approach, however, can be quite destructive: the loss of trust weakens both individuals' relationship with digital environments and society's trust in technology.
Research
Ethics assumes that if there is a right, there is also a responsibility based on
this right. Today, AI shows human-like behaviours and in some cases is in a
decision-making position. While this situation defines human-like rights to
artificial intelligence, it does not assign responsibility to artificial intelligence.
Therefore, the user, software developer, manufacturer company or public
authority is assumed to be responsible in the legislation.
This research has two main objectives:
1) To identify the tensions between artificial intelligence and human
dignity.
2) To propose a method of enquiry into a problem situation encountered
through a digital ethics approach.
This study analyses the interaction between human dignity and AI through a series of cases, using case study, one of the qualitative research methods. The universe of the study is all news content related to AI; the sample consists of the AI-related problems reported in the news, with one news item covering each violation area taken as a case. Purposive sampling was preferred as the sampling method, and for each violation theme a suitable case was analysed. On this basis, the study seeks to make suggestions for designing, using and socially embedding AI technologies in ways that do not threaten human dignity.
Cleaning robots, an application of AI technology, ease daily household tasks while also intensifying data collection. These devices raise potential privacy concerns because of the various technological components they contain. The LiDAR (Light Detection and Ranging) sensors built into cleaning robots are capable of mapping the entire house in detail; this mapping allows the device to ‘know’ the internal structure of the house more comprehensively than most of its inhabitants do. Cameras, added as part of AI-assisted obstacle recognition systems, can capture images from every point of the house. Manufacturers claim that these images are used only for instant object recognition and motion planning and are not stored in databases. However, the common observation that no data in digital systems is ever completely lost has been borne out by various examples.
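As a rough, vendor-independent sketch of why this matters, the fragment below shows how even a handful of simulated range readings of the LiDAR kind accumulate into a persistent floor map. Real robot vacuums use far more sophisticated SLAM algorithms, but the privacy-relevant point, namely that a detailed spatial record of the home is built up and retained, is the same.

```python
import math

GRID = 21                                    # a 21 x 21 cell floor map
occupied = [[0] * GRID for _ in range(GRID)]

def record_scan(x, y, readings):
    """Mark the cell hit by each (angle, distance) reading as occupied.
    x, y: robot position in cells; readings: list of (radians, cells)."""
    for angle, dist in readings:
        cx, cy = int(x + dist * math.cos(angle)), int(y + dist * math.sin(angle))
        if 0 <= cx < GRID and 0 <= cy < GRID:
            occupied[cy][cx] = 1

# One synthetic 360-degree scan from the room's centre, walls ~8 cells away:
record_scan(10, 10, [(math.radians(a), 8) for a in range(0, 360, 5)])
print("\n".join("".join("#" if c else "." for c in row) for row in occupied))
```

Repeated scans from different positions gradually fill in walls, furniture and doorways, which is exactly the kind of record that outlives any single cleaning run.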
In this context, a news report that a robot vacuum cleaner took intimate photos of people, and that these photos were later published on social media platforms, was analysed as an exemplary case. It shows that concerns about surveillance and privacy are justified.
News number: 1
News headline: A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook?
News link: https://www.technologyreview.com/2022/12/19/1065306/roomba-irobot-robot-vacuums-artificial-intelligence-training-data-privacy/
Summary: A journalist found that images captured by robot vacuums in people's homes were shared with human workers for labelling. These images included sensitive scenes such as a woman on the toilet and a young boy on the floor. This raises privacy concerns, because people do not expect robot vacuums to record them in their homes. The data is also shared across multiple companies and countries, making it difficult to control.
News date: 19.12.2022
Theme of the problem: surveillance and privacy concerns
Digital values breached: Non-Maleficence, Autonomy
Context of the infringement: Design

Table 1. Surveillance and privacy concerns
Figure 6. Image captured by Robot Vacuum
In both photos in Figure 6, the images captured by iRobot development devices have been annotated by data labellers. In the photo on the left, the visible faces have been obscured with a grey box by MIT Technology Review; in the photo on the right, the woman's face, initially visible, has likewise been obscured by MIT Technology Review.
From a design perspective, AI technologies need to be developed in a way
that respects human dignity and limits the use of cameras. One of the new
generation features of robot vacuum cleaners is that they allow users to remotely
(or closely) monitor their children, pets and homes when they are not at home.
However, this feature poses a potential surveillance threat for guests or service
providers.
In general, individuals are reluctant to allow 24-hour surveillance by placing an internet-connected camera in their homes. Yet marketing that promises robot vacuum cleaners and similar technologies will make life easier leads users to consent to precisely such surveillance technologies, and consumers even pay large sums of money for these devices. This points to the ethical dilemmas and privacy issues brought by technological innovations.
Applications of AI technologies in the field of security, especially facial
recognition systems, are becoming increasingly widespread. These systems are
used primarily in China, but also in the United States and other Western countries,
and allow individuals to be tracked through their biometric data. However, the use
of these technologies also raises serious ethical and legal issues. The use of facial
recognition systems to identify criminals has led to various disruptions and cases
of false arrest. The most striking examples have been observed in the United States of America, where the American Civil Liberties Union (ACLU) has emphasised that facial recognition technology causes false arrests and that this cannot be prevented with a simple warning (ACLU, 2021).
The problems arising from this technology fall particularly hard on certain ethnic groups. For example, African-American citizens in the USA have been reported to be more likely to be misidentified by these systems, highlighting the discrimination and prejudice latent in the technology (Hill, 2020; Anderson, 2020). The New York Times has reported a case of false arrest based on facial recognition technology (Hill, 2020), and the Detroit Free Press documented a similar incident in the state of Michigan (Anderson, 2020). These cases illustrate the potential risks and negative consequences of using facial recognition technology in forensic processes.
News number: 2
News headline: Facial Recognition Technology and False Arrests: Should Black People Worry?
News link: https://capitalbnews.org/facial-recognition-wrongful-arrests/
Summary: Facial recognition technology, while advancing, raises concerns about racial bias in law enforcement, particularly affecting Black individuals due to inaccuracies in identifying darker skin tones. Critics argue that without proper regulation and empirical research, its deployment risks perpetuating discrimination in the criminal justice system, despite its potential for aiding investigations. Calls for moratoriums and regulations persist as wrongful arrests and privacy violations prompt legal and ethical debates, highlighting the technology's complex societal implications.
News date: 14.09.2023
Theme of the problem: discrimination and bias
Digital values breached: Non-Maleficence, Justice, Autonomy
Context of the infringement: Usage, Design, Societal impact

Table 2. Discrimination and bias
Figure 7. A person is photographed with their face projected onto a 3D facial recognition monitor used for secure entry and other applications. (Photo by Gerald Martineau/The Washington Post via Getty Images)
Figure 8. A person poses for a photo while demonstrating the Transportation
Security Administration’s facial recognition technology (Photo by Julia
Nikhinson/Associated Press)
AI technologies are becoming ever more adept at generating text, which whets the appetite of many content producers: some produce texts with their own minds, some get help from AI technologies, and some depend on AI technologies entirely. Unless the producer declares it, however, it is sometimes difficult and sometimes impossible to determine who created a given text. Academic texts are affected by the same situation. Despite the many plagiarism programmes and intensive research on AI technologies, there is (at least for now) no definitive way of detecting the use of AI in these texts. This creates injustice between those who use AI in academic studies and those who do not, and poses new challenges for academic integrity, the production of original ideas and the evaluation of scientific contribution.
News number: 3
News headline: Use of AI Is Seeping Into Academic Journals—and It’s Proving Difficult to Detect
News link: https://www.wired.com/story/use-of-ai-is-seeping-into-academic-journals-and-its-proving-difficult-to-detect/
Summary: Academic journals are facing challenges in detecting the undisclosed use of generative AI in scientific writing, raising concerns about transparency and credibility. While some journals require disclosure of AI use, there's no foolproof method to identify such instances, highlighting the need for improved detection methods and stricter policies to maintain scientific integrity. Researchers are developing tools to differentiate between human and AI-generated content, but further efforts are needed to address the evolving landscape of AI use in academic publishing.
News date: 17.08.2023
Theme of the problem: injustice and due process
Digital values breached: Beneficence, Justice, Autonomy
Context of the infringement: Usage, Societal impact

Table 3. Injustice and due process
AI technologies are rapidly integrating autonomous systems into human life. Both UCAVs/UAVs that fly autonomously and self-driving cars have already entered our daily lives. At the same time, overconfidence in AI technologies can bring serious negative consequences. The news examined here concerns an autonomous Uber car, operated under driver supervision, that struck and killed a pedestrian. The court records show that the driver's overconfidence in the AI-equipped vehicle cost a person's life. This is a clear example of the devaluation of human life.
News number: 4
News Headline: Uber self-driving car test driver pleads guilty to endangerment in pedestrian death case
News link: https://edition.cnn.com/2023/07/29/business/uber-self-driving-car-death-guilty/index.html
Summarised version of the news: In 2018, Rafaela Vasquez, the test driver of an Uber self-driving car, pleaded guilty to endangerment after the vehicle struck and killed a pedestrian, Elaine Herzberg, in Tempe, Arizona. Vasquez, who was watching TV on her smartphone during the incident, was sentenced to three years of supervised probation. Herzberg's death was the first involving a fully autonomous vehicle. Investigations revealed Vasquez had looked away from the road for over a third of the trip, and the crash could have been avoided if she had been attentive. Uber was found to have an inadequate safety culture, but the company did not face criminal charges and settled with the victim’s family.
News Date: 29.07.2023
Which theme the problem is caused by: devaluation of human life
Digital values breached: Justice, Autonomy
The context that caused the infringement: Design, Societal impact
Table 4. Devaluation of human life
Another advanced capability of AI technologies is the creation of images and videos. Artificially produced images and videos are extremely difficult for non-experts to distinguish from reality. That an event in the remotest corner of the world can spread within seconds through social media is another reality of our age. When these two developments are combined, fabricated content edited and circulated by malicious actors damages social trust and devalues human life. Limited regulations and inadequate enforcement mechanisms fail to prevent the use of AI to produce deceptive media content, raising concerns about election manipulation and disinformation. The proliferation of AI-enabled deepfake technology thus poses a serious threat to democratic processes. This problem falls under the theme of the erosion of trust and authenticity and violates the digital values of non-maleficence, justice and autonomy. The source of the problem can be addressed in the context of the usage, design and societal impact of AI technologies.
News number: 5
News Headline: AI deepfakes threaten to upend global elections. No one can stop them.
News link: https://www.washingtonpost.com/technology/2024/04/23/ai-deepfake-election-2024-us-india/
Summarised version of the news: As global elections approach in 2024, the proliferation of AI-powered deepfakes poses a significant threat to democratic processes, with politicians and tech companies struggling to contain their spread. Despite limited regulation and enforcement, the use of AI to create deceptive media continues, raising concerns about electoral manipulation and misinformation. While some states have implemented laws to address the issue, the lack of comprehensive solutions underscores the urgent need for both legislative action and public awareness to mitigate the impact of AI-induced election chaos.
News Date: 23.04.2024
Which theme the problem is caused by: erosion of trust and authenticity
Digital values breached: Non-Maleficence, Justice, Autonomy
The context that caused the infringement: Usage, Design, Societal impact
Table 5. Erosion of trust and authenticity
Figure 9. Screenshot of a video in which an expert uses artificial intelligence to
transform Indian Prime Minister Modi's voice into personalised greetings for a
Hindu holiday (Video: Divyendra Singh Jadoun).
News 1: A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook? (19.12.2022). Theme: surveillance and privacy concerns. Digital values breached: Non-Maleficence, Autonomy. Context of the infringement: Design.
News 2: Facial Recognition Technology and False Arrests: Should Black People Worry? (14.09.2023). Theme: discrimination and bias. Digital values breached: Non-Maleficence, Justice, Autonomy. Context of the infringement: Usage, Design, Societal impact.
News 3: Use of AI Is Seeping Into Academic Journals—and It’s Proving Difficult to Detect (17.08.2023). Theme: injustice and due process. Digital values breached: Beneficence, Justice, Autonomy. Context of the infringement: Usage, Societal impact.
News 4: Uber self-driving car test driver pleads guilty to endangerment in pedestrian death case (29.07.2023). Theme: devaluation of human life. Digital values breached: Justice, Autonomy. Context of the infringement: Design, Societal impact.
News 5: AI deepfakes threaten to upend global elections. No one can stop them. (23.04.2024). Theme: erosion of trust and authenticity. Digital values breached: Non-Maleficence, Justice, Autonomy. Context of the infringement: Usage, Design, Societal impact.
Table 6. Summaries of Case Studies
Conclusion
Digitalisation permeates the finest details of everyday life. We live with AI technologies in our homes, our political decisions, disease detection, knowledge production and public safety. That these technologies have become an integral part of our lives raises the question of how they should be handled ethically. Where human dignity is concerned, people can at any moment find themselves treated as mere data repositories, have their privacy disregarded, be subjected to algorithmic discrimination or be deceived by disinformation. The basis of these problems lies in an approach that does not sufficiently consider human dignity in product design, usage and societal impact. It is therefore important to prioritise user privacy and ethical values in technological development.
The case analyses conducted in the research part of the study revealed many violations related to AI technologies. The study focused on how the violations in five news articles, examined under five categories accepted in the literature (surveillance and privacy concerns, discrimination and bias, injustice and due process, devaluation of human life, and erosion of trust and authenticity), can be analysed in terms of digital ethics. According to the main findings, user privacy, ethical values and human dignity should be prioritised in the design and implementation of AI-supported home technologies and security systems. This approach will keep technological progress in harmony with social benefit. The text production capabilities of AI technologies offer important opportunities for content production and academic work, and their image and video production capabilities offer important opportunities for social and democratic processes. However, both also bring ethical and justice issues.
In conclusion, humanity may face many unforeseen risks as artificial intelligence develops. Today, the priorities of many technology companies and the algorithms they create are profitability and pragmatism, so ethical concerns are not sufficiently taken into account. To prevent this, society, academia and other stakeholders should be invited to adopt approaches that prioritise human dignity. Precisely for this reason, the design, usage and societal impact dimensions of artificial intelligence all demand an approach that respects human honour and dignity. Only in this way will it be possible to propose ethical solutions for the five problem areas identified in the literature.
The balance between user comfort and technological progress on the one hand and personal privacy on the other has become a critical issue of debate in today's digital age. Future research should focus on how this balance can be achieved and how user rights can be protected. Legal regulations and oversight mechanisms for the use of AI technologies need to be strengthened. Mitigating the effects of AI-induced electoral chaos requires a multi-pronged approach, including international co-operation, responsibility on the part of technology companies and improved media literacy. Addressing the problems arising from the use of AI in text production requires academic institutions, publishers and researchers to work collaboratively on standards and codes of ethics for the use of AI. Ultimately, AI technologies have to respect the human being, who deserves honour and dignity from birth. While utilising the opportunities offered by AI technologies, therefore, the protection of human dignity and honour should be an indispensable element of ethical principles in the design and use of technology.
References
Ali, A. (2023). AI, Due Process, and Access to Justice. Retrieved from ResearchGate: https://www.researchgate.net/publication/375597972_AI_Due_Process_and_Access_to_Justice
American Civil Liberties Union [ACLU]. (2021). Police say a simple warning
will prevent face recognition wrongful arrests. That's just not true.
Retrieved from https://www.aclu.org/news/privacy-technology/police-say-
a-simple-warning-will-prevent-face-recognition-wrongful-arres
Andersen, T. F. (2022, January 28). How are your Digital Ethics doing? Retrieved from timfrankandersen.medium.com: https://timfrankandersen.medium.com/how-are-your-digital-ethics-doing-15a7319f0249
Anderson, E. (2020, July 10). Controversial Detroit facial recognition got him
arrested for a crime he didn’t commit. Retrieved from Detroit Free Press:
https://www.freep.com/story/news/local/michigan/detroit/2020/07/10/faci
al-recognition-detroit-michael-oliver-robert-williams/5392166002/
Ashok, M., Madan, R., Joha, A., & Sivarajah, U. (2021). Ethical framework for
Artificial Intelligence and Digital technologies. International Journal of
Information Management, 3-17.
Aslan, A. (2022). Dijital Etik. İstanbul: Akıl Fikir Yayınları.
Badiou, A. (2013). Etik Kötülük Kavrayışı Üzerine Bir Deneme. İstanbul: Metis Yayınları.
Barak, A. (2015). Human dignity as a framework right (mother-right). In Human
Dignity: The Constitutional Value and the Constitutional Right (pp. 156-
169). Cambridge: Cambridge University Press.
Barker, W., & Ferguson, M. (2022, 1 23). Understanding digital ethics. Retrieved
from socitm.net/: https://socitm.net/resource-hub/collections/digital-
ethics/digital-ethics-in-context/
Beck, U. (1992). Risk Society: Towards a New Modernity. London: Sage
Publications.
Brennen, S., & Kreiss, D. (2014). Digitalization and Digitization. Retrieved from
Culture Digitally: https://culturedigitally.org/2014/09/digitalization-and-
digitization/
Capurro, R. (2009). Digital Ethics. 2009 Global Forum on Civilization and Peace.
Seoul: The Academy of Korean Studies.
Choung, H., David, P., & Ross, A. (2023). Trust and ethics in AI. AI & SOCIETY,
733-745.
Day, L. A. (2006). Ethics In Media Communications. United Kingdom: Thomson
Learning High Holborn House.
Donnelly, J. (2015). Normative Versus Taxonomic Humanity: Varieties of Human
Dignity in the Western Tradition. Journal of Human Rights, 14(1), 1–22.
doi:10.1080/14754835.2014.993062
European Data Protection Supervisor. (2015). Opinion 4/2015: Towards a New
Digital Ethics: Data, dignity and technology. Retrieved from
https://edps.europa.eu/sites/edp/files/publication/15-09-
11_data_ethics_en.pdf
European Union Agency for Fundamental Rights. (2020). Getting the Future
Right Artificial Intelligence and Fundamental Rights. Luxembourg.
doi:10.2811/774118
Feldman, D. (1999). Human dignity as a legal value: Part 1. Public Law.
Floridi, L. (2024, February 21). ‘Uncovered, unknown, and uncertain’: Guiding ethics in the age of AI. (M. Cummings, Interviewer)
Floridi, L., & Cowls, J. (2021). A Unified Framework of Five Principles for AI in
Society. In L. Floridi, & J. Cowls, Ethics, Governance, and Policies in
Artificial Intelligence (pp. 5-17). Springer.
Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160360, 1-3.
Fox, G., Clohessy, T., Werff, L. v., Rosati, P., & Lynn, T. (2021). Exploring the
competing influences of privacy concerns and positive beliefs on citizen
acceptance of contact tracing mobile applications. Computers in Human
Behavior , 1-15.
Fumagalli, M., & Ferrario, R. (2019). Representation of Concepts in AI: Towards
a Teleological Explanation. Joint Ontology Workshops, CAOS, (pp. 1-13).
Graz.
Gillies, A., & Smith, P. (2022). Can AI systems meet the ethical requirements of
professional decision-making in health care? AI and Ethics, 41-47.
Henshall, A. (2018, September 24). What is Digital Ethics?: 10 Key Issues Which
Will Shape Our Future. Retrieved from process.st:
https://www.process.st/digital-ethics/
Hill, K. (2020, June 24). Wrongfully accused by an algorithm. Retrieved from The
New York Times: https://www.nytimes.com/2020/06/24/technology/facial-
recognition-arrest.html
Hobikoğlu, E. H. (2011). Yeni Ekonomide Konjonktür Dalgalanmaları
Bağlamında Shumpterci Yaklaşım ve İnovasyon İlişkisi. Istanbul Journal
of Sociological Studies, 289-305.
Hoven, J. v. (2015). Ethics for the Digital Age: Where Are the Moral Spaces. In
H. Werthner, & F. Harmelen, Informatics in the Future (pp. 65-77). Vienna:
SpringerOpen.
Kordzadeh, N., & Ghasemaghaei, M. (2022). Algorithmic bias: review, synthesis,
and future research directions. European Journal of Information Systems, 387-409.
Lash, S. (2002). Critique of Information. London: Sage.
Mühlhoff, R. (2023). Predictive privacy: Collective data protection in the context
of artificial intelligence and big data. Big Data & Society SAGE, 1-14.
Mackert, M. (2020, April). Digital Ethics Orientation, Values and Attitudes for a
Digital World.
Marghalani, A., & AlQahtani, Y. (2019). Digital Ethics and Privacy: A study about
digital ethics issues, implications, and how to solve them.
McCrudden, C. (2008). Human dignity and judicial interpretation of human rights.
European Journal of International Law, 19(4), 655–724. doi:10.1093/
ejil/chn043
McLuhan, M., & Fiore, Q. (2012). Medya Mesajı, Medya Mesajdır. (S. Semerci,
Ed., & İ. Haydaroğlu, Trans.) İstanbul: MediaCat.
McStay, A. (2020). Emotional AI, soft biometrics and the surveillance of
emotional life: An unusual consensus on privacy. Big Data & Society, 1-12.
Negroponte, N. (1996). Being Digital. Vintage.
Neufeld, D., & Ma, J. (2021, June 30). Visual Capitalist. Retrieved from
https://www.visualcapitalist.com/the-history-of-innovation-cycles/
O’Mahony, C. (2012). There is no such thing as a right to dignity. International
Journal of Constitutional Law, 10(2), 551-574. doi:10.1093/icon/mos010
Ozone, T. (2019, January 4). Deontological AI Ethics. Retrieved from
medium.com: https://medium.com/@tim_ozone/deontological-ai-ethics-
c8de98211497
Pieper, A. (2012). Etiğe Giriş. İstanbul: Ayrıntı Yayınları.
PWC. (2020). Digital Ethics Orientation, Values and Attitudes for a Digital World.
PricewaterhouseCoopers.
Rodriguez, P. A. (2015). Human dignity as an essentially contested concept.
Cambridge Review of International Affairs, 28(4), 743–756.
doi:10.1080/09557571.2015.1021297
Shahriar, S., Allana, S., Hazratifard, S. M., & Dara, A. R. (2023). A Survey of
Privacy Risks and Mitigation Strategies in the Artificial Intelligence Life
Cycle. IEEE Access, 61829-61854.
Türkeri, M. (2014). Etik Teorileri. Antalya: Lotus Yayınevi.
Teo, S. A. (2023). Human dignity and AI: mapping the contours and utility of
human dignity in addressing challenges presented by AI. Law, Innovation
and Technology, 15(1), 241–279. doi:10.1080/17579961.2023.2184132
Varona, D., & Juan, S. L. (2022). Discrimination, Bias, Fairness, and Trustworthy
AI. Applied Sciences, 1-13.
Waldron, J. (2015). Is dignity the foundation of human rights? In Philosophical
Foundations of Human Rights (pp. 117–137). Oxford University Press
eBooks. doi:10.1093/acprof:oso/9780199688623.003.0006
Wang, N. (2020). “Black Box Justice”: Robot Judges and AI-based Judgment
Processes in China’s Court System. 2020 IEEE International Symposium
on Technology and Society (ISTAS) (pp. 58-65). Tempe, AZ, USA: IEEE.