EasyChair Preprint
10833
Dening Human-Centered AI: a Comprehensive
Review of HCAI Literature
Stefan Schmager, Ilias Pappas and Polyxeni Vassilakopoulou
September 5, 2023
The 15th Mediterranean Conference on Information Systems (MCIS) and the 6th Middle East & North Africa
Conference on digital Information Systems (MENACIS), Madrid 2023
DEFINING HUMAN-CENTERED AI:
A COMPREHENSIVE REVIEW OF HCAI LITERATURE
Research full-length paper
Schmager, Stefan, University of Agder, Kristiansand, Norway, stefan.schmager@uia.no
Pappas, Ilias, University of Agder, Kristiansand, Norway, ilias.pappas@uia.no
Vassilakopoulou, Polyxeni, University of Agder, Kristiansand, Norway, polyxenv@uia.no
Abstract
This paper investigates the evolution of Human-Centered Artificial Intelligence (HCAI) as an emergent
perspective on the design, development, and deployment of Artificial Intelligence (AI). It provides an
overview of HCAI definitions, from the most established to the less common definitions found in the
literature, highlighting the variety of emphases as well as the shared understandings among them. Based
on the review, the paper proposes a new comprehensive HCAI definition, synthesizing the main features
of the different definitions. Our HCAI definition highlights the necessity to understand the involved and
affected people. To identify and understand their needs and values, the new definition highlights the use
of Human-Centered Design methods. In an HCAI context, needs and values are mainly manifested
through the concepts of Augmentation, and Control. Augmentation refers to the idea of using AI to
enhance human capabilities and performance, rather than replacing human beings with machines. Con-
trol, on the other hand, deals with the governance and management of AI systems to ensure that they
operate ethically and safely. The paper highlights the importance of collaboration between AI and IS
researchers to advance the HCAI agenda and ensure that AI serves the interests of society.
Keywords: Human-Centered AI, HCAI, Artificial Intelligence, Human-Centered Design, Augmentation,
Control.
1 Introduction
Day by day we encounter an abundance of news about novel AI technologies, breakthroughs, and scary stories, from popular media as well as scientific research. There is no shortage of alarming wake-up calls reminding us to pay close attention to how these technologies will evolve and to act accordingly. Correspondingly, a growing number of practitioners and researchers are addressing questions
on how to mitigate risks and make AI systems align with human needs and values (Dignum, 2019;
Google, 2019; IBM, 2020; Microsoft, 2020; Schmager, 2022; Vassilakopoulou et al., 2022). It is still
common to design and develop AI systems with the primary goal of creating algorithms that excel at
performing specific tasks, e.g., image recognition, natural language processing, or autonomous driving.
The current emphasis lies on optimizing performance metrics, such as accuracy, speed, or resource ef-
ficiency, rather than explicitly considering human values or societal impacts. As a consequence of these
prevalent practices, Human-Centered Artificial Intelligence (HCAI) emerged as a different point of view
on Artificial Intelligence (AI) design, development, and deployment that prioritizes human needs and
aspirations. HCAI acknowledges the impact of AI systems on individuals, societies, and the overall human experience. It puts humans at the center, and its research strategies emphasize that the next frontier of AI is not just technological but also humanistic and ethical.
HCAI is a crucial perspective for the responsible design, development, and deployment of AI. Digital
technologies may have a dual role, sometimes being part of the problem or facilitating solutions to ex-
isting problems (Dwivedi et al., 2022; Pappas et al., 2023). Placing human beings at the center allows
the creation of AI systems that are more inclusive, trustworthy, and aligned with human values and goals
(Schoenherr et al., 2023; Shneiderman, 2020a). It can guide AI design ensuring that AI can support and
augment human abilities and find ways to address ethical implications and unintended consequences of
AI (Xu, 2019). Yvonne Rogers calls HCAI “the new zeitgeist” (2022).
Different researchers from various disciplines have attempted to formulate their perspectives on HCAI
introducing different definitions. However, a widely agreed-upon definition of HCAI has not yet been
reached (Renz & Vladova, 2021). Having a shared and comprehensive definition as a conceptual bed-
rock could allow for clear and unambiguous communication and collaboration. It can help to avoid
vague or ambiguous language, reducing the potential for misunderstandings, and enabling the alignment
of strategies, actions, and common goals. It could promote consistency and coherence in discussions,
decision-making, and problem-solving. Furthermore, a shared definition encourages critical thinking, as
it provides a starting point for deeper exploration of the involved concepts, evaluating implications,
weaknesses, and strengths. Overall, a shared definition facilitates a meaningful debate and will contrib-
ute to advancing the scientific discourse about the responsible introduction of AI technologies. Against
this backdrop of ambiguity, this literature review aims to answer the research question: How is Human-
Centered AI defined in the existing literature?
The objective of this work is to trace the evolution of HCAI mapping the ever-growing landscape of
HCAI definitions in the literature and providing conceptual clarity by suggesting a comprehensive definition. This paper aims to accelerate research on HCAI, helping to produce AI-infused products, systems, and services with widespread benefits for individual users and the whole of society, in domains including education, healthcare, environmental preservation, and community safety (Shneiderman, 2020b). The
rest of the paper is structured as follows. First, the research method is presented. Then, different HCAI
definitions are presented and synthesized into a new comprehensive definition. After that, a discussion
is provided before concluding the paper.
2 Research Method
For this systematic literature review we applied the methodological framework by Kitchenham (2004), following her structured literature review process. The framework consists of three steps: planning, conducting, and reporting the review. In the first step, we developed a detailed search protocol, defining specific search terms as well as inclusion/exclusion criteria. In the second step, the review was conducted, which includes identification, selection, quality appraisal, evaluation, and synthesis of the literature. In the last step, the findings of the literature review are summarized and reported.
For this literature review, we conducted a search in the SCOPUS database on July 7th, 2022. SCOPUS was chosen because it is one of the most comprehensive databases of scientific literature and offers advanced search capabilities. In addition, SCOPUS employs rigorous quality control measures to ensure the quality and accuracy of the indexed literature, which helps to minimize the risk of low-quality or irrelevant articles. To collect resources as widely as possible, the search string was deliberately kept broad. An automated search with the search string TITLE-ABS-KEY ( "human cent* AI" OR "human cent* artificial intelligence" ) was conducted. This ensured that the search considered American English as well as British English spellings of the search terms. The search was not limited by a time frame since, given the novelty of the concept, a time limitation was deemed unnecessary. To ensure a high degree of relevance in the literature review corpus, the following exclusion criteria were defined before the initial search and screening phases:
- Topic overviews not related to a conceptual understanding of AI.
- Studies discussing purely technical improvements.
- No AI relation, or AI only as an auxiliary aspect of the research.
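To make the wildcard behavior concrete, the truncation in "human cent*" can be approximated with a regular expression. The sketch below is only an illustration of how a single pattern covers both the American and British spellings; it is not the actual Scopus matching logic, and the example titles are invented:

```python
import re

# Illustrative approximation of the Scopus wildcard "human cent*":
# "cent\w*" matches any continuation of "cent", covering both the
# American ("centered") and British ("centred") spellings.
pattern = re.compile(
    r"human[- ]cent\w* (ai|artificial intelligence)",
    re.IGNORECASE,
)

titles = [
    "Human-Centered AI in Public Services",
    "A Human-Centred Artificial Intelligence Framework",
    "Deep Learning for Image Recognition",  # no HCAI term: not matched
]

matches = [t for t in titles if pattern.search(t)]
# matches contains the first two titles only
```

A real Scopus query also searches abstracts and keywords (TITLE-ABS-KEY), which this sketch does not reproduce.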
Stage            Description            Number
Identification   Initial Results        215
1st Screening    After Abstract read    120
2nd Screening    After Full text read   109
Table 1. Review stages with the total number of sources at each stage
In the first screening stage, all abstracts from the initial list of 215 sources were read, which eliminated 95 sources matching the exclusion criteria. In the second screening phase, the remaining 120 sources were read in full and assessed for their suitability for the literature review. A total of 109 eligible, non-duplicate documents related to HCAI were identified.
The analysis was performed on a SCOPUS database export in the form of a spreadsheet, including information about Authors, Title, Year, Source, Abstract, and Keywords. The analysis examined whether each paper includes a definition of HCAI and, if so, whether it reuses a pre-existing definition or introduces a new one. If existing definitions were used, the respective references were marked in the spreadsheet. This coding was performed for all the papers in the corpus. It led to the identification of patterns and groupings within the literature: the most used definitions, various combinations of definitions, and common concepts across the different definitions, as well as the discovery that a significant number of publications do not use a definition at all.
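The coding workflow described above can be sketched as a simple tally over the exported records. The records, field names, and counts below are invented for illustration only and are not the actual review data:

```python
from collections import Counter

# Invented records mimicking the coded spreadsheet: for each paper we note
# whether it includes an HCAI definition and, if it reuses a pre-existing
# one, which reference it cites (None = no reused definition).
papers = [
    {"title": "Paper A", "has_definition": True,  "reuses": "Shneiderman (2020a)"},
    {"title": "Paper B", "has_definition": False, "reuses": None},
    {"title": "Paper C", "has_definition": True,  "reuses": "Xu (2019)"},
    {"title": "Paper D", "has_definition": False, "reuses": None},
    {"title": "Paper E", "has_definition": True,  "reuses": "Shneiderman (2020a)"},
    {"title": "Paper F", "has_definition": False, "reuses": None},
]

# Count papers without any HCAI definition
no_definition = sum(1 for p in papers if not p["has_definition"])

# Rank the reused definitions to find the most common ones
reuse_counts = Counter(p["reuses"] for p in papers if p["reuses"])
ranking = reuse_counts.most_common()
# ranking == [("Shneiderman (2020a)", 2), ("Xu (2019)", 1)]
```

In the actual review this coding was done manually in the spreadsheet; the sketch only shows how the resulting codes yield the pattern counts reported in the findings.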
3 Findings
Approximately two out of three papers reviewed did not include a definition of the term Human-Centered Artificial Intelligence at all. In the remaining literature, we identified different definitions, with authors and professionals construing the concept in related yet diverse ways. In the paragraphs that follow, we first present the HCAI definitions by Shneiderman (2020a,
2020c, 2020d) which are the most widely used. After that, the paper provides a comprehensive overview
of other HCAI definitions found in the literature, highlighting their various emphases, and shared un-
derstandings. In the final subsection, we provide a comprehensive HCAI definition synthesizing the
literature.
3.1 HCAI as a paradigm shifting approach – Shneiderman’s definitions
The most widely used definition for HCAI is the one developed by Ben Shneiderman, a seasoned scholar in the field of Human-Computer Interaction (HCI). This definition reads as: “HCAI focuses on amplifying, augmenting, and enhancing human performance in ways that make systems reliable, safe, and trustworthy. These systems also support human self-efficacy, encourage creativity, clarify responsibility, and facilitate social participation” (Shneiderman, 2020a). By following the progression of how the term HCAI is used and described in Shneiderman’s topical publications, we can observe an evolution from being a term used to describe a conceptual framework, towards becoming a name for a paradigm-shifting approach for the development of AI technologies. Although the term HCAI has been used in the literature already from 1999 (Garcia, 1999), Shneiderman mentions it for the first time in his article “Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy” (2020a). In the article, the term HCAI
is used for a two-dimensional framework that aims to enable high levels of human control as well as
high levels of automation. The framework breaks the prevailing assumption of inverse proportionality
for these two dimensions. The article’s argument is that an increase in automation does not inevitably
implicate a decrease in human control or vice versa. Instead, systems should support both control and
automation in order to be reliable, safe, and trustworthy. Such systems will increase human performance
while supporting human self-efficacy, mastery, creativity, and responsibility.
Shneiderman develops this understanding of HCAI further in his article “Human-Centered Artificial
Intelligence: Three Fresh Ideas” (Shneiderman, 2020c). Besides the two-dimensional framework of au-
tomation and control, he calls for an overall shift in language, imagery, and metaphors. In his later work
“Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-
centered AI Systems” (Shneiderman, 2020d) he suggests 15 recommendations borrowing from software
engineering practices to create reliable, safe, and trustworthy HCAI by enabling designers to translate
widely discussed ethical principles into professional practices in large organizations with clear sched-
ules. From this paper, it becomes clear that for Shneiderman, HCAI is not just a two-dimensional con-
ceptual framework anymore, but it expands into considerations about processes and outcomes. Shnei-
derman refines his understanding of HCAI further in the paper “Human-Centered AI: A New Synthesis”
(Shneiderman, 2021b), where he states that building AI-driven technologies that serve human needs
requires combining AI-based algorithms with human-centered design (HCD) thinking. The fundamental
conviction is that the adoption of user-centered design methodologies will lead to HCAI systems that
support human goals, activities, and values. By that, Shneiderman indicates that HCAI is not the sole
responsibility of a single discipline. Designers, engineers, product managers, government agencies,
evaluators, and educators need to include HCAI ways of thinking. The goal is to enable a human-centered future with technologies that amplify, augment, and enhance human abilities and performance.
Shneiderman’s work has been the conceptual foundation for many studies that take an HCAI perspec-
tive. Costabile et al. (2022) build upon the work of Shneiderman to explore three different interaction
strategies for HCAI. In their study, they aim to develop a new class of tools for the interactive explora-
tion of complex datasets and iterative meaning-making activities for humans with different levels of
expertise. These tools can amplify, augment, and enhance human performance, in ways that make sys-
tems reliable, safe, and trustworthy. Vassilakopoulou and Pappas (2022), in their study on Chatbot
Human Agent handovers, draw from Shneiderman’s work and define HCAI as the emerging discipline
for AI-enabled systems that amplify and augment human abilities while preserving human control and
ensuring ethically aligned design. Komischke (2021) uses Shneiderman’s framework of human control and automation in two use cases concerning the design and development of digital productivity and collaboration applications. Nagitta et al. (2022) examine the role of public procurement and procurement professionals
in relation to HCAI principles and practical recommendations from Shneiderman (2020a, 2020d), high-
lighting the significance of HCAI for the benefit and safety of the public. Beckert (2021) uses the work
of Shneiderman in his analysis of the state of play of implementing Trustworthy AI.
3.2 HCAI definitions beyond Shneiderman
Beyond the work by Shneiderman, we also identified other HCAI definitions used in the reviewed liter-
ature. These include definitions by Xu (2019) and Xu et al. (2022), the Stanford Institute for Human-
Centered Artificial Intelligence (HAI, 2021), Riedl (2019), Auernhammer (2020), Dignum & Dignum
(2020), and Holzinger (2022a, 2022b).
The definition developed by Xu (2019) and Xu et al. (2022) reads as: [HAI] “includes three main com-
ponents: 1) ethically aligned design, which creates AI solutions that avoid discrimination, maintain
fairness and justice, and do not replace humans; 2) technology that fully reflects human intelligence,
which further enhances AI technology to reflect the depth characterized by human intelligence (more
like human intelligence); and 3) human factors design to ensure that AI solutions are explainable, com-
prehensible, useful, and usable”. Xu (2019) contributes to Human-Centered AI by proposing a frame-
work that combines three components: “Ethically Aligned Design”, “Technology Enhancement” and
Human Factors Design”, which focuses on the intersection of AI and HCI. The main aim of Xu’s work
is to explore how the HCI community can contribute to delivering AI solutions that are explainable,
comprehensible, useful, and usable. This framework has been later refined showing that the individual
components of Human Factors, Technology, and Ethics need to create synergies (Xu et al., 2022). The
“Human Factors” component aims to ensure that AI solutions are comprehensible, useful, and usable to
support human-driven decision-making processes. The “Technology” component is about defining hu-
man needs, designing, prototyping, and testing solutions together with users. This can contribute to de-
veloping human-controlled AI and to augmenting human abilities rather than replacing humans. The
“Ethics” component relates to the creation of AI solutions that guarantee fairness, justice, and account-
ability. He et al. (2022) used Xu’s framework in their study on challenges and opportunities for Trust-
worthy Robots and Autonomous Systems. They concluded that AI human-centeredness requires con-
sideration of users and their cognition along with an understanding of reasoning processes and
knowledge at the human level.
The Stanford Institute for Human-Centered Artificial Intelligence (HAI) states that Human-Centered AI
aims “[...] to augment the abilities of, address the societal needs of, and draw inspiration from human beings” (HAI, 2021). The goal of HAI is to advance AI research, education, policy, and practice to
improve the human condition, augment human intelligence, and thereby enhance human welfare by
using machine intelligence. Stanford’s HAI institute follows three objectives: technical reflection about
the depth characterized by human intelligence; improving human capabilities rather than replacing them
and focusing on AI’s impact on humans (Stanford GDPi, 2018). In a New York Times article (2018),
HAI Co-Director Fei-Fei Li states an aim to extend the popularity of human-centered approaches to AI
toward more collaborative possibilities of mixed initiatives between human workers and AI agents. Li
gives an example of how AI automation should focus on enhancing the strengths of humans “like dex-
terity and adaptability” by “keeping tabs on more mundane tasks and protecting against human error,
fatigue, and distraction” (Wang et al., 2019).
Riedl (2019) proposed the following HCAI definition: “Human-centered AI is a perspective on AI and
ML [machine learning] that intelligent systems must be designed with awareness that they are part of a
larger system consisting of human stakeholders, such as users, operators, clients, and other people in
close proximity”. This means, an understanding of human sociocultural norms as part of a theory of
mind as well as capabilities to produce explanations that nonexpert end-users can understand, are
needed. For Riedl, HCAI means building systems to understand the often culturally specific expectations
and needs of humans and to help humans understand the systems in return. Riedl breaks human-centered
AI into two critical capacities, understanding humans, and being able to help humans understand AI.
Riedl’s work has served as the conceptual foundation for the study by Elahi et al. (2021) on improving
the privacy of older app users in smart cities. Also, Böckle et al. (2021) used the HCAI definition by
Riedl to guide the design of their study on the effect of personality traits on trust in AI-enabled user
interfaces.
According to Auernhammer’s (2020) definition, “Human-centered AI needs to focus on three integrated
perspectives when designing AI systems: rationalistic (technology), humanistic (people), and judicial
(policies)”. Auernhammer argues that pan-disciplinary research from fields like psychology, cognitive
science, computer science, engineering, business management, law, and design is required to develop a
genuinely human-centered approach for AI, since in essence, HCAI is about people. The work by Au-
ernhammer has been used by Subramonyam et al. (2021) for the development of a Process Model for
Co-Creating AI Experiences (AIX). Subramonyam and colleagues provide designers with practical
guidance on how to work with AI systems as a design material and offer design considerations for in-
corporating data probes.
Dignum & Dignum (2020) describe an AI system as “Human-centered” if the system does not operate in isolation but is socially aware of performing its tasks for someone, within a local and temporal context. It is argued that AI systems are socio-technical systems in the sense that the social context in which these systems are developed, used, and acted upon is a fundamental consideration. This means that for a Human-Centered approach to AI, the technical component cannot be separated from the socio-technical system (Dignum, 2019; Schoenherr et al., 2023), a perspective shared by Riedl (2018).
The High-Level Expert Group of the European Commission (AI-HLEG, 2019), in which Dignum takes part, developed AI guidelines that include human-centricity. Although the group states that its ultimate ambition is to reach trustworthy AI, the formulated guidelines provide a definition for HCAI. They define a human-centric approach to AI as one in which “humans enjoy a unique and inalienable moral status of primacy in the civil, political, economic, and social fields. AI systems need to be human-centric, resting on a commitment to their use in the service of humanity and the common good, intending to improve human welfare and freedom”.
Holzinger (2022a, 2022b) defines HCAI as a synergistic approach of “artificial intelligence” and “nat-
ural intelligence” to empower, amplify, and augment human performance, rather than replace people.
Its goal is to promote the robustness of AI algorithms and to align AI solutions with human values,
ethical principles, and legal requirements to ensure safety and security, enabling trustworthy AI. Steels (2020) argues that human-centric AI is only going to be possible when AI comes to grips with meaning and understanding. He builds upon the work by Nowak et al. (2018), which points to HCAI as a “possible path” to avoid dystopian developments. The authors distinguish the ways AI is being built into “Function-Oriented AI” and “Human-Centered AI”. HCAI is envisioned as working synergistically with humans for the benefit of humans and human society, focusing on enhancing and empowering humans rather than replacing and controlling them.
Several articles combine more than one definition of HCAI. For instance, Herrmann (2022) employs the
HCAI definitions of Shneiderman (2020d) and the framework by Xu (2019) in research on interaction
modes for promoting human capabilities. The identified interaction modes highlight both human and AI
strengths. Examples include the provision of explanations and possibilities for exploration, testing, and
re-training with human involvement and keeping humans in control by allowing for intervention and
vetoing. Another example of a combination of definitions is the research by Yang et al. (2021). Yang
and colleagues in their conceptual work on smart learning environments state that HCAI can be inter-
preted from two perspectives. The first is AI under human control, describing the interplay between
human control and AI automation (Shneiderman, 2020a). The other is AI on the human condition, which
refers to having explainable and interpretable computation and judgment processes and continuous ad-
justments of AI to societal phenomena (HAI, 2021).
3.3 A comprehensive HCAI definition based on the literature
The table that follows provides an overview of the most used definitions of Human-Centered AI identi-
fied within the reviewed literature (Table 2).
Shneiderman (2020a): “HCAI focuses on amplifying, augmenting, and enhancing human performance in ways that make systems reliable, safe, and trustworthy. These systems also support human self-efficacy, encourage creativity, clarify responsibility, and facilitate social participation.”

Xu (2019): “[HAI] includes three main components: 1) ethically aligned design, which creates AI solutions that avoid discrimination, maintain fairness and justice, and do not replace humans; 2) technology that fully reflects human intelligence, which further enhances AI technology to reflect the depth characterized by human intelligence (more like human intelligence); and 3) human factors design to ensure that AI solutions are explainable, comprehensible, useful, and usable.”

HAI (2021): “[Human-Centered AI aims] to augment the abilities of, address the societal needs of, and draw inspiration from human beings.”

Riedl (2019): “Human-centered AI is a perspective on AI and ML that intelligent systems must be designed with awareness that they are part of a larger system consisting of human stakeholders, such as users, operators, clients, and other people in close proximity.”

Auernhammer (2020): “Human-centered AI needs to focus on three integrated perspectives when designing AI systems: rationalistic (technology), humanistic (people), and judicial (policies).”

Dignum & Dignum (2020): “Human-centered means that a system should have the human partner always as part of the focus for deliberation. This means that any task of the AI system should not be done in isolation, but the task should be done for someone, in some context (place and time). And if the actions of the AI system affect people directly or indirectly it should be aware of this and take it into consideration when deliberating.”

AI-HLEG (2019): “AI systems need to be human-centric, resting on a commitment to their use in the service of humanity and the common good, intending to improve human welfare and freedom.”

Holzinger (2022a): “Human-centered AI we define as a synergistic approach to align AI solutions with human values, ethical principles, and legal requirements to ensure safety and security, enabling trustworthy AI.”

Rogers (2019): “By this [HCAI] we mean designing AI systems that enhance human capacities and improve human experiences rather than replacing them through automation.”

Table 2. Overview of Human-Centered AI definitions in the literature
The literature review revealed much conceptual overlap among the identified definitions coming from
the different scholars of HCAI. At the same time, the review also highlights the diversity in emphases
and approaches toward an understanding of what Human-Centered Artificial Intelligence could entail.
Based on the different definitions, we are proposing a new comprehensive definition of HCAI to represent the richness of the scholarly understandings:
“Human-Centered AI (HCAI) focuses on understanding purposes, human
values and desired AI properties in the creation of AI systems by applying
Human-Centered Design practices. HCAI seeks to augment human
capabilities while maintaining human control over AI systems, by
considering the necessity, context, and ethical and legal conditions of the AI
system as well as promoting individual and societal well-being.”
Our definition aims to emphasize the fundamentally humane character of HCAI while also encompassing its contributing constituents. By incorporating Human-Centered Design methodologies, e.g., stakeholder participation, HCAI underscores constant reflection on whether an envisioned AI system is in accordance with the pluralism of human needs and values. Further, our definition highlights context sensitivity, including the acknowledgment of stakeholder diversity, a comprehension of the context of use, and the awareness that an AI system is not a single entity, but rather a part of a larger structure. Understanding the characteristics of an AI system, including its scope, usage implications, and sociocultural context, is crucial for HCAI. Finally, our definition addresses the consideration of ethical
and legal requirements, to ensure a responsible and lawful design, development, and deployment of an
AI system. In essence, our definition delineates the overarching objective of HCAI to consider and pro-
mote the well-being of individuals as well as the whole of society.
4 Discussion
This literature review illustrates multiple takes on what “Human-Centered Artificial Intelligence” could
mean. This is not surprising, since defining a term that is linked to a constantly evolving technology like
AI, is like trying to hit a moving target. Deconstructing the term HCAI into its two parts, “Human-Centeredness (HC)” and “Artificial Intelligence (AI)”, further illustrates this difficulty. While there are definitions available for human-centeredness, for example, from HCI, Interaction Design, and UX Design (Xu, 2019), a universally agreed definition of AI is yet to be found. And even if such a lack of consensus is
accepted, the question remains if HCAI just describes the intersection of HC and AI, or if it constitutes
something greater than the sum of its parts. As the literature review unveiled, for some, HCAI is under-
stood as the amalgamation of Human-Centered Design and AI. Several definitions highlight the neces-
sity of incorporating Human-Centered Design methods in the design and development processes of AI
systems. Yet for others, HCAI constitutes nothing less than a paradigm shift, moving beyond the prev-
alent technology-centered approaches towards AI driven by human values.
Developing a common and shared definition can play an important role in advancing scientific research by promoting clarity, collaboration, and progress. In the realm of scientific inquiry, a shared understanding of key concepts and terms is essential to foster clear communication among researchers and minimize misunderstandings. A shared understanding promotes a meaningful exchange of ideas that allows scholars and practitioners to build upon each other's work, develop new hypotheses, and advance the scientific discourse. A common conceptual ground can encourage collaboration among researchers as well as with practitioners and help to align efforts, combine expertise, and work towards common goals. Furthermore, such a shared understanding can enhance the reliability and reproducibility of research findings. This is crucial for validating and building upon existing research, strengthening the knowledge base, and fostering knowledge transfer within the scientific community. At the same time, agreed-upon definitions and a shared understanding of the involved concepts facilitate critical thinking, fostering intellectual growth and driving scientific progress. They allow for focused debates, evaluating the strengths and weaknesses of different approaches, and critically analyzing the implications of research outcomes.
Our analysis of existing HCAI definitions identified a common understanding that a human-centered
approach to AI foregrounds human needs and values. This is most notably manifested in the two
concepts of Augmentation and Control. The maxim of “augmentation instead of replacement” is based on the understanding that technology is created with the purpose of supporting humans, not making them redundant. Augmentation is ingrained in HCAI conceptualizations in different ways. For Shneiderman, HCAI is about the creation of super tools, powered by advanced technologies like deep neural networks but still considered tools because they come into existence to support their users (Shneiderman, 2020a, 2020c, 2020d). Xu et al. (2022) have included the postulate of not replacing humans in the “Ethics” component of their model for the human-centered development of AI. Xu and colleagues shift the perspective from a purely technical question, i.e., “Can we?”, towards an ethical one, i.e., “Should we?”. Similarly, in the vision of Stanford’s Human-Centered AI Institute, the improvement of human capabilities rather than their replacement is one of three core objectives (HAI, 2021). The aim for human augmentation is also evident in the synergistic approaches to HCAI by Holzinger et al. (2022a) and Nowak et al. (2018). These papers share the notions of empowering, amplifying, and augmenting human performance, rather than replacing people.
Furthermore, the concept of Control is also closely connected to HCAI in the literature. Shneiderman argues that control and automation are not necessarily two ends of the same spectrum but rather two separate dimensions. In his framework, high levels of control and high levels of automation are not mutually exclusive; Shneiderman claims that both control and automation are needed for HCAI systems (Shneiderman, 2020a). Xu et al. (2022) highlight a shift from human-centered automation to human-controlled autonomy. The same understanding is implied in the definitions by Holzinger et al. (2022a) and in Xu’s earlier work (2019). The concept of control raises questions about the ultimate power of decision, considering how human beings remain part of decision-making processes in which AI also takes part.
To gauge appropriate levels of augmentation and control, our definition highlights the importance of established Human-Centered Design (HCD) methods and practices. HCD describes a creative approach to problem-solving that starts with understanding the people involved and designing around their needs and values. An HCD approach is described as cultivating deep empathy with the people you’re designing with, generating ideas, building different prototypes, sharing what you’ve made together, and eventually putting your innovative new solution out in the world (IDEO, 2023). The US Office of Science and Technology Policy, in its National AI Research and Development Strategic Plan, has recently explicitly favored human factors, usability, and human-centered design research methods (OSTP, 2023). In particular, the report argues for the analysis of user needs and requirements through iterative design methods to understand and address the ethical, legal, and societal implications of AI and to ensure safety and security.
Enhancing human abilities with the help of technology, while exploring appropriate levels of automation, supervision, and decision-making, is a long-standing object of inquiry. Back in 1989, Bannon and Schmidt noted that, by changing the allocation of functions between humans and their implements, changes in technology induce changes in work organization (Bannon & Schmidt, 1989). As AI becomes widespread and ubiquitous across work settings, and more functions get delegated to AI-infused systems, the relevance of HCAI becomes clear. Liikkanen (2019) argues that human-centered design will be crucial in further defending humans, particularly underprivileged users at risk of being mistreated by AI.
5 Conclusion
This literature review provides an overview of HCAI definitions, from the most established to the less common ones. It highlights the partly shared conceptual understanding, but also the existing diversity of emphases among them. Based on the review, we propose a new comprehensive HCAI definition, synthesizing the main attributes of the different existing definitions. Our proposed HCAI definition highlights the necessity to engage with and understand the involved and affected people. To identify and understand their needs and values, our new definition also highlights the use of HCD methods. With regard to such needs and values, a particular focus has been identified for the concepts of Augmentation and Control. Augmentation describes the idea of enhancing human capabilities and performance using AI, rather than replacing human beings with machines. Control deals with aspects of the governance and management of AI systems to ensure they operate ethically and safely.
Overall, the variety of HCAI definitions indicates a steadily growing interest, which gives an optimistic outlook for the future. According to Rogers (2022), we are currently reimagining rather than revisiting longstanding dystopian visions of AI. She describes the nascent HCAI research as an eclectic discipline full of inclusive voices, doing exciting, enabling, and empowering work. The comprehensive definition introduced here can be used as a foundation for researchers and practitioners to ensure a common understanding of the concept, enabling consistency, communication, and collaboration.
Analyzing the landscape of definitions for an emerging and constantly evolving concept does not come without limitations. The first limitation is of a rather practical nature, as the wealth of literature related to Human-Centered AI is rapidly increasing; the pace of new academic output on this highly relevant topic is only exceeded by the number of technological breakthroughs it tries to examine. Furthermore, there might be literature that describes the same fundamental idea of HCAI but has not been captured by our keyword search because it uses different terminology. Another limitation stems from criticism of the general HCD idea. Norman (2005) states that HCD has become such a dominant theme that its principles can be misleading, wrong, or at times even harmful. A further-reaching criticism of HCD has been formulated by scholars proposing “More-Than-Human design”, which extends the universe of design beyond human needs and values (Giaccardi & Redström, 2020; Nicenboim et al., 2020; Coskun et al., 2022).
Human involvement in the creation and critique of the design of AI technologies demonstrates how society can benefit from having many kinds of human-machine interaction at its fingertips, rather than focusing on the consequences of a seismic shift in machine autonomy. Going forward, the field of AI will have far-reaching impacts within the workplace and beyond. As wonderfully phrased by Yang et al. (2021), “AI may be a current trend, but humanistic beauty is eternal”.
References
Auernhammer, J. (2020). Human-centered AI: The role of Human-centered Design Research in the development of AI. In Boess, S., Cheung, M. and Cain, R. (eds.), Synergy - DRS International Conference 2020, 11-14 August, Held online. https://doi.org/10.21606/drs.2020.282
Bannon, L. J., & Schmidt, K. (1989). CSCW: Four characters in search of a context. In ECSCW 1989: Proceedings of the First European Conference on Computer Supported Cooperative Work. Computer Sciences Company, London.
Beckert, B. (2021, September). The European way of doing Artificial Intelligence: The state
of play implementing Trustworthy AI. In 2021 60th FITCE Communication Days
Böckle, M., Yeboah-Antwi, K., & Kouris, I. (2021). Can you trust the black box? The effect
of personality traits on trust in AI-enabled user interfaces. In Artificial Intelligence in HCI:
Second International Conference, AI-HCI 2021, Held as Part of the 23rd HCI International
Conference, HCII 2021, Virtual Event, July 24–29, 2021, Proceedings (pp. 3-20). Cham:
Springer International Publishing.
Coskun, A., Cila, N., Nicenboim, I., Frauenberger, C., Wakkary, R., Hassenzahl, M., ... &
Forlano, L. (2022). More-than-human Concepts, Methodologies, and Practices in HCI. In
CHI Conference on Human Factors in Computing Systems Extended Abstracts (pp. 1-5).
Costabile, M. F., Desolda, G., Dimauro, G., Lanzilotti, R., Loiacono, D., Matera, M., & Zancanaro, M. (2022). A Human-centric AI-driven Framework for Exploring Large and Complex Datasets. Proceedings of CoPDA2022 - Sixth International Workshop on Cultures of Participation in the Digital Age: AI for Humans or Humans for AI? June 7, 2022, Frascati (RM), Italy.
Dignum, V. (2019). Responsible artificial intelligence: how to develop and use AI in a responsible way. Cham: Springer.
Dignum, F., & Dignum, V. (2020). How to center AI on humans. In NeHuAI 2020, 1st International Workshop on New Foundations for Human-Centered AI, Santiago de Compostela, Spain, September 4, 2020 (pp. 59-62).
Dwivedi, Y. K., Hughes, L., Kar, A. K., Baabdullah, A. M., Grover, P., Abbas, R., ... &
Wade, M. (2022). Climate change and COP26: Are digital technologies and information
management part of the problem or the solution? An editorial reflection and call to action.
International Journal of Information Management, 63, 102456.
Elahi, H., Castiglione, A., Wang, G., & Geman, O. (2021). A human-centered artificial intelligence approach for privacy protection of elderly App users in smart cities. Neurocomputing, 444, (pp. 189-202).
Garcia, O. (1999). An Approach to Complexity from a Human-Centered Artificial Intelligence
Perspective. In Encyclopedia of Computer Science and Technology, Vol. 40 (A. Kent and
J. G. Williams, eds.), Marcel Dekker, New York (pp. 1-16).
Giaccardi, E., & Redström, J. (2020). Technology and more-than-human design. Design Issues, 36(4), (pp. 33-44).
Google. (2019). Responsible AI practices. Retrieved 16 February from https://ai.google/responsibilities/responsible-ai-practices/
HAI Research. (2021). Guiding Human-Centered AI. Stanford Institute for Human-Centered
Artificial Intelligence. Retrieved January 28, 2023, from https://hai.stanford.edu/research
He, H., Gray, J., Cangelosi, A., Meng, Q., McGinnity, T. M., & Mehnen, J. (2020). The challenges and opportunities of artificial intelligence for trustworthy robots and autonomous systems. In 2020 3rd International Conference on Intelligent Robotic and Control Engineering (IRCE) (pp. 68-74). IEEE.
Herrmann, T. (2022). Promoting Human Competences by Appropriate Modes of Interaction for Human-Centered-AI. In Artificial Intelligence in HCI: 3rd International Conference, AI-HCI 2022, Held as Part of the 24th HCI International Conference, HCII 2022, Virtual Event, June 26–July 1, 2022, Proceedings (pp. 35-50). Cham: Springer International Publishing.
High-Level Expert Group on Artificial Intelligence (AI-HLEG). (2019). Ethics Guidelines for Trustworthy AI. Brussels: European Commission. https://ec.europa.eu/futurium/en/ai-alliance-consultation/
Holzinger, A., Saranti, A., Angerschmid, A., Retzlaff, C. O., Gronauer, A., Pejakovic, V., Medel-Jimenez, F., Krexner, T., Gollob, C. & Stampfer, K. (2022a). Digital transformation in smart farm and forest operations needs human-centered AI: challenges and future directions. Sensors, 22(8), 3043.
Holzinger, A., Kargl, M., Kipperer, B., Regitnig, P., Plass, M., & Müller, H. (2022b). Personas for artificial intelligence (AI) an open source toolbox. IEEE Access, 10, 23732-23747.
IBM. (2020). AI ethics (IBM’s multidisciplinary, multidimensional approach helping advance responsible AI). Retrieved 16 February from https://www.ibm.com/artificial-intelligence/ethics
IDEO (2023). Design thinking frequently asked questions (FAQ). https://designthinking.ideo.com/faq/whats-the-difference-between-human-centered-design-and-design-thinking. Retrieved 29 Jun 2023.
Kitchenham, B. (2004). Procedures for performing systematic reviews. Keele, UK, Keele
University, 33(2004), 1-26.
Komischke, T. (2021). Human-centered artificial intelligence considerations and implementations: a case study from software product development. In Artificial Intelligence in HCI: Second International Conference, AI-HCI 2021, Held as Part of the 23rd HCI International
Conference, HCII 2021, Virtual Event, July 24–29, 2021, Proceedings (pp. 260-268).
Cham: Springer International Publishing.
Li, F. (2018). Opinion | How to Make A.I. That’s Good for People. The New York Times.
https://www.nytimes.com/2018/03/07/opinion/artificial-intelligence-human.html
Liikkanen, L. A. (2019). It ain’t nuttin’ new – interaction design practice after the AI hype. In Human-Computer Interaction–INTERACT 2019: 17th IFIP TC 13 International Conference, Paphos, Cyprus, September 2–6, 2019, Proceedings, Part IV 17 (pp. 600-604). Springer International Publishing.
Microsoft. (2020). Responsible AI (policies, practices, and tools that make up a framework for Responsible AI by Design). Retrieved 16 February from https://www.microsoft.com/en-us/ai/responsible-ai
Nagitta, P. O., Mugurusi, G., Obicci, P. A., & Awuor, E. (2022). Human-centered artificial intelligence for the public sector: The gate keeping role of the public procurement professional. Procedia Computer Science, 200, (pp. 1084-1092).
US Office of Science and Technology Policy (OSTP) - Select Committee on Artificial Intelligence. (2023). National Artificial Intelligence Research and Development Strategic Plan 2023. (https://digital.library.unt.edu/ark:/67531/metadc2114122/: accessed June 29, 2023), University of North Texas Libraries, UNT Digital Library, https://digital.library.unt.edu.
Nicenboim, I., Giaccardi, E., Søndergaard, M. L. J., Reddy, A. V., Strengers, Y., Pierce, J., &
Redström, J. (2020). More-than-human design and AI: in conversation with agents. In
Companion publication of the 2020 ACM designing interactive systems conference (pp.
397-400).
Norman, D. A. (2005). Human-centered design considered harmful. Interactions, 12(4), 14-
19.
Nowak, A., Lukowicz, P., & Horodecki, P. (2018). Assessing artificial intelligence for humanity: Will AI be the our biggest ever advance? Or the biggest threat [Opinion]. IEEE Technology and Society Magazine, 37(4), (pp. 26-34).
Pappas, I. O., Mikalef, P., Dwivedi, Y. K., Jaccheri, L., & Krogstie, J. (2023). Responsible
Digital Transformation for a Sustainable Society. Information Systems Frontiers, 1-9.
Renz, A., & Vladova, G. (2021). Reinvigorating the discourse on human-centered artificial
intelligence in educational technologies. Technology Innovation Management Review,
11(5).
Riedl, M. O. (2019). Human‐centered artificial intelligence and machine learning. Human
Behavior and Emerging Technologies, 1(1), 33-36.
Rogers, Y., Brereton, M., Dourish, P., Forlizzi, J., & Olivier, P. (2021). The dark side of interaction design. Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, Article 152, 1–2. https://doi.org/10.1145/3411763.3450397
Rogers, Y. (2022). Commentary: human-centred AI: the new zeitgeist. Human–Computer Interaction, 37(3), (pp. 254-255).
Schmager, S. (2022). From commercial agreements to the social contract: human-centered AI guidelines for public services. Proceedings of the 14th Mediterranean Conference on Information Systems (MCIS 2022). Association for Information Systems (AIS).
Schoenherr, J. R., Abbas, R., Michael, K., Rivas, P., & Anderson, T. D. (2023). Designing AI
using a human-centered approach: Explainability and accuracy toward
trustworthiness. IEEE Transactions on Technology and Society, 4(1), 9-23.
Shneiderman, B. (2020a). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), (pp. 495-504).
Shneiderman, B. (2020b). Design lessons from AI’s two grand goals: human emulation and
useful applications. IEEE Transactions on Technology and Society, 1(2), 73-82.
Shneiderman, B. (2020c). Human-centered artificial intelligence: Three fresh ideas. AIS
Transactions on Human-Computer Interaction, 12(3), (pp. 109-124).
Shneiderman, B. (2020d). Bridging the gap between ethics and practice: guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Transactions on Interactive Intelligent Systems (TiiS), 10(4), (pp. 1-31).
Shneiderman, B. (2021a). Tutorial: Human-centered AI: Reliable, safe and trustworthy. In
26th International Conference on Intelligent User Interfaces-Companion (pp. 7-8).
Shneiderman, B. (2021b). Human-centered AI: A new synthesis. In Human-Computer Interaction–INTERACT 2021: 18th IFIP TC 13 International Conference, Bari, Italy, August 30–September 3, 2021, Proceedings, Part I 18 (pp. 3-8). Springer International Publishing.
Stanford GDPi (2018). Human-Centered AI: Building Trust, Democracy and Human Rights by Design. Medium. https://medium.com/stanfords-gdpi/human-centered-ai-building-trust-democracy-and-human-rights-by-design-2fc14a0b48af
Steels, L. (2020). Personal dynamic memories are necessary to deal with meaning and understanding in human-centric AI. In NeHuAI@ECAI (pp. 11-16).
Subramonyam, H., Seifert, C., & Adar, E. (2021). Towards a process model for co-creating
AI experiences. In Designing Interactive Systems Conference 2021 (pp. 1529-1543).
Vassilakopoulou, P., & Pappas, I. O. (2022). AI/Human augmentation: a study on chatbot–
human agent handovers. In Co-creating for Context in the Transfer and Diffusion of IT:
IFIP WG 8.6 International Working Conference on Transfer and Diffusion of IT, TDIT
2022, Maynooth, Ireland, June 15–16, 2022, Proceedings (pp. 118-123). Cham: Springer
International Publishing.
Vassilakopoulou, P., Parmiggiani, E., Shollo, A., & Grisot, M. (2022). Responsible AI: Concepts, critical perspectives and an Information Systems research agenda. Scandinavian Journal of Information Systems, 34(2), 3.
Wang, D., Weisz, J. D., Muller, M., Ram, P., Geyer, W., Dugan, C., Tausczik, Y., Samulowitz, H. & Gray, A. (2019). Human-AI collaboration in data science: Exploring data scientists' perceptions of automated AI. Proceedings of the ACM on human-computer interaction, 3(CSCW), (pp. 1-24).
Xu, W. (2019). Toward human-centered AI: a perspective from human-computer interaction.
interactions, 26(4), (pp. 42-46).
Xu, W., Dainoff, M. J., Ge, L., & Gao, Z. (2022). Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI. International Journal of Human–Computer Interaction, 39(3), (pp. 494-518).
Yang, S. J., Ogata, H., Matsui, T., & Chen, N. S. (2021). Human-centered artificial intelligence in education: Seeing the invisible through the visible. Computers and Education: Artificial Intelligence, 2, 100008.
... Digital technologies -such as AI -can either exacerbate existing challenges or provide innovative solutions, depending on how they are integrated into societal frameworks . A human-centered approach to AI is not merely about integrating AI into the daily lives of human beings but doing so in a manner that respects and enhances the human condition (Schmager et al., 2023). HCAI aims to enhance and safeguard human welfare, emphasizing the integration of AI systems that respect human values and amplify their abilities. ...
... Riedl (2019) notes that involving stakeholders helps to align AI technologies with the multifaceted spectrum of human needs and societal values. Similarly, Schmager et al. (2023) highlight that HCAI is committed to understanding and incorporating human purposes, values, and the desired properties of AI systems through HCD methodologies. Especially in the context of the public sector, HCAI not only addresses the augmentation of human capacities but can also contribute to the ethical governance of AI applications, balancing technological advancements with societal norms and individual rights. ...
Conference Paper
Full-text available
Most existing design and development guidelines for Human-Centered AI primarily cater to a commercial context, they are not tailored to the specific needs of public services. This paper presents an Action Design Research study proposing public service-specific design principles for Human-Centered AI. The design principles are informed by multiple iterations of empirical research with citizens and public service employees acknowledging the multi-stakeholder nature of Human-Centered AI. The study recognizes an evolving understanding towards prioritizing human values and well-being in AI technologies. Furthermore, it considers the relationship between technology features in the public sector and citizens' needs to ensure that AI systems are developed with a commitment to fostering public trust and welfare. The study contributes to theory and practice by advancing the scholarly discourse on Human-Centered AI and informing AI strategies and implementations within the public sector.
... In the context of the Fifth Industrial Revolution (5IR), where humanmachine collaboration is critical to creating customer value, a novel concept known as HCAI (Human-Centered Artificial Intelligence) has surfaced. Researchers from a variety of fields have attempted to express their opinions about HCAI by offering different definitions(Schmager et al. 66 ). Therefore, synthesizing and reviewing these definitions in many different domains is necessary. ...
... If only keyword "Human centered artificial intelligence" was used in Web of Science, more than three million results have been shown in all fields. If this result was sort by categories, large number of articles focused on Computer Science AI (see Figure 2) Very recently, Schmager et al. 66 gives an overview of Figure 2: Ten categories have most related work to HCAI the HCAI literature. Section 5 of Dom 1 provides a detailed analysis and summary of the relevant works in HCAI. ...
Chapter
The pervasive integration of Artificial Intelligence (AI) in various facets of human life, driven by increasingly sophisticated algorithms, underscores the importance of its safety and reliability. AI’s role in Industry 4.0, connecting machines and processes to solve complex issues, is paving the way for the 5.0 Industrial Revolution (5IR). This revolution addresses global challenges such as climate change, pandemics, and conflicts. Ensuring the safety and reliability of AI systems is crucial, as these technologies significantly impact society. This chapter provides an overview of AI safety and reliability, discussing major advancements, methodologies for reliability assessment, and practical examples of AI applications. It highlights the Human-Centered AI (HCAI) concept, which emphasizes aligning AI development with human values. The chapter also explores machine learning’s role in enhancing AI reliability and addresses the challenges and ethical concerns associated with AI deployment. Underscore the need for ongoing research and interdisciplinary collaboration to ensure AI systems are safe, reliable, and beneficial to humanity in the evolving landscape of Industry 5.0 with applications in many fields such as healthcare, manufacturing, human resources management, etc.
... Together, they can provide a richer, more human-centered user experience. When implementing solutions based on these technologies, it is crucial to consider critical factors to ensure that the approaches align with the interests of individuals [6]. Therefore, it is not only necessary to identify and optimize technical criteria or metrics but also consider those within the human and social domain. ...
Article
Full-text available
The enhancement of mechanisms to protect the rights of migrants and refugees within the European Union represents a critical area for human-centered artificial intelligence (HCAI). Traditionally, the focus on algorithms alone has shifted toward a more comprehensive understanding of AI’s potential to shape technology in ways which better serve human needs, particularly for disadvantaged groups. Large language models (LLMs) and retrieval-augmented generation (RAG) offer significant potential to bridging gaps for vulnerable populations, including immigrants, refugees, and individuals with disabilities. Implementing solutions based on these technologies involves critical factors which influence the pursuit of approaches aligning with humanitarian interests. This study presents a proof of concept utilizing the open LLM model LLAMA 3 and a linguistic corpus comprising legislative, regulatory, and assistance information from various European Union agencies concerning migrants. We evaluate generative metrics, energy efficiency metrics, and metrics for assessing contextually appropriate and non-discriminatory responses. Our proposal involves the optimal tuning of key hyperparameters for LLMs and RAG through multi-criteria decision-making (MCDM) methods to ensure the solutions are fair, equitable, and non-discriminatory. The optimal configurations resulted in a 20.1% reduction in carbon emissions, along with an 11.3% decrease in the metrics associated with bias. The findings suggest that by employing the appropriate methodologies and techniques, it is feasible to implement HCAI systems based on LLMs and RAG without undermining the social integration of vulnerable populations.
... Finally, with AI assistance, case workers can take better-informed and potentially more consistent decisions. Across these exemplars, AI serves as a supportive and complementary tool, augmenting humans that can benefit from automation without relinquishing control [3,17]. ...
Conference Paper
Full-text available
This paper presents insights for effectively deploying Human-Artificial Intelligence (AI) collaboration in the context of case management. Through an empirical study involving public sector organizations and case management enterprise systems providers, the research identifies specific tasks well-suited for augmenting human capabilities with AI. Additionally, the study points to capabilities required for organizations involved including regulators and suggests tactics for successful transition management. The findings highlight the potential of AI for efficiency improvements while emphasizing the importance of human involvement for trustworthy outcomes. By identifying opportunities for increasing automation while maintaining human control, this study contributes to both research and practice in the field of Human-AI collaboration.
... To address the challenges brought about by AI, a humancentered AI (HCAI) approach has been proposed to address the ignorance of humans and society as a priority in the current technology-driven approach to developing and deploying AI systems [1], [3], [17], [18], [19], [20], [21], [22], [23], [24]. For example, Shneiderman [1] and Xu [2] specifically proposed their HCAI frameworks. ...
Article
Full-text available
While artificial intelligence (AI) offers significant benefits, it also has negatively impacted humans and society. A human-centered AI (HCAI) approach has been proposed to address these issues. However, current HCAI practices have shown limited contributions due to a lack of sociotechnical thinking. To overcome these challenges, we conducted a literature review and comparative analysis of sociotechnical characteristics with respect to AI. Then, we propose updated sociotechnical systems (STS) design principles. Based on these findings, this paper introduces an intelligent sociotechnical systems (iSTS) framework to extend traditional STS theory and meet the demands with respect to AI. The iSTS framework emphasizes human-centered joint optimization across individual, organizational, ecosystem, and societal levels. The paper further integrates iSTS with current HCAI practices, proposing a hierarchical HCAI (hHCAI) approach. This hHCAI approach offers a structured approach to address challenges in HCAI practices from a broader sociotechnical perspective. Finally, we provide recommendations for future iSTS and hHCAI work.
... Ces démarches reprennent notamment des principes de la conception centrée utilisateur(Norman and Draper, 1986 ;Bannon, 2011), en y adjoignant des critères éthiques. Plusieurs chercheurs(Reidl, 2019 ;Xu, 2019;Shneiderman, 2021, Schmager, et al., 2023 ou institutions portent ces démarches, par exemple aux Etats-Unis d'Amériques où des instituts de recherches dédiés à l'HCAI ont été créés par Stanford University, UC Berkeley, et le MIT. . Globalement, ces démarches mettent en avant le principe d'augmentation de l'humain par l'IA plutôt que son remplacement. ...
Research
Full-text available
Le rapport offre une analyse critique et nuancée des enjeux liés à l'introduction de l'IA dans les entreprises et institutions, allant au-delà des discours simplistes sur les bénéfices de l’automatisation au travail et la crainte de perte d'emplois. Il met en lumière l'importance cruciale des choix organisationnels et de la participation des travailleurs dans le déploiement de l'IA. Les effets de l'IA sur l'emploi et les conditions de travail ne sont pas prédéterminés, mais dépendent largement des décisions prises par les entreprises/institutions et les acteurs sociaux. Ce rapport met également en garde contre des risques de l’IA pour le travail (subordination accrue, fiabilité des actions, sens du travail, diminution de la créativité et d’uniformisation de la pensée et des produits) et d’une concentration excessive du pouvoir de marché des entreprises développant ou déployant l'IA, et appelle à une régulation renouvelée pour prévenir les dérives potentielles pour le travail et l’emploi. Pour ce faire, le rapport préconise une approche basée sur quatre piliers pour un usage soutenable de l'IA permettant de préserver l'emploi et le travail : le développement de la capacité d’apprentissage des organisations, un dialogue social renouvelé, des conduites de projet participatives et donnant une valeur à l’expérience des professionnels, et des expérimentations d'usage in situ. Ainsi, il constitue un outil précieux pour les syndicats, offrant des pistes de compréhension et d'action pour aborder les défis posés par l'IA dans le monde du travail. Il encourage une approche proactive et informée, permettant aux représentants des travailleurs de participer pleinement aux débats et décisions concernant l'introduction de l'IA dans leurs secteurs respectifs.
Article
Full-text available
The rapid development of artificial intelligence (AI) has posed many dilemmas for higher education, one of which is the development of university educators’ competencies in using AI technologies in the educational process. The purpose of this study is to present the current state of the problem of university educators’ professional development in the sphere of AI in the theory and practice of education. To achieve the goal, theoretical and empirical methods were used. The group of theoretical methods includes the analysis of scientific literature and Internet sources, the study and generalization of advanced pedagogical experience, comparative analysis, content analysis, and systematization. The group of empirical methods includes document analysis, questionnaires, and surveys. The first part of the article presents the analysis of international and Russian regulatory documents, which showed the significance of the studied issue for the state and society and also allowed us to establish that the legal framework regulating AI in higher education is currently in a stage of active formation. The second part of the article presents a review of scientific publications by foreign and Russian scientists, which helped to highlight the theoretical aspects of the current state of the problem of university educators’ professional development in the field of AI, as well as to identify its insufficient coverage. The third part of the article presents the results of a study of educational practice in the form of a systematization of educator development programs currently offered by universities and commercial organizations. The systematization is made on two bases: by the means of implementation and by the target audience. The fourth part of the article describes the authors’ experience in the development and implementation of a professional development program for educators on the creation of educational content using neural networks, which took place at South Ural State University (National Research University). The conclusion states the necessity of systematic study of the problem, of coordinated action by educational organizations and state bodies to develop a supporting regulatory framework, and of creating conditions that promote the continuous development of educators’ AI competencies.
Chapter
Giving voice to a diversity of human perspectives, this chapter grounds the conceptual ideas that were presented in Chapters 1 and 2 into reality through personal essays that illustrate the kaleidoscopic interplay of humans and technology across sectors, disciplines, and cultures. Starting with the 4 macro-questions of existence (Why—Who—Where—What) these narratives from innovators, leaders, educators, and artists demonstrate the principle that everything is connected in a continuum of constant change where everything matters, and every action has amplified ripple effects online and offline. Their accounts also illustrate the Win4 that will be explored in more detail in Chapter 4, whereby action that is taken for others serves the person who acts, the one for whom action is taken, the community they live in, and wider society. This ripple effect is amplified in a hybrid world.
Article
Full-text available
Being responsible for Artificial Intelligence (AI), harnessing its power while minimising risks for individuals and society, is one of the greatest challenges of our time. A vibrant discourse on Responsible AI is developing across academia, policy making and corporate communications. In this editorial, we demonstrate how the different literature strands intertwine but also diverge and propose a comprehensive definition of Responsible AI as the practice of developing, using and governing AI in a human-centred way to ensure that AI is worthy of being trusted and adheres to fundamental human values. This definition clarifies that Responsible AI is not a specific category of AI artifacts that have special properties or can undertake responsibilities; humans are ultimately responsible for AI, for its consequences and for controlling AI development and use. We explain how the four papers included in this special issue manifest different Responsible AI practices and synthesise their findings into an integrative framework that includes business models, services/products, design processes and data. We suggest that IS research can contribute socially relevant knowledge about Responsible AI, providing insights on how to balance instrumental and humanistic AI outcomes, and propose themes for future IS research on Responsible AI.
Conference Paper
Full-text available
Human-centered Artificial Intelligence (HCAI) is a term frequently used in the discourse on how to guide the development and deployment of AI in responsible and trustworthy ways. Major technology actors including Microsoft, Apple and Google are fostering their own AI ecosystems and they are also providing HCAI guidelines. However, these guidelines are mostly oriented to commercial contexts. This paper focuses on HCAI for public services. Approaching human-AI interaction through the lens of social contract theory we identify amendments to improve the suitability of existing commercially-oriented HCAI guidelines for the public sector. Following the Action Design Research methodological approach, we worked with a public organization to apply, assess, and adapt the "Google PAIR guidelines", a well-known framework for human-centered AI development. Three HCAI considerations that are important for public services were identified and proposed as amendments to the existing guidelines: a) articulation of a clear value proposition by weighing public good vs. individual benefit, b) definition of reuse boundaries for public data given the relationship between citizens and their government, c) accommodation of citizen diversity considering differences in technical and administrative literacy. This paper aims to shift the perspective within human-AI interaction, acknowledging that exchanges are not always subject to commercial agreements but can also be based on the mechanisms of a social contract.
Article
Full-text available
The increasing deployment of artificial intelligence (AI) powered solutions for the public sector is hoped to change how developing countries deliver services in key sectors such as agriculture, healthcare, education, and social services. And yet AI has a high potential for abuse and creates risks which, if not managed and monitored, will jeopardize the respect and dignity of the most vulnerable in society. In this study, we argue for delineating public procurement’s role in human-centred AI (HCAI) discourses, focusing on developing countries. The study is based on an exploratory inquiry and gathered data among procurement practitioners in Uganda and Kenya, which have similar country procurement regimes: ones where traditional forms of competitive procurement apply, rather than the more recent pre-commercial procurement mechanisms that suit AI procurement. We found limited customization in AI technologies, a lack of developed governance frameworks, and little knowledge of and distinction between AI procurement and other typical technology procurement processes. We propose a framework which, in the absence of good legal frameworks, can allow procurement professionals to embed HCAI principles in AI procurement processes.
Article
Full-text available
The main impetus for the global efforts toward the current digital transformation in almost all areas of our daily lives is due to the great successes of artificial intelligence (AI), and in particular, the workhorse of AI, statistical machine learning (ML). The intelligent analysis, modeling, and management of agricultural and forest ecosystems, and of the use and protection of soils, already play important roles in securing our planet for future generations and will become irreplaceable in the future. Technical solutions must encompass the entire agricultural and forestry value chain. The process of digital transformation is supported by cyber-physical systems enabled by advances in ML, the availability of big data and increasing computing power. For certain tasks, algorithms today achieve performances that exceed human levels. The challenge is to use multimodal information fusion, i.e., to integrate data from different sources (sensor data, images, *omics), and explain to an expert why a certain result was achieved. However, ML models often react to even small changes, and disturbances can have dramatic effects on their results. Therefore, the use of AI in areas that matter to human life (agriculture, forestry, climate, health, etc.) has led to an increased need for trustworthy AI with two main components: explainability and robustness. One step toward making AI more robust is to leverage expert knowledge. For example, a farmer/forester in the loop can often bring experience and conceptual understanding to the AI pipeline; no AI can do this. Consequently, human-centered AI (HCAI) is a combination of "artificial intelligence" and "natural intelligence" to empower, amplify, and augment human performance, rather than replace people. To achieve practical success of HCAI in agriculture and forestry, this article identifies three important frontier research areas: (1) intelligent information fusion; (2) robotics and embodied intelligence; and (3) augmentation, explanation, and verification for trusted decision support. This goal will also require an agile, human-centered design approach spanning three generations (G). G1: Enabling easily realizable applications through immediate deployment of existing technology. G2: Medium-term modification of existing technology. G3: Advanced adaptation and evolution beyond the state of the art.
Article
Full-text available
While AI has benefited humans, it may also harm humans if not appropriately developed. The priority of current HCI work should be the transition from conventional human interaction with non-AI computing systems to interaction with AI systems. We conducted a high-level literature review and a holistic analysis of current work in developing AI systems from an HCI perspective. Our review and analysis highlight the new changes introduced by AI technology and the new challenges that HCI professionals face when applying the human-centered AI (HCAI) approach in the development of AI systems. We also identified seven main issues in human interaction with AI systems, which HCI professionals did not encounter when developing non-AI computing systems. To further enable the implementation of the HCAI approach, we identified new HCI opportunities tied to specific HCAI-driven design goals to guide HCI professionals in addressing these new issues. Finally, our assessment of current HCI methods shows the limitations of these methods in supporting the development of HCAI systems. We propose alternative methods that can help overcome these limitations and effectively help HCI professionals apply the HCAI approach to the development of AI systems. We also offer strategic recommendations for HCI professionals to effectively influence the development of AI systems with the HCAI approach, eventually developing HCAI systems.
Article
Full-text available
Personas have successfully supported the development of classical user interfaces for more than two decades by mapping users’ mental models to specific contexts. The rapid proliferation of Artificial Intelligence (AI) applications makes it necessary to create new approaches for future human-AI interfaces. Human-AI interfaces differ from classical human-computer interfaces in many ways, such as gaining some degree of human-like cognitive, self-executing, and self-adaptive capabilities and autonomy, and generating unexpected outputs that require non-deterministic interactions. Moreover, the most successful AI approaches are so-called "black box" systems, where the technology and the machine learning process are opaque to the user and the AI output is far from intuitive. This work shows how the personas method can be adapted to support the development of human-centered AI applications, and we demonstrate this on the example of a medical context. This work is, to our knowledge, the first to provide personas for AI using an openly available Personas for AI toolbox. The toolbox contains guidelines and material supporting persona development for AI as well as templates and pictures for persona visualisation. It is ready to use and freely available to the international research and development community at https://github.com/human-centered-ai-lab/PERSONAS. Additionally, an example from medical AI is provided as a best-practice use case. This work is intended to help foster the development of the novel human-AI interfaces that will be urgently needed in the near future.
Article
In the ever-evolving area of digital transformation, following responsible and sustainable practices is essential. This editorial article discusses the importance of responsible digital transformation, emphasizing the need for academia, private and public organizations, civil society, and individuals to work together in developing digital business models that generate shared value while addressing societal challenges. The article highlights the emergence of corporate digital responsibility (CDR) and the shift from Industry 4.0 to Industry 5.0, which focuses on human-centric approaches and human-AI partnerships. Furthermore, it underscores the need for interdisciplinary research and systematic approaches encompassing the various dimensions of sustainability. By integrating sustainable ICT principles into digital transformation initiatives, organizations can contribute to a more sustainable and responsible digital future. The suggestions in this paper, coupled with the research contributions included in the special issue, seek to offer a broader foundation to support responsible digital transformations for sustainable societies.
Article
One of the major criticisms of Artificial Intelligence is its lack of explainability. A claim is made by many critics that without knowing how an AI may derive a result or come to a given conclusion, it is impossible to trust in its outcomes. This problem is especially concerning when AI-based systems and applications fail to perform their tasks successfully. In this Special Issue Editorial, we focus on two main areas, explainable AI (XAI) and accuracy, and how both dimensions are critical to building trustworthy systems. We review prominent XAI design themes, leading to a reframing of the design and development effort that highlights the significance of the human, thereby demonstrating the importance of human-centered AI (HCAI). The HCAI approach advocates for a range of deliberate design-related decisions, such as those pertaining to multi-stakeholder engagement and the dissolving of disciplinary boundaries. This enables the consideration and integration of deep interdisciplinary knowledge, as evidenced in our example of social cognitive approaches to AI design. This Editorial then presents a discussion on ways forward, underscoring the value of a balanced approach to assessing the opportunities, risks and responsibilities associated with AI design. We conclude by presenting papers in the Special Issue and their contribution, pointing to future research endeavors.
Conference Paper
The last decade has witnessed the expansion of design space to include the epistemologies and methodologies of more-than-human design (MTHD). Design researchers and practitioners have been increasingly studying, designing for, and designing with nonhumans. This panel will bring together HCI experts who work on MTHD with different nonhumans as their subjects. Panelists will engage the audience through discussion of their shared and diverging visions, perspectives, and experiences, and through suggestions for opportunities and challenges for the future of MTHD. The panel will provoke the audience into reflecting on how the emergence of MTHD signals a paradigm shift in HCI and human-centered design, what benefits this shift might bring and whether MTHD should become the mainstream approach, as well as how to involve nonhumans in design and research.