ScienceDirect
Available online at www.sciencedirect.com
Procedia Computer Science 253 (2025) 2347–2357
1877-0509 © 2025 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0)
Peer-review under responsibility of the scientific committee of the 6th International Conference on Industry 4.0 and
Smart Manufacturing
10.1016/j.procs.2025.01.295
6th International Conference on Industry 4.0 and Smart Manufacturing
The role of human error in human robot interaction
Carmen Esposito a,*, Valentina De Simone a, Valentina Di Pasquale a, Marta Rinaldi b, Marcello Fera b, Salvatore Miranda a
a Department of Industrial Engineering, University of Salerno, Via Giovanni Paolo II, 132, 84084 Fisciano (SA), Italy
b Department of Engineering, University of Campania “Luigi Vanvitelli”, via Roma 29, 81031 Aversa, Italy
Abstract
In the context of Human-Robot Interaction (HRI) within manufacturing environments, Human Error (HE) remains a critical factor
affecting performance, safety, and efficiency. Understanding and categorizing these errors is essential for the design of safe and efficient collaborative work cells, yet it remains a topic insufficiently covered by the scientific literature. This study aims to fill this gap by systematically investigating human errors in HRI, offering a comprehensive review of existing classifications, their
underlying causes, and the research gaps in current literature. The primary objective is to develop a preliminary taxonomy of human
errors specific to HRI, which will serve as a foundation for improving the design of collaborative cells. By providing actionable
insights, this taxonomy supports the optimization of both performance and safety in industrial operations. Despite these
contributions, the research highlights ongoing challenges in fully grasping the complex interactions and feedback mechanisms that
drive human errors. Therefore, the study calls for future research on adaptive systems, zero-shot reasoning, and enhanced feedback
loops to further minimize human error in HRI settings.
© 2024 The Authors. Published by ELSEVIER B.V. This is an open access article under the CC BY-NC-ND license
(https://creativecommons.org/licenses/by-nc-nd/4.0)
Peer-review under responsibility of the scientific committee of the 6th International Conference on Industry 4.0 and Smart
Manufacturing
Keywords: collaborative robot; human performance; human error; human reliability; industrial systems.
* Corresponding author. Tel.: +39 089 964033.
E-mail address: caesposito@unisa.it
1. Introduction
The interaction between humans and robots is becoming increasingly common across a broad spectrum of
industries. From transportation and defense to emergency response and manufacturing, these advanced systems are
being integrated to enhance efficiency, safety, and productivity [1]. The widespread adoption of Human-Robot
Interaction (HRI) technologies is rapidly reshaping industries, enhancing productivity, and redefining the nature of
human work [2]. As robots become integral to collaborative environments, the interaction between humans and robots
has garnered significant attention. In fact, among the three types of interaction – coexistence, cooperation, and
collaboration – the development of collaborative systems has attracted increasing interest in the literature. Despite this, a critical issue persists: human performance in these interactions. Factors such as operators' feelings of trust and safety towards robots, their mental workload, as well as the design of the robots themselves, can affect human performance, especially in HRI contexts
[3, 4]. In addition, performance is often evaluated by the time taken to complete a task and the number of errors made
by humans [5].
First of all, human errors can be defined as human activities that do not achieve the intended result [6]. This result can be understood as the realization of a product with certain characteristics within a well-defined time. Thus, any deviation
in terms of quality and missed deadlines can be considered as an error. Regarding the quality of products, with a focus
on the assembly process, human errors account for approximately 25% of general assembly defects in automobile
engines and are responsible for up to 70% of equipment malfunctions and 40% of product repairs in the broader
manufacturing industry [7]. This underscores the significant impact of human error on product quality, which may
lead to the need to rework the product, with consequent additional material consumption and waste production [8]. Since greater variability leads to lower product quality, the introduction of a cobot may help reduce human error thanks to the reduction of data variability, as well as of the operator's cognitive effort [9]. Cognitive effort and, more generally, mental fatigue have been shown to increase the human error rate and task completion time [10]. Thus, a production configuration that involves interaction between human and robot is a non-deterministic situation that can produce errors and deviate from the expected task duration [11]; however, when humans receive support from a robot, their fatigue can be reduced and, with it, the error rate [2].
To the best of the authors’ knowledge, a clear definition of errors in the new manufacturing context characterized
by HRI is not presented in the literature. There is a manifest need for action on error management. What
emerges from the literature is that many researchers have tried to classify errors but there are multiple possible
interpretations and ways of clustering them. Starting from a systematic literature review and the analysis of errors
theory, the purpose of this paper is to identify existing error classifications in HRI environments and unify them all in
an attempt to achieve a widely accepted taxonomy. This is a novelty in the HRI literature since HRI has been studied
across several domains, but human errors have not been explicitly and fully analyzed.
The paper is structured as follows: in Section 2 the methodology for the systematic literature review (SLR) and the
taxonomy definition are described; in Section 3 results of both SLR and taxonomy definition are presented, while
Section 4 contains the discussion and conclusions.
2. Methodology
To achieve the purpose of this paper, a systematic literature review was conducted to better understand the nature
of human error in HRI environments and to identify existing classifications. The Research Questions (RQs) that this
study aims to answer are listed below:
• RQ1: What aspects of the human operator and the work environment are affected by human error?
• RQ2: What types of human error and error causes are detected in HRI applications?
The next step is to gather the various classifications to define a taxonomy of human error, delineating the various
levels of detail.
2.1. Systematic literature review
With the aim of identifying all existing classifications of human errors in the literature, with a particular focus on
human-robot interaction in the industrial field, a Systematic Literature Review (SLR) was conducted, following the
methodology proposed by [7, 12]. The search was executed using the Scopus database in May 2024. This process
comprised four consecutive steps: defining keywords, conducting a constrained literature database search, selecting
papers based on screening criteria, and analyzing the selected papers along with data extraction.
Four groups of keywords were formulated, related to human-robot interaction, the human role in these interactions,
the events occurring during the interaction, and the relevant field of study (Table 1). The groups were combined using
the logical operator AND, while the keywords within each group were combined using the logical operator OR. The
search criteria were restricted to papers written in English from 2014 onwards and included only conference
proceedings, articles, or reviews as document types. Finally, only papers that belonged to the following subject areas
were included: Engineering, Computer science, Mathematics, Decision Sciences, Business, Management and
Accounting, Social sciences, and Psychology. The third step was divided into three screening phases. The first screening consisted of title reading and the second of abstract reading; in these phases, papers were excluded if they were not related to the defined keywords, focused on interactions between humans and generic machines rather than human-robot interaction (HRI), did not mention theoretical classifications or practical evaluations of possible human errors, dealt with non-industrial domains such as construction, or focused solely on human safety. The third screening phase consisted of reading the full text of all the available papers, which were evaluated
with the same criteria described above. Papers that focused on robot errors in HRI were excluded, as the study's focus
was on human behavior, not machine errors.
Table 1. List of keywords selected for the systematic literature review.
Group A: Human robot interaction, Cobot, HRI, Human robot collaboration, Human robot cooperation, Human robot coexistence, Collaborative robot
Group B: Human, Worker, Employee, Partecipant, Sample
Group C: Error, Performance, Reliability, Collision
Group D: Manufacturing, Logistics, Industr*, Production, Assembly, Maintenence
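The exact query string submitted to Scopus is not reported in the paper. As an illustration only, the short Python sketch below shows one way the four keyword groups of Table 1 (terms copied verbatim) could be combined – OR within each group, AND across groups – together with the period, language, document-type, and subject-area restrictions described above. The Scopus field codes (TITLE-ABS-KEY, PUBYEAR, LANGUAGE, DOCTYPE, SUBJAREA) and the specific limit values are assumptions of this sketch, not the authors' documented query.

```python
# Illustrative sketch: build a Scopus advanced-search query from the keyword
# groups of Table 1. The actual query used by the authors is not reported.

KEYWORD_GROUPS = {
    "A": ["Human robot interaction", "Cobot", "HRI", "Human robot collaboration",
          "Human robot cooperation", "Human robot coexistence", "Collaborative robot"],
    "B": ["Human", "Worker", "Employee", "Partecipant", "Sample"],
    "C": ["Error", "Performance", "Reliability", "Collision"],
    "D": ["Manufacturing", "Logistics", "Industr*", "Production", "Assembly", "Maintenence"],
}

DOC_TYPES = ["ar", "cp", "re"]   # assumed codes: articles, conference papers, reviews
SUBJ_AREAS = ["ENGI", "COMP", "MATH", "DECI", "BUSI", "SOCI", "PSYC"]  # assumed codes


def or_block(terms):
    """OR the keywords of one group inside a single TITLE-ABS-KEY field search."""
    return "TITLE-ABS-KEY(" + " OR ".join(f'"{t}"' for t in terms) + ")"


def build_query():
    """AND the four keyword groups together and append the search restrictions."""
    groups = " AND ".join(or_block(terms) for terms in KEYWORD_GROUPS.values())
    limits = (
        " AND PUBYEAR > 2013"
        ' AND LANGUAGE("English")'
        " AND (" + " OR ".join(f'DOCTYPE("{d}")' for d in DOC_TYPES) + ")"
        " AND (" + " OR ".join(f'SUBJAREA("{s}")' for s in SUBJ_AREAS) + ")"
    )
    return groups + limits


if __name__ == "__main__":
    print(build_query())
```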
After completing the screening process, a set of articles was obtained, with a primary focus on human errors in
industrial HRI. Additionally, the snowball technique was used to include relevant papers cited within the analyzed
papers. In the end, because the selected papers covered different types of tasks in their case studies and relied on different error taxonomies, they made it possible to evaluate many plausible situations and to hypothesize a new standard classification of human errors.
The full list of selected papers is given in Appendix A (Table A.1), while the methodology used and the results
obtained are summarized in Fig. 1.
To complete the last step of the SLR, a spreadsheet file was created to compile the most relevant information from the selected papers (a minimal sketch of such a record is given after the list below). The analysis focused on:
• Main bibliometric characteristics of papers: the articles were classified based on information about the authors, the
year of publication, and the type of document.
• Content analysis: the following information was recorded for each selected study:
○ Type of interaction: for papers with case studies, the authors state which type of interaction they aimed to explore with the cobot; three types can be found: coexistence, cooperation, and collaboration [13]. Coexistence refers to a scenario where humans and robots operate in the same environment but with minimal or no direct interaction and with no need to coordinate their actions. Cooperation involves humans and robots working together in the same environment with some level of interaction, sharing information or tasks, but still performing them separately. Collaboration refers to a high level of interaction where humans and robots work closely together on
shared tasks, often involving direct physical or cognitive interaction; in this kind of scenario, humans and robots are interdependent and contribute to the completion of the same task.
○ Deduced interaction: if there is no clear declaration about the type of interaction, it has been deduced from the type of task.
○ Taxonomy: some papers may not propose a new classification of human errors but may refer to another
taxonomy existing in the literature. This information is of paramount importance to find papers to include in the
definition of a new taxonomy that takes into account all the previous classifications and proposes a standard
classification of human errors.
○ Type of identified errors: for each paper, the human errors identified were classified as "general" and "specific",
depending on whether the proposed classification was generic and more theoretical or targeted to the particular
case study.
○ Type of factors: those factors that can influence or be influenced by human errors (e.g., mental and physical fatigue, quality, safety).
○ Existing relation between evaluated factors and identified errors: it has been highlighted whether there are
proven correlations between the factors and human error.
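For illustration, the per-paper record compiled in the spreadsheet can be sketched as a simple data structure. The field names and types below are illustrative shorthand for the items listed above, not the authors' actual spreadsheet schema.

```python
# Minimal sketch of the per-paper record used in the content analysis.
# Field names are illustrative shorthand, not the authors' spreadsheet schema.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class Interaction(Enum):
    COEXISTENCE = "coexistence"
    COOPERATION = "cooperation"
    COLLABORATION = "collaboration"
    NOT_SPECIFIED = "not specified"


@dataclass
class PaperRecord:
    # main bibliometric characteristics
    authors: list[str]
    year: int
    document_type: str                                   # e.g. "article" or "conference paper"
    # content analysis
    interaction: Interaction                             # type of interaction declared in the paper
    interaction_deduced: Optional[Interaction] = None    # deduced from the task if not declared
    referenced_taxonomy: Optional[str] = None            # existing taxonomy the paper relies on, if any
    error_type: str = "generic"                          # "generic", "specific", or "generic + specific"
    influencing_factors: list[str] = field(default_factory=list)  # e.g. fatigue, quality, safety
    factor_error_relation_proven: bool = False           # whether a correlation with HE is demonstrated
```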
Fig. 1. Methodology used for the systematic literature review.
2.2. Taxonomy definition
First, human error theory was examined in general terms, without focusing on HRI. To classify human error, it is necessary to distinguish between phenotype and genotype. The phenotype is how the error appears and represents the empirical basis for its classification; the genotype, on the other hand, is a contributing cause of the erroneous action [14]. Regarding the phenotype, several classifications have been proposed in the literature. For example, a common
classification is the one based on variations in operations sequence, like the omission of an action or the substitution
of an action with another [15, 16]. Another proposal of error classification is based on the difference between product
defects and process defects [17], with a further categorization of human errors and robot errors [9]. Some others also
distinguish between task execution time errors [5, 14], assembly process errors [5, 9, 17], and part selection and
positioning errors [17]. Second, earlier classifications focused on HE in general were considered. In this way, it is
possible to go from higher levels of abstraction down to the specific error, adapting previous classifications to the HRI
field. Finally, to assemble this multitude of classifications into a new comprehensive overview, a new taxonomy was developed to classify the different possible types of errors and thus facilitate their identification in HRI environments.
3. Results
3.1. SLR results
The bibliometric indicators taken into account correspond to the type of article and the year of publication. As
regards the distribution of articles over the years, it can be seen that, over the 10-year period considered, the highest concentration is in the last year, with more than 60% of all the selected articles published in that year (Fig. 2a). The increase in attention to
HRI might be explained in two ways. First of all, the growing concern for environmental sustainability: although one
might think that a manual solution would involve less energy consumption than a collaborative one, it has actually
been shown that this difference is not substantial [8], making the introduction of robots within global manufacturing
companies very common. Suffice it to say that today half of such companies have at least one robot in their factories
[22]. In addition, in recent years advancements in new technologies have enhanced the safety of machines, including
robots. Consequently, the concept of human-robot interaction (HRI) has emerged and shows great promise for
improving the performance of industrial systems. This concept also considers the social aspects of human labor, which
may become less repetitive and exhausting [18]. Regarding the type of document, journal articles prevail over conference papers (Fig. 2a). Turning to the content analysis, the type of interaction was one of the elements
considered in the evaluation of human error in HRI. Seven out of thirteen papers [5, 9, 17–21] indicated the type of
interaction, with a high prevalence of collaboration (5) over coexistence (1) and cooperation (1). For the remaining articles, reading the full text made it possible to understand in depth how the chosen task was executed and how the human operator and the robot interacted: among the deduced types of interaction there is a prevalence of coexistence (3), followed by collaboration (2). In one case it was not possible to determine the type of interaction. Overall,
collaboration is the most common interaction in the case studies analyzed in the papers considered (Fig. 2b). Another
aspect considered was the reference to a taxonomy existing in the literature. Only 3 papers out of 13 [15, 21, 22] use
an existing taxonomy to classify the errors occurring during task execution in the considered case study.
Fig. 2. (a) Articles and document type distribution over the years; (b) Count by type of interaction specified and deduced.
Regarding the type of task carried out in the papers that included a case study, assembly is the most common type of task (62%), followed by pick-and-place (15%) and by quality control and disassembly (8% each), as can be seen in Fig. 3. It is interesting to note that assembly is the task type most experimented with in the case studies, underlining its great interest for research. If we then cross-reference these results with those relating to the type of interaction,
it is evident that assembly is the preferred task for each type of interaction and this is particularly true for collaboration,
where we find it in 5 out of 7 case studies [5, 9, 17, 21, 23]. This sets the path for future research. It is important to
note that the count of interaction types reported in Fig. 3 includes cases where the type is declared and cases where
the type is deduced from the characteristics of the case study. Furthermore, in one case, the type of interaction was
not specified and it was not possible to deduce it, nor was the type of task, as it gave a general overview of human-
robot interaction and presented a case study that only aimed to highlight four different types of human error [24].
Fig. 3. Type and number of tasks identified for each type of interaction.
Additionally, to address RQ1, attention was focused on the studies that analyzed the interplay between human error, the work environment, and the human operator. In particular, factors such as safety, quality, and performance are affected by
human error, while task complexity, mental and physical fatigue, and the learning phenomenon have the power to
increase or decrease human error. Specifically, 7 of the 13 papers highlighted the relationship between human error
and the factors listed below:
• Safety: human errors can worsen safety in the industry, causing collisions or hazardous events [15, 19, 25].
• Task complexity: as task complexity increases, the number of errors committed by humans rises [5].
• Product quality: the assessment of quality can be conducted by evaluating human error rate. So, a higher error rate
leads to a lower product quality [19].
• Mental and physical fatigue: operator fatigue, both physical and mental, affects their level of attention, increasing
the chance of error [17].
• Learning phenomenon: comparing a task performed completely manually with the same task performed in collaboration with a robot, the number of errors was observed to be greater in the second case, demonstrating how the presence of the robot affects human emotions and perceptions [18].
• Performance: one way to measure the operator's performance is to count the number of errors committed. Thus, as
the number of human errors increases, the measured performance worsens [26].
Finally, to address RQ2, specific and generic error classes and their causes were identified through the SLR. Following the lead of Hollnagel (1993) [14], each human action was classified as either correct or incorrect, with the latter being considered an error. Additionally, several studies found in the literature provided classifications of various error types
or listed those observed in their case studies [15, 17, 19, 21]. These classifications were used to build the detailed
taxonomy described in the following paragraph.
3.2. Error taxonomy
The proposed taxonomy focuses mainly on error phenotype but also gives a brief description of error genotype –
which is the ensemble of error causes – and error consequences. Starting from the classification of phenotypes, the
next step consists of classifying these incorrect actions as follows (Fig. 4), based also on the specific error classes
found through the SLR:
• Timing: a timing error is a temporal misalignment between the human operator and the robot. Given the duration of a certain task, if the operator cannot keep up with the robot, a delay is generated. This class of error can be further decomposed into:
o Premature start of action: the action starts earlier than planned.
o Delayed start of action: the action starts later than planned.
o Premature end of action: the action ends earlier than planned.
o Delayed end of action: the action ends later than planned.
• Sequence: a sequence error is a deviation from the sequence established for the production of a specific product.
This class of error can be further decomposed into:
o Insertion: it consists of the inclusion of an unscheduled action in the sequence, with a consequent increase
in lead time.
o Omission: it consists of the exclusion of a scheduled action, resulting in an incorrect final product.
o Substitution: it consists of exchanging one action for another, resulting in an incorrect final product.
o Reversing: it consists of inverting the order of two contiguous actions, resulting in an incorrect final
product.
• Execution: an execution error is an erroneous action that occurs during the physical realization of the final product.
This class of error can be further decomposed into:
o Incorrect assembly: it consists of the wrong execution of the final product assembly.
o Wrong positioning: it consists of the wrong positioning of components during production or assembly
phases. This subclass of error can be broken down into:
▪ Wrong location: while interacting with a robot, the operator puts a component in the wrong place,
hindering the robot (e.g., the robot does not find the component needed to perform the next operation).
▪ Wrong orientation: while interacting with a robot, the operator puts a component in the wrong
orientation, hindering the robot (e.g., the robot cannot execute the pick-and-place).
o Wrong input to the robot: an incorrect input error occurs when the operator should be providing the robot
with some type of information to continue the production/assembly sequence but fails to do so.
o Collision: a collision error is determined by an operator deviation from the intended sequence of
movements and operations, engaging in behavior that the robot cannot predict, generating collisions.
• Handling: a handling error occurs when the operator does not properly handle a finished product or part of it. This
class of error can be further decomposed into:
o Dropping: it consists of the operator's loss of grip on the component or finished product, e.g. due to a lack
of force.
o Damage: it consists of damage by the operator to the component or finished product, e.g. due to an excess
of force.
• Selection: a selection error occurs when the operator chooses the wrong element in the production/assembly
process. This class of error can be further decomposed into:
o Wrong part: it consists of the operator choosing the wrong component when producing or assembling the
final product.
o Wrong fasteners: it consists of the operator choosing the wrong hardware (e.g., screws, bolts, nuts) when
producing or assembling the final product.
• Unspecified event: when an error cannot be classified into any of the previously described classes, it is defined as an unspecified event.
Fig. 4. Human Error Taxonomy in Human-Robot Interaction Contexts Scheme.
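To make the hierarchy of Fig. 4 explicit, the classes listed above can be transcribed as a nested mapping from error classes to their subclasses. The dictionary form and the small helper function below are only an illustrative rendering of the taxonomy, not part of the paper.

```python
# Plain transcription of the proposed error-phenotype taxonomy (Fig. 4) as a
# nested mapping: class -> subclass -> further decomposition (empty dict = leaf).
HE_TAXONOMY = {
    "Timing": {
        "Premature start of action": {},
        "Delayed start of action": {},
        "Premature end of action": {},
        "Delayed end of action": {},
    },
    "Sequence": {
        "Insertion": {},
        "Omission": {},
        "Substitution": {},
        "Reversing": {},
    },
    "Execution": {
        "Incorrect assembly": {},
        "Wrong positioning": {"Wrong location": {}, "Wrong orientation": {}},
        "Wrong input to the robot": {},
        "Collision": {},
    },
    "Handling": {"Dropping": {}, "Damage": {}},
    "Selection": {"Wrong part": {}, "Wrong fasteners": {}},
    "Unspecified event": {},
}


def leaf_classes(tree, prefix=()):
    """Yield the full path of every leaf class, e.g. ('Execution', 'Wrong positioning', 'Wrong location')."""
    for name, children in tree.items():
        path = prefix + (name,)
        if children:
            yield from leaf_classes(children, path)
        else:
            yield path
```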
All these errors may have different causes, among which it is possible to distinguish between first-order causes and second-order causes [21]. Human error is produced by first-order causes, which include the operator's lack of attention or vigor, as well as a lack of skill, understood as inadequate knowledge or a lack of experience with, and trust in, the task to be performed. First-order causes are in turn generated by second-order causes, which concern the condition of the operator: they include the operator's inattentiveness, possible haste in performing the task, or poor training.
Once the human errors have been identified and their causes have been investigated, it is necessary to explore their
possible consequences. The study of human error theory points to three factors that can suffer consequences: productivity, quality, and safety. When a human error occurs, productivity can be reduced by stopping the production or assembly line, causing an increase in lead time, while quality can be reduced by compromising the functionality of the final product. Safety can be affected too, since a wrong movement of the operator could cause a collision with the robot. These consequences can be classified, following [18], by considering
three different levels of severity:
• Low severity: a low severity error causes a small decrease in productivity with no consequences on the quality of
the final product or safety.
• Medium severity: a medium severity error causes a medium decrease in productivity with low consequences on
the quality of the final product and no impact on safety.
• High severity: a high severity error causes a high decrease in productivity with substantial consequences on the
quality of the final product and the impairment of safety.
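As an illustration only, these three qualitative severity levels can be read as a simple decision rule on the observed impact of an error. The impact labels used below are assumptions of this sketch and are not values defined in the paper.

```python
# Rough sketch of the three-level severity rule described above, following [18].
# The impact labels ("none", "low", "medium", ...) are illustrative assumptions.
def error_severity(productivity_loss: str, quality_impact: str, safety_impaired: bool) -> str:
    """Map the qualitative impact of an error onto low / medium / high severity."""
    if safety_impaired or quality_impact == "substantial" or productivity_loss == "high":
        return "high"
    if quality_impact == "low" or productivity_loss == "medium":
        return "medium"
    return "low"  # small productivity decrease, no quality or safety consequences
```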
The causes and consequences presented here are general and cannot be associated with a particular error phenotype, since multiple kinds of error can originate from the same cause and lead to the same consequences.
4. Discussions and conclusions
Human error remains a fundamental challenge in HRI, impacting the efficiency and safety of these systems. The
complexities of human-robot interaction often lead to unforeseen problems, stemming from the inherent
unpredictability of human behavior and the limitations of current robotic systems to adapt seamlessly to human
variability. This dichotomy underscores the need to understand and mitigate human error to ensure the effective
implementation of HRI technologies.
Based on the results of the SLR, problems caused by human error were identified. In particular, the type of interaction
and task were evaluated, as well as references to other taxonomies. The content analysis of the selected papers allowed
the identification of current research gaps and elements for the development of a new and more complete human error
taxonomy.
The proposed taxonomy starts from the incorrect action and identifies six possible connotations of error: timing,
sequence, execution, handling, selection, or unspecified event. Each of them is then decomposed into more specific
types of error to include all possible erroneous situations. The main limitation of this study is the reference to a single
database (Scopus): in future work, it may be interesting to extend the research to other databases and to improve the
search string. Furthermore, areas such as collision and safety were excluded due to their generic nature, which would
have resulted in an even larger sample of articles to analyse. However, these topics may include articles discussing
the classification of human error and may offer new categories not yet included in the proposed taxonomy.
In general, it is possible to imagine the future inclusion of new types of errors, to enrich the proposed taxonomy
and make it as complete as possible, in order to facilitate the identification of human errors in human-robot interaction.
Once the errors have been identified, it will be interesting to understand their causes – e.g., inexperience or lack of attention of the operator [27] – as well as the consequences that result from them. Beyond quality and time-delay problems, human errors can also affect the general safety of the industrial environment [7]. Indeed,
the majority of present-day catastrophes result from a combination of numerous minor events, system failures, and
human errors which, on their own, would be insignificant but, when occurring in a specific sequence of circumstances
and actions, can lead to irreversible situations [28]. It is also reasonable to assume that the severity of the consequences
of human error may vary from one sector of industry to another. It would be interesting to explore all these aspects in
more detail in a systematic study based on the proposed taxonomy, in order to better understand the existing
relationships between human errors, their effects, and the environment or industrial sector.
In addition, further studies could focus on the development of performance indicators based on the measurement
of human error in order to define a standard approach in such a subjective domain.
Finally, since the SLR has shown a growing interest in human-robot collaboration with a focus on assembly tasks,
it could be interesting to design a collaborative cell to evaluate this type of interaction and human error, and to set up an experimental campaign that could enrich and improve the proposed taxonomy.
Acknowledgements
This research work is part of the activities carried out in the context of the BALANCE project (Human-roBot
performAnce evaLuAtioN by digital teChnologiEs), Code. P20229BPCL. CUP D53D23018100001,
B53D23026880001, funded by the European Union – NextGenerationEU Plan, component M4C2, investment 1.1,
through the Italian Ministry for Universities and Research MUR “Bando PRIN 2022 - D.D. 1409 del 14-09-2022”.
Appendix A
Table A.1. Selected papers analysis in detail.
Ref. | Title | Year | Type of interaction | Interaction deduced | Type of task | Type of error | Influenced factors
[18] | A Human Error Analysis in Human–Robot Interaction Contexts: Evidence from an Empirical Study | 2023 | Coexistence | – | Assembly | Generic + Specific | Learning phenomenon
[17] | Manual assembly and Human–Robot Collaboration in repetitive assembly processes: a structured comparison based on human-centered performances | 2023 | Collaboration | – | Assembly | Generic | Mental and physical fatigue
[19] | Visual quality and safety monitoring system for human-robot cooperation | 2023 | Cooperation | – | Assembly | Specific | 1. Assessment of quality; 2. Safety
[26] | Shared-Control Teleoperation Paradigms on a Soft-Growing Robot Manipulator | 2023 | Not specified | Coexistence | Pick-and-place | Generic | Performance
[5] | Advanced workstations and collaborative robots: exploiting eye-tracking and cardiac activity indices to unveil senior workers’ mental workload in assembly tasks | 2023 | Collaboration | – | Assembly | Generic + Specific | Difficulty
[29] | Lean back or lean in? Exploring social loafing in human–robot teams | 2023 | Not specified | Coexistence | Quality control | Specific | –
[20] | Towards Mutual-Cognitive Human-Robot Collaboration: A Zero-Shot Visual Reasoning Method | 2023 | Collaboration | – | Disassembly | Specific | Safety
[9] | Towards the modelling of defect generation in human-robot collaborative assembly | 2023 | Collaboration | – | Assembly | Specific | –
[15] | Testing Robot System Safety by Creating Hazardous Human Worker Behavior in Simulation | 2022 | Not specified | Collaboration | Pick-and-place | Generic | –
[22] | Self-perception of Interaction Errors Through Human Non-verbal Feedback and Robot Context | 2022 | Not specified | Not specified | Not specified | Generic | –
[30] | Recovering from Assembly Errors by Exploiting Human Demonstrations | 2018 | Not specified | Coexistence | Assembly | Specific | –
[21] | Modeling operator behavior in the safety analysis of collaborative robotic applications | 2017 | Collaboration | – | Assembly | Generic | Safety
[23] | Systematic analysis of video data from different human–robot interaction studies: a categorization of social signals during error situations | 2015 | Not specified | Collaboration | Assembly | Generic | –
References
[1] Wan, Yuhui, and Chengxu Zhou. (2023) “Predicting Human-Robot Team Performance Based on Cognitive Fatigue.” in ICAC 2023 - 28th
International Conference on Automation and Computing Institute of Electrical and Electronics Engineers Inc.;
[2] Li, Chen, Aleksandra Kaszowska, and Dimitrios Chrysostomou. (2023) “A Multimodal Attention Tracking in Human-Robot Interaction in
Industrial Robots for Manufacturing Tasks.” in ICAC 2023 - 28th International Conference on Automation and Computing Institute of Electrical
and Electronics Engineers Inc.;
[3] Caiazzo, Carlo, Marija Savkovic, Milos Pusica, Djordje Milojevic, Maria Chiara Leva, and Marko Djapan. (2023) “Development of a
Neuroergonomic Assessment for the Evaluation of Mental Workload in an Industrial Human–Robot Interaction Assembly Task: A Comparative
Case Study.” Machines 11 (11).
[4] Koppenborg, Markus, Peter Nickel, Birgit Naber, Andy Lungfiel, and Michael Huelke. (2017) “Effects of movement speed and predictability
in human–robot collaboration.” Human Factors and Ergonomics In Manufacturing 27 (4): 197–209.
[5] Pluchino, Patrik, Gabriella F.A. Pernice, Federica Nenna, Michele Mingardi, Alice Bettelli, Davide Bacchin, et al. (2023) “Advanced
workstations and collaborative robots: exploiting eye-tracking and cardiac activity indices to unveil senior workers’ mental workload in
assembly tasks.” Frontiers in Robotics and AI 10 .
[6] Reason J. (2000) “Human error: models and management.” The Western journal of medicine 172 (6): 393–6.
[7] Di Pasquale, Valentina, Salvatore Miranda, Walther Patrick Neumann, and Azin Setayesh. (2018) “Human reliability in manual assembly
systems: a Systematic Literature Review.” IFAC-PapersOnLine 51 (11): 675–80.
[8] Rinaldi, Marta, Mario Caterino, and Marcello Fera. (2023) “Sustainability of Human-Robot cooperative configurations: Findings from a case
study.” Computers and Industrial Engineering 182 .
[9] Puttero, Stefano, Elisa Verna, Gianfranco Genta, and Maurizio Galetto. (2023) “Towards the modelling of defect generation in human-robot
collaborative assembly.” in Procedia CIRP Elsevier B.V.; p. 247–52.
[10] Wang, Xiaodan, Rossitza Setchi, and Abdullah Mohammed. (2022) “Modelling Uncertainties in Human-Robot Industrial Collaborations.” in
Procedia Computer Science Elsevier B.V.; p. 3646–55.
[11] Shin, Dongmin, Richard A. Wysk, and Ling Rothrock. (2006) “Formal model of human material-handling tasks for control of manufacturing
systems.” IEEE Transactions on Systems, Man, and Cybernetics Part A:Systems and Humans 36 (4): 685–96.
[12] Bayonne, Enrique, Juan A. Marin-Garcia, and Rafaela Alfalla-Luque. (2020) “Partial least squares (PLS) in operations management research:
Insights from a systematic literature review.” Journal of Industrial Engineering and Management 13 (3): 565–97.
[13] Kaiser, L., A. Schlotzhauer, and M. Brandstötter. (2018) “Safety-related risks and opportunities of key design-aspects for industrial human-
robot collaboration.” in International Conference on Interactive Collaborative Robotics Springer International Publishing; p. 95–104.
[14] Hollnagel, Erik. (1993) “The phenotype of erroneous actions.” International Journal of Man-Machine Studies 39: 1–32.
[15] Huck, Tom P., Christoph Ledermann, and Torsten Kroger. (2022) “Testing Robot System Safety by Creating Hazardous Human Worker
Behavior in Simulation.” IEEE Robotics and Automation Letters 7 (2): 770–7.
[16] Fields, Robert E. (2001) “Analysis of erroneous actions in the design of critical systems.” Science (January). Available from:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.6932&rep=rep1&type=pdf
[17] Gervasi, Riccardo, Matteo Capponi, Luca Mastrogiacomo, and Fiorenzo Franceschini. (2023) “Manual assembly and Human–Robot
Collaboration in repetitive assembly processes: a structured comparison based on human-centered performances.” International Journal of
Advanced Manufacturing Technology 126 (3–4): 1213–31.
[18] Caterino, Mario, Marta Rinaldi, Valentina Di Pasquale, Alessandro Greco, Salvatore Miranda, and Roberto Macchiaroli. (2023) “A Human
Error Analysis in Human–Robot Interaction Contexts: Evidence from an Empirical Study.” Machines 11 (7).
[19] Kozamernik, Nejc, Janez Zaletelj, Andrej Košir, Filip Šuligoj, and Drago Bračun. (2023) “Visual quality and safety monitoring system for human-robot cooperation.” International Journal of Advanced Manufacturing Technology 128 (1–2): 685–701.
[20] Li, Shufei, Pai Zheng, Liqiao Xia, Xi Vincent Wang, and Lihui Wang. (2023) “Towards Mutual-Cognitive Human-Robot Collaboration: A
Zero-Shot Visual Reasoning Method.” in IEEE International Conference on Automation Science and Engineering IEEE Computer Society;
[21] Askarpour, M., D. Mandrioli, M. Rossi, and F. Vicentini. (2017) “Modeling operator behavior in the safety analysis of collaborative robotic
applications.” in Computer Safety, Reliability, and Security: 36th International Conference, SAFECOMP Trento, Italy: Springer International
Publishing; p. 89–104.
[22] Loureiro, F., J. Avelino, P. Moreno, and A. Bernardino. (2022) “Self-perception of Interaction Errors Through Human Non-verbal Feedback and Robot Context.” in International Conference on Social Robotics Springer Nature Switzerland; p. 475–487.
[23] Giuliani, Manuel, Nicole Mirnig, Gerald Stollnberger, Susanne Stadler, Roland Buchner, and Manfred Tscheligi. (2015) “Systematic analysis
of video data from different human–robot interaction studies: a categorization of social signals during error situations.” Frontiers in Psychology
6 (July).
[24] Cavallo, Filippo, John-John Cabibihan, Laura Fiorini, Alessandra Sorrentino, Hongsheng He, Xiaorui Liu, et al., editors. (2022) “Social Robotics.” Cham: Springer Nature Switzerland (Lecture Notes in Computer Science; vol. 13818). Available from: https://link.springer.com/10.1007/978-3-031-24670-8
[25] Tonetta, Stefano, Erwin Schoitsch, and Friedemann Bitsch, editors. (2017) “Computer Safety, Reliability, and Security.” Cham: Springer International Publishing (Lecture Notes in Computer Science; vol. 10488). Available from: http://link.springer.com/10.1007/978-3-319-66266-4
[26] Stroppa, Fabio, Mario Selvaggio, Nathaniel Agharese, Ming Luo, Laura H. Blumenschein, Elliot W. Hawkes, et al. (2023) “Shared-Control
Teleoperation Paradigms on a Soft-Growing Robot Manipulator.” Journal of Intelligent and Robotic Systems: Theory and Applications 109 (2).
[27] Askarpour, M., D. Mandrioli, M. Rossi, and F. Vicentini. (2017) “Modeling operator behavior in the safety analysis of collaborative robotic
applications.” in Computer Safety, Reliability, and Security: 36th International Conference, SAFECOMP Trento, Italy: Springer International
Publishing; p. 89–104.
[28] Di Pasquale, Valentina, Salvatore Miranda, Raffaele Iannone, and Stefano Riemma. (2015) “A Simulator for Human Error Probability Analysis
(SHERPA).” Reliability Engineering & System Safety 139 : 17–32.
[29] Cymek, Dietlind Helene, Anna Truckenbrodt, and Linda Onnasch. (2023) “Lean back or lean in? Exploring social loafing in human–robot
teams.” Frontiers in Robotics and AI 10 .
[30] Muxfeldt, Arne, and Jochen J. Steil. (2018) “Recovering from Assembly Errors by Exploiting Human Demonstrations.” in Procedia CIRP
Elsevier B.V.; p. 63–8.