Article

Cognitive Reliability and Error Analysis Method–CREAM

Authors:
  • Resilient Systems Plus

... A few studies have discussed qualitatively how PIFs affect each other (e.g., CREAM [8]). Others have additionally tried to model the mutual interdependencies between different PIFs, but explaining the outcomes in very complex applications requires excessive effort from analysts (e.g., IDAC) [9]. ...
... The model expresses that the presence of a specific PSF might adjust the impacts of other PSFs and HEPs. Furthermore, a "+" sign denotes a direct effect (increase-increase and decrease-decrease), whereas a "-" sign denotes an inverse effect (increase-decrease and decrease-increase) [8]. ...
... Steps (second section of worksheet): Step 7, loading the data into the software and constructing the linear program; Step 8, determining the weight of each task (w_j) and each factor's distance from the ideal point (t_i). ...
... Subsequently, it helps provide an assessment of the consequences of human performance. It helps an analyst determine conditions that affect cognitive reliability and develop modifications to improve these conditions (Hollnagel, 1998). CREAM is based on a distinction between competence and control. ...
... Step 3: Find the combined CPC score, which is represented as a triplet [Σreduced, Σnot significant, Σimproved]. CREAM provides a mapping from descriptors to CPC scores: −1 for CPC descriptors that reduce performance, 0 for descriptors that do not significantly influence performance, and +1 for descriptors that improve performance. For the common performance conditions of the furnace start-up task listed in Table 2 (Hollnagel, 1998), and using information available in CREAM (Hollnagel, 1998), the CPC score is [3, 5, 1]. That means that 3 CPCs reduce human performance, 5 CPCs have no significant effect, and 1 CPC improves performance. ...
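The triplet counting in Step 3 can be sketched in a few lines of Python. This is an illustrative helper, not part of CREAM itself; the function name and the example ratings are assumptions:

```python
# Minimal sketch of CREAM basic-method CPC scoring. Each of the nine
# common performance conditions (CPCs) is rated -1 (reduces performance),
# 0 (not significant), or +1 (improves performance); the combined score
# is the triplet [n_reduced, n_not_significant, n_improved].

def cpc_triplet(ratings):
    """Count CPC descriptor effects into the combined CPC score triplet."""
    if len(ratings) != 9:
        raise ValueError("CREAM's basic method rates nine CPCs")
    return [sum(1 for r in ratings if r == -1),
            sum(1 for r in ratings if r == 0),
            sum(1 for r in ratings if r == 1)]

# Furnace start-up example from the text: three reducing descriptors,
# five not significant, one improving.
print(cpc_triplet([-1, -1, -1, 0, 0, 0, 0, 0, 1]))  # [3, 5, 1]
```

In the full method, the resulting pair (Σreduced, Σimproved) is then located on Hollnagel's control mode map to select a control mode and its associated HEP interval.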
Chapter
With advancements in technology and sophistication, the role of humans in the process industries has transformed from predominantly manual operations to one primarily involving monitoring, diagnosis, and prognosis. These tasks are cognitively challenging as they involve the acquisition and processing of large amounts of information. Human errors during such operations can be catastrophic. In this chapter, we discuss the role of human factors in understanding the interaction between humans and other elements of the work system. We also discuss a framework to systematically account for human errors. Subsequently, we provide a detailed discussion on how human failures can be quantified by using human reliability assessment techniques. Finally, given the changing nature of the roles of individuals, owing to digitalization, we discuss the role of physiological measurements in guiding the application of human factors principles for evaluating and enhancing human performance.
... After the 1990s, cognitive reliability models were established based on analysis of the cognitive process, with researchers trying to describe human error mechanisms by analyzing factors such as environmental conditions, the operator's own state, and equipment status. Collectively, the methods from this period are called second-generation HRA methods, and mainly include ATHEANA (A Technique for Human Event Analysis) [17], CREAM (Cognitive Reliability and Error Analysis Method) [18], MERMOS [19], and so on. ...
... The technique for human error rate prediction (THERP) [30] mainly considered two stages of human reliability: monitoring and execution. The cognitive reliability and error analysis method (CREAM) [18] divided cognitive function into four stages: observation, interpretation, planning, and execution. Each type of cognitive function was divided into several failure modes. ...
... Human-computer interface [18, 32-34]: for example, the interactivity (input and output modes) and availability of the human-computer interface, the rationality of its layout, and the salience of key information. ...
Article
Full-text available
HRA (Human Reliability Analysis) can be seen as a symmetric problem, as it is mainly reflected in the two aspects of failure and success. Human error is the most common cause of accidents in industrial systems; furthermore, an astronaut works in a very complex environment where, coupled with weightlessness, human error is easy to commit. For this reason, this paper took the human-computer interface in a spacecraft cabin as the background and, based on the literature, a questionnaire inquiry, and the division of the astronaut's interaction with the human-computer interface into three cognitive processes, proposed a system of human reliability influencing factors for the different cognitive phases, a task analysis tree with a symmetry of success and failure, and an HRA model with symmetry of failure and success based on the cognitive stages, Game Theory, and the Fuzzy Center of Gravity Method, obtaining the weights of the influencing factors for the three cognitive stages. In a simulated experiment, the trend of the error probability curves shows the rationality of the human reliability method. Finally, an example is illustrated, and its analysis process demonstrates that an HRA model with symmetry provides a feasible analysis process and method for the cognitive reliability of human-computer interface interaction in a spacecraft cabin. The research achievements in this paper can provide theoretical guidance for improving root cause analysis of human error, an analysis basis for improving the level of influencing factors, and an HRA method based on cognitive stages for the human-computer interaction process in a spacecraft cabin.
... From Reason (1990), human error is taken as a universal term comprising all those occasions in which a planned sequence of mental or physical activities fails to achieve its intended outcome, and these failures cannot be attributed to the intervention of some chance agency. Hollnagel (1998) defined human error as an erroneous action that fails to produce the expected result and/or produces an unwanted consequence. In Dhillon's (2017) definition, human error is the failure to execute a stated task that could result in interruption of scheduled operations or damage to property and equipment. ...
... Various human error taxonomies have been proposed. Three dominant taxonomies are reviewed in this contribution: Rasmussen's (1986) skill-, rule-, and knowledge-based errors; Reason's (1990) slips, lapses, mistakes, and violations; and Hollnagel's (1998) phenotypes and genotypes. ...
... The CPC score can be calculated as [Σreduced, Σimproved]. In this case, human performance reliability is determined with the control mode map (Hollnagel, 1998). ...
Article
Full-text available
The role played by humans is becoming more and more important as the proportion of human-related accidents increases in industry and traffic. Human error taxonomies and their applications in the driving context improve the understanding of human error mechanisms in situated driving contexts. In previous works, the authors provided a human performance reliability score (HPRS) which can be applied to driving data using a modified fuzzy-based CREAM (cognitive reliability and error analysis method) approach. The data clustering approach FN-DBSCAN (fuzzy neighborhood density-based spatial clustering of applications with noise) with a genetic algorithm is applied to automatically generate membership functions characterizing driving behaviors individually. However, the mapping of driving behaviors and human error mechanisms to the corresponding HPRS numbers was not analyzed in those works. In this contribution, the classification of human driver error and its application in the driving context is reviewed. The driving behaviors and the different human errors, with continuously calculated values, are analyzed to investigate what really happens. Human driver reliability is evaluated especially in situated context, that is, in dynamically changing situations (on a second timescale). The newly developed approach provides a dynamic measure and therefore allows critical situations to be identified dynamically during operation in real time. As an example, the supervision of an interacting human driver is shown.
... In Hollnagel's book on the Cognitive Reliability and Error Analysis Method (CREAM) (Hollnagel 1998), relationships between the factors/classification groups were proposed. It was suggested that this could be achieved by noting that to each consequent described by a classification group there must correspond one or more antecedents from (an)other classification group(s). ...
... It was suggested that this could be achieved by noting that to each consequent described by a classification group there must correspond one or more antecedents from (an)other classification group(s). An (incomplete) example table was also proposed that summarised the relationships between the antecedents and consequents (Hollnagel 1998). Entries within the table show (forward) links: the columns describe the antecedents, with the factors listed in the top row, while the rows describe the consequents, with the factors listed in the left column. ...
... Entries within the table show (forward) links: the columns describe the antecedents, with the factors listed in the top row, while the rows describe the consequents, with the factors listed in the left column. This table was then completed by Morais (Morais, Estrada-Lugo, et al. 2022) using the classification scheme provided by Hollnagel (1998). The completed table provides an expert's opinion on the links between the 53 performance shaping factors proposed with CREAM; from this, the factors can be grouped as stated before. ...
Conference Paper
Full-text available
Within the field of human reliability analysis (HRA), there is an acknowledged demand to move further towards data driven models. There have been several independent research projects focused on gathering the required empirical data, to support existing theoretical models used in HRA, as well as allow the use of probabilistic tools, such as Bayesian Networks, to model such data. However, with regards to Bayesian Networks, there is a reliance upon expert elicitation to design the structure of the network, that is, the causal links between the considered factors are determined by an expert, with the data used only to estimate the conditional probability tables. This work aims to provide a methodology/framework to elicit causal links between performance shaping factors from data, producing a HRA model constructed entirely from data, with the ability to integrate the knowledge provided by experts. The Multi-Attribute Technological Accidents Dataset (MATA-D) has been used as the data source, therefore the model is produced under a framework based on the Cognitive Reliability and Error Analysis Method (CREAM). This model is produced through a combination of information theory and structure learning algorithms for Bayesian Networks. The proposed model/methodology aims to support experts in their evaluation of human error probability, and reveal causal links between performance shaping factors, that may not have otherwise been considered.
... In the 1980s, Hollnagel (in Scandinavia) and Woods (in the US) developed their own perspectives on cognition, human error, and safety and accident models in a CSE context originally created by Rasmussen (Hollnagel and Woods, 1983; Woods and Roth, 1988; Hollnagel, 1993, 1998; Woods, Johannesen, Cook and Sarter, 1994; Woods, 1988). The two authors are conceptually close, and their collaboration leads to the proposition of Joint Cognitive Systems (JCS) in the mid-2000s (Hollnagel and Woods, 2005), a development that started in the early 1980s (Hollnagel and Woods, 1983). ...
... Based on this idea, Hollnagel developed his own HRA methodology in the late 1990s, called CREAM (Hollnagel, 1998). This he followed with a second one in the early 2000s, called FRAM, due to his dissatisfaction with the first, which he considered to overly mimic the engineering mindset (Hollnagel, 2004). ...
... One central topic connecting many of them is the conceptualisation of complexity in relation to causality, emergence and analogies. Since SII (Hollnagel, 2014) is based on previous writings (Hollnagel, 2004, 2009), Leveson is criticising a much longer thread of developments, going back to the origins of RE, perhaps even some aspects of CSE, and Hollnagel's approach to HRA (Hollnagel, 1993, 1998). What propelled Leveson to co-edit the RE book with CSE authors in the mid-2000s (Hollnagel, Woods, Leveson, 2006) in the first place might have been their common view of what they considered to be the limitations of Reason's model, with the help of Rasmussen. ...
Article
Over the past two decades, the ‘new view’ has become a popular term in safety theory and practice. It has however also been criticised, provoking division and controversy. The aim of this article is to clarify the current situation. It describes the origins, ambiguities and successes of the ‘new view’ as well as the critiques formulated. The article begins by outlining the origins of this concept, in the 1980s and 1990s, from the cognitive (system) engineering (CSE) school initiated by Rasmussen, Hollnagel and Woods. This differed from Reason’s approach to human error in this period. The article explains how Dekker, in the early 2000s, translates ideas from the CSE school to coin the term ‘new view’, while also developing, shortly after, an argument against Reason’s legacy that was more radical and critical than his predecessors’. Secondly, the article describes the ambiguities associated with the term ‘new view’ because of the different programs that have derived from CSE (Resilience Engineering – RE then Safety II, Safety Differently, Theory of Graceful Extensibility). The text identifies three programs by different thinkers (methodological, formal and critical) and Dekker’s three eclectic versions of the ‘new view’. Thirdly, the article discusses the successes of the CSE and RE school, showing how it has strongly resonated with many practitioners outside the academic world. Fourthly, the critiques raised within the field of human factors and system safety but also from different traditions (e.g., system safety engineering with Leveson, sociology of safety with Hopkins) are introduced, and discussed.
... First-generation HRA techniques, such as the technique for human error rate prediction (THERP) (Swain and Guttmann 1983), the human error assessment and reduction technique (HEART), the success-likelihood index method (SLIM) (Embrey et al. 1984), and human cognition reliability (Hannaman, Spurgin, and Lukic 1985), focus specifically on work properties in HEP estimation and less on the effects of environment and situation (Abrishami et al. 2020). Second-generation approaches, such as the cognitive reliability and error analysis method (Hollnagel 1993) and A Technique for Human Error Analysis (Barriere et al. 2000), were developed to improve on the design of the first generation. In second-generation methods, operator cognition and context are seen as important contributors to HEP. ...
... In view of the extensive human reliability analysis, crew performance throughout the preparation of the ship for navigation is found to be unsatisfactory. According to the contextual control model and probability interval (Hollnagel 1998), the choice of action is essentially random. Although the crew performance follows a plan, there would be some possible deviations due to observation and execution errors. ...
Article
Preparation for a sea voyage is one of the fundamental aspects of navigation. Several complexities are involved in preparing the ship for navigation due to the nature of maritime work. At this point, analysing human-related error is of paramount importance to ensure the safety of the ship and the crew. This paper describes the principles of a methodology, namely fuzzy-based shipboard operation human reliability analysis (SOHRA), to quantitatively perform human error assessment through the procedures of preparing the ship for navigation. While SOHRA (a marine-specific HRA approach) quantifies human error, the fuzzy logic deals with ambiguity and vagueness in the human error detection problem. The findings show that the total HEP (human error probability) is 1.49E-01 for preparing the ship for navigation. Consequently, the paper provides practical contributions to shore-based safety professionals, ship managers, and masters of the ship, since it performs a systematic human reliability assessment and enhances safety control levels in the operational aspect.
... • It might be useful to combine the FTA and the FRAM process (Toroody et al., 2016). However, the FTA may not be suitable for some complex and dynamic socio-technical systems such as human-centric maritime operations (Patriarca et al., 2020; Toroody et al., 2016; Praetorius & Kataria, 2016; Hollnagel, 1998). It can help to (Hollnagel, 1998): ...
Article
Full-text available
Abstract This article aims at systematically reviewing the entire collection of papers published on the development and application of the functional resonance analysis method (FRAM) in the last decade. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology has been utilized as a formal systematic literature review standard for data gathering. The analysis encompassed 47 documents devoted to this subject matter, systematically retrieved from the online database Scopus. The findings revealed the necessity of developing systemic safety assessment approaches to explain the performance variability of complex and dynamic socio-technical systems (for risk assessment or accident investigation). Indeed, it is crucial to rigorously assess performance variability throughout a safety appraisal, since unexpected performance variability can combine in undesirable ways and consequently poses a threat to safety and to human life. However, the FRAM process has some pros and cons, as discussed in this review. Consequently, other assessment methods exist to complement the FRAM process. Keywords: Functional Resonance Analysis Method (FRAM), Resilience Engineering (RE), Decision-Making, Strategic Asset Management (SAM), Risk Management (RM), Industry 4.0, Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)
... It has been developed to deliver representational modeling of human actions (HRA event trees) and the estimation of human error probability (HEP) [13]. The SLIM-MAUD method, according to [14], is based on the supposition that the failure probability associated with the performance of a task comes from a combination of PSFs (performance shaping factors) that includes characteristics of the individual, the environment, and the task. ...
... CREAM, the Cognitive Reliability and Error Analysis Method, basically consists of a number of groups that describe the phenotypes (error modes, manifestations) and genotypes of erroneous actions, where the latter refer to a fundamental distinction between person-, technology-, and organization-related causes [14]. ...
Article
Full-text available
This paper brings a new proposal for the management and control of consumable items in an electronics company at the Manaus Industrial Pole, using the concept of human reliability to achieve better efficiency in the activities carried out by the coating station operators, together with tools such as control charts, the Ishikawa diagram, 5W2H, and PDCA, which, when combined, lead to an understanding of the problem and a better management method for the current situation.
... Some methods attempt to assess the dependency. CREAM presents an adjustment strategy to deal with the synergy of PSFs (Hollnagel, 1998). PHOENIX develops a hierarchical PSF set and mentions the dependency among adjacent hierarchies (Ekanem et al., 2016). ...
... In most event-chain based HRA techniques, PSFs act as the basic quantification elements for acquiring human error event probabilities, where each PSF represents a specific aspect influencing the operator's performance and is assigned a corresponding grade score based on either subjective or objective evaluation results (Gertman et al., 2004; Hollnagel, 1998; Chang and Xing, 2016). Indeed, most HRA models fail to capture causal relationships between influential factors, which has been one of the most common criticisms over the years. ...
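As a concrete illustration of the grade-score style of quantification that the excerpt attributes to SPAR-H (Gertman et al., 2004), the nominal HEP is scaled by a composite PSF multiplier. The sketch below applies SPAR-H's published correction factor uniformly for simplicity, whereas SPAR-H itself invokes it only when three or more negative PSFs are present:

```python
def spar_h_hep(nominal_hep, psf_multipliers):
    """SPAR-H-style quantification sketch: the nominal HEP is scaled by
    the composite PSF multiplier, using the correction
    NHEP * S / (NHEP * (S - 1) + 1) that keeps the result below 1
    when several negative PSFs compound."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    return (nominal_hep * composite) / (nominal_hep * (composite - 1.0) + 1.0)

# Example: an action task (nominal HEP 1E-3) under two negative PSFs
# with multipliers 10 and 2 -> composite multiplier 20.
print(spar_h_hep(1e-3, [10.0, 2.0]))  # ~1.96E-02
```

With all multipliers at 1 (nominal conditions) the formula returns the nominal HEP unchanged, and even extreme composites stay below 1, which is the point of the correction term.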
Article
Full-text available
Safe operation is the foundation and guarantee of the sustainable development of the nuclear power industry. With the continuous improvement of digital nuclear power plant (NPP) design and construction, fewer and fewer accidents are caused solely by mechanical failure. Human factors (HFs) have become the most important cause of NPP accidents. By combining insights from the framework of the Human Factor Analysis and Classification System (HFACS) and introducing a System Dynamics approach, this paper proposes a human reliability model which identifies four safety archetypes illustrating HF risk mechanisms in NPP operation, captures the PSF interrelationships, and breaks through traditional independent PSF analysis. This model focuses on the path-dependence characteristics of the HF risk accumulation process in the form of an S-shaped risk-time curve, and the derived scenario simulations provide suggestions for stakeholders in the nuclear power field to better evaluate their decision-making on human reliability improvement from a medium- and long-term perspective.
... These PSFs are aspects of human behaviour and context that can affect human performance and are often used to derive human error probabilities (HEPs) and identify contributors to human performance. To predict human performance reliability, a contextual description should be provided, as discussions of what can happen in a particular situation should be based on a description of the specific circumstances or conditions (Hollnagel, 1998; Fujita and Hollnagel, 2004). It is reasonable, then, that human error probability can be affected by a characterisation of the context. ...
... The Cognitive Reliability and Error Analysis Method (CREAM) was first developed by Hollnagel (1998) and was used to predict human performance reliability. The human error probability can be determined directly from a characterisation of the context, based on a description of the specific circumstances or conditions (Fujita and Hollnagel, 2004). ...
Article
Emergency preparedness is of paramount importance in successful emergency responses at sea. Therefore, emergency drills are regularly conducted to maintain acceptable levels of emergency preparedness. However, it needs to be considered that emergency drill operations themselves include significant risks, and there is no evidence that these risks are appropriately considered when planning emergency drill operations. Human error is one of the main contributors to accidents during emergency drill procedures. The main question posed is how overall risk, including human errors, during an emergency drill can be correctly evaluated. This paper introduces a new hybrid approach based on the Standardised Plant Analysis Risk Human Reliability Analysis (SPAR-H) method combined with a fuzzy multiple attributive group decision-making method. The method provides a framework for evaluating specific scenarios associated with human errors and identifies contributors that affect human performance. Estimated human errors are utilised to assess human reliability using a new approach based on a system reliability block diagram. The rescue boat drill procedure for a man overboard is selected to illustrate the method. The findings of this research show each human error probability and its contributing factors per task. As a result, an overall reliability of 6.06E-01 was obtained for the rescue boat drill operation.
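The abstract's final step, turning per-task HEPs into an overall operation reliability via a reliability block diagram, can be sketched under the simplest assumption that the tasks form a series system, i.e. any single human error fails the drill (an illustrative assumption; the paper's block diagram may differ):

```python
def series_reliability(task_heps):
    """Overall reliability of tasks in series: the operation succeeds
    only if every task is performed without error."""
    reliability = 1.0
    for hep in task_heps:
        reliability *= (1.0 - hep)
    return reliability

# Example with three hypothetical per-task HEPs.
print(series_reliability([0.05, 0.1, 0.2]))  # ~0.684
```

Note how quickly series reliability degrades: each additional error-prone task multiplies in another factor below 1, which is why the drill's overall reliability (6.06E-01) can be far lower than any single task's success probability.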
... The idea of evaluating human reliability based on the "influencing factors-reliability" dependency is the basis of the second generation of HRA methods (Di Pasquale et al., 2013; Havlikova et al., 2015). The best-known method is CREAM (the cognitive reliability and error analysis method) (Hollnagel, 1998). This method uses four classes of human reliability and nine influencing factors called common performance conditions (CPCs). ...
... The best-known HRA method based on fuzzy rules is Fuzzy CREAM (Konstandinidou et al., 2006; Marseguerra et al., 2007). In this method, fuzzy rules are generated based on the CREAM diagram (Hollnagel, 1998), and a defuzzification procedure is used for point estimation of the probability of human error (Mamdani, 1974). The disadvantages of Fuzzy CREAM are as follows: ...
Article
Full-text available
This article offers a method for analyzing the reliability of a man-machine system (MMS) and ranking influencing factors based on a fuzzy cognitive map (FCM). The ranking of influencing factors is analogous to the ranking of system elements in the probabilistic theory of reliability. To approximate the "influencing factors-reliability" dependence, the relationship of variable increments is used, which ensures the sensitivity of the reliability level to variations in the levels of influencing factors. The novelty of the method lies in the fact that the expert values of the weights of the FCM graph edges (arcs) are adjusted based on the results of observations using a genetic algorithm. The algorithm's chromosomes are generated from the intervals of acceptable values of edge weights, and the selection criterion is the sum of squared deviations of the reliability simulation results from observations. The method is illustrated by the example of a multifactor analysis of the reliability of the "driver-car-road" system. It is shown that the FCM adjustment reduces the discrepancy between the reliability forecast and observations by almost half. Possible applications of the method include complex systems with vaguely defined structures whose reliability depends strongly on interrelated, expertly measured factors.
... H. Walker et al., 2010). However, as a driver becomes more accustomed to the vehicle, cognitive 'shortcuts' can likely be identified and behaviour moves more towards the rule and skill-based levels (Halbrügge, 2018;Hollnagel, 1998). Hence, it is unknown how the HMI should best support this transition. ...
Thesis
Full-text available
Partially automated vehicles are increasing in prevalence and enable drivers to hand over the vehicle’s longitudinal and lateral control to the automated system. However, at this partial level of automation, drivers are still required to continuously monitor the vehicle’s operation and take back control from the system at any time when required. The Society of Automotive Engineers (SAE) defines this as Level 2 automation, and consequently a number of design implications arise. To support the driver in the monitoring task, Level 2 vehicles today present a variety of information about sensor readings and operational issues to keep the driver informed, so appropriate action can be taken when required. However, existing research has shown that current Level 2 HMIs increase cognitive workload, leading to driver cognitive disengagement and hence increasing the risk to safety. Despite this knowledge, these Level 2 systems are available on the road today, and little is known about what information should be presented to drivers inside them. Hence, this doctorate aimed to deliver design recommendations on how HMIs can more appropriately support the driver in the use of a partially automated Level 2 (or higher) vehicle system. Four studies were designed and executed for this doctorate. Study 1 aimed to understand the information preferences of drivers in a Level 2 vehicle using semi-structured interviews. Participants were exposed to a 10-minute, Level 2 driving simulation. A total of 25 interviews were conducted for the first study. Using thematic analysis, two categories of drivers were developed: ‘High Information Preference’ (HIP) and ‘Low Information Preference’ (LIP). It was evident that the drivers' expectations of the partial automation capability differed, affecting their information preferences and highlighting the challenge of what information should be presented inside these vehicles.
Importantly, by defining these differing preferences, HMI designers can be more informed to design effective HMIs, regardless of the driver’s predisposition. Building on this, an Ideas Café public engagement event was designed for Study 2, implementing a novel methodology to understand factors of trust in automated vehicles. Qualitative data gathered from the 35 event attendees was analysed using thematic analysis. The results reaffirmed the importance of the information presented in automated vehicles. Based on these first two studies, it was evident that there was an opportunity to develop a more robust understanding of what information is required in a Level 2 vehicle. Information requirements were quantitatively investigated through two eye-tracking studies (Studies 3 and 4). Both used a novel three- or five-day longitudinal study design. A shortlist of nine types of information was developed based on the results from the first two studies, regulatory standards, and collaborations with Jaguar Land Rover experts. This was the first shortlist of its kind for automated vehicles. These nine information types were presented to participants, and eye-tracking was used to record their information usage during Level 2 driving. Study 3 involved 17 participants and displayed only steady-state scenarios. Study 4 involved 27 participants and introduced handover and warning events. Across both studies, information usage changed significantly, highlighting the methodological importance of longitudinal testing over multiple exposures. Participants increased their usage of information confirming the vehicle’s current-state technical competence. In comparison, usage decreased for future-state information that could help predict the future actions of the vehicle. By characterising the change in information usage, HMI designers can now ensure important information is designed appropriately.
Notably, the ‘Action Explanation’ information, which described what the vehicle was doing and why, was found to be consistently the most used information. To date, this type of information has not been observed on any existing Level 2 HMI. Results from all four studies were synthesised to develop novel design recommendations for the information required inside Level 2 vehicles, and how this should be adapted over time depending on the driver’s familiarity with the system and driving events. This doctorate has contributed novel design recommendations for Level 2 vehicles through an innovative methodological approach across four studies. These design recommendations can now be taken forward to design and test new HMIs that can create a better, safer experience for future automated vehicles.
... HRA has three main steps: recognizing fundamental operations, analyzing relevant tasks, and determining human error probability (HEP) [10]. Several methods have been used to assess the human contribution to accidents, including the Technique for Human Error Rate Prediction (THERP) [11], Simplified Plant Analysis Risk Human Reliability Assessment (SPAR-H) [12], Cognitive Reliability and Error Analysis Method (CREAM) [13], and Information, Decision and Action in Crew context (IDAC) ...
Article
Full-text available
Background One of the important actions for enhancing human reliability in any industry is assessing human error probability (HEP). The HEART technique is a robust tool for calculating HEP in various industries, but traditional HEART has some weaknesses owing to its reliance on expert judgment. For these reasons, a hybrid model is presented in this study that integrates HEART with the Best-Worst Method (BWM). Materials and Methods In this study, the blasting process in an iron ore mine was investigated as a case study. The proposed HEART-BWM was used to increase the sensitivity of the APOA calculation, and the HEP was then calculated using the conventional HEART formula. A consistency ratio was calculated using BWM. Finally, to verify HEART-BWM, the HEP was calculated by both traditional HEART and HEART-BWM. Results Based on the determined HEPs, the mean HEP in the iron ore blasting process was 2.57E-01. Checking the full blast of all the holes after the blasting sub-task was the most dangerous task, with the highest HEP value of 9.646E-01. On the other hand, obtaining a permit to receive and transport materials was the most reliable task, with an HEP of 8.54E-04. Conclusion The results showed good consistency for the proposed technique. Comparing the two techniques confirmed that BWM makes traditional HEART faster and more reliable by structuring the basic comparisons.
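The conventional HEART calculation referred to above can be sketched as follows. HEP is the generic task type's nominal probability multiplied, for each applicable error-producing condition (EPC), by ((EPC − 1) × APOA + 1), where APOA is the assessed proportion of affect that the study derives from BWM weights. All numerical values below are illustrative, not taken from the paper.

```python
def heart_hep(nominal_hep, epcs):
    """HEART human error probability.

    nominal_hep: nominal HEP of the generic task type.
    epcs: list of (epc_multiplier, apoa) pairs, with APOA in [0, 1]
          (here, the weights that BWM would supply).
    """
    hep = nominal_hep
    for epc, apoa in epcs:
        hep *= (epc - 1.0) * apoa + 1.0
    return min(hep, 1.0)  # probabilities are capped at 1

# Illustrative task: nominal HEP 0.003, two EPCs
# (multiplier x10 with APOA 0.4, multiplier x3 with APOA 0.2).
hep = heart_hep(0.003, [(10, 0.4), (3, 0.2)])
```

With these invented figures the result is 0.003 × 4.6 × 1.4 ≈ 1.93E-02, illustrating how a few strongly weighted EPCs can raise the nominal HEP by an order of magnitude.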
... In this way, SLI is found for each error mode or specific situation. Finally, SLI is calibrated to derive HEP [28,29]. The detailed application steps of the method are also presented in Section 2.2. ...
Article
Full-text available
The transportation of steel scrap cargoes in global trade has been increasing over the years. According to research, demand for steel scrap will more than double by the late 21st century, so steel scrap production and, in turn, steel scrap cargo operations in ports are forecast to rise accordingly. Increasing steel scrap cargo operations will bring several undesirable accidents and injuries. Investigations show that the most significant cause of incidents and accidents related to steel scrap cargo operations in the maritime sector is human error. In this sense, the aim is to identify human error probabilities (HEPs) for steel scrap cargo operations, which are performed frequently in the maritime sector, especially on bulk carrier vessels. In this study, the Success Likelihood Index Method (SLIM), one of the methods for Human Reliability Analysis (HRA), is used to determine HEPs in steel scrap cargo operations, owing to the limited data availability on this topic. Accordingly, the most common error modes, determined via a detailed literature review, are ranked according to their HEP values. The analysis shows which error modes are most affected by each Performance Shaping Factor (PSF), such as education, supervision, environmental conditions, equipment and tool condition, and experience. According to the results, “the falling of a piece of steel scrap on the deck during loading or unloading” has the highest probability of occurrence. Consequently, training and experience are critically important for preventing errors in steel scrap cargo operations overall. On the other hand, environmental conditions, supervision, and equipment and tool condition are particularly significant for reducing the probability of some specific errors.
Accordingly, the proposed approach makes not only a theory-based contribution to the maritime literature but also an active contribution to the sector, including P&I Clubs, shipping companies, and classification societies, by providing a focal point for minimizing accidents in steel scrap cargo operations.
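The SLIM procedure named above can be sketched in two steps: a Success Likelihood Index (SLI) as the PSF-weighted sum of ratings, followed by log-linear calibration, log10(HEP) = a·SLI + b, against two anchor tasks with known HEPs. Weights, ratings, and anchors below are invented for illustration.

```python
import math

def sli(weights, ratings):
    """Success Likelihood Index: weighted sum of PSF ratings (weights sum to 1)."""
    return sum(w * r for w, r in zip(weights, ratings))

def calibrate(sli1, hep1, sli2, hep2):
    """Solve log10(HEP) = a*SLI + b from two anchor tasks with known HEPs."""
    a = (math.log10(hep1) - math.log10(hep2)) / (sli1 - sli2)
    b = math.log10(hep1) - a * sli1
    return a, b

def hep_from_sli(s, a, b):
    return 10 ** (a * s + b)

# Hypothetical error mode rated on four PSFs (e.g. training, supervision,
# environment, equipment), ratings on a 1-9 scale:
s = sli([0.3, 0.3, 0.2, 0.2], [7, 5, 4, 6])
# Anchors: a very reliable task (SLI 9, HEP 1e-4) and an error-prone
# one (SLI 1, HEP 1e-1).
a, b = calibrate(9, 1e-4, 1, 1e-1)
hep = hep_from_sli(s, a, b)
```

A higher SLI thus maps to a lower HEP, which is how the study ranks its error modes.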
... The Contextual Control Model (COCOM) is a cognitive model developed in accordance with the information-processing perspective (Hollnagel, 1993, 1998). Here, the operator's mental states are divided into four categories: observation, interpretation, planning, and execution. ...
Article
Operators' mental models play a central role in safety-critical domains like the chemical process industries. Accurate mental models, i.e., a correct understanding of the process and its causal linkages, are prerequisites for safe operation. Mental models are often defined and explained in abstract terms that make their interpretation subjective and prone to bias. In this work, we propose a Hidden Markov Model (HMM) based formalism to characterize control room operators' mental models while handling abnormal situations. We show that a suitable HMM representing the operator's mental model – including the states, state transition probabilities, and emission probability distributions – can be identified experimentally using data of the operator's control actions, eye gaze, and process variable values. This HMM can be used for the quantitative assessment of operators' mental models as illustrated using various case studies. We discuss the potential applications of the model in identifying various cognitive errors and human reliability assessment. In Part 2 of this paper, we use the proposed approach to assess operators' learning during training.
... The nine common performance conditions in the CREAM method are selected as the nine PSFs. The nine PSFs provided by Professor Hollnagel (1998) are: ...
Article
Full-text available
This study aims to propose an approach for determining key Performance Shaping Factors (PSFs) to promote human reliability management during the LNG ship offloading process. Offloading LNG from ship to onshore terminal is a high-risk, human-centred operation; a small human error may trigger catastrophic consequences such as fire, explosion, and even fatality. Therefore, ensuring a high level of human reliability is necessary. It is widely acknowledged that human reliability is influenced by many PSFs. If the most important PSFs can be identified, targeted management can help avoid human errors in shipping LNG offloading work. Determining key PSFs is a decision-making problem, but historical PSF data are usually lacking. In other words, this decision-making system has a strong grey characteristic, which is an obstacle to finding the significant PSFs. Under these conditions, the grey theory-based Grey Relational Analysis (GRA) method is a suitable choice for handling insufficient PSF data and grey characteristics. In addition to GRA, the definition of risk (frequency times consequence) is used as the basis for reasonably explaining the ranking order of each PSF. In short, GRA is first conducted from the perspectives of frequency and consequence, and the results are then combined to identify the key PSFs. The proposed method is applied to a real shipping LNG offloading case. The final result indicates that the proposed method provides a reasonable and effective way to find key PSFs for ensuring human reliability in shipping LNG offloading work.
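The GRA ranking step described above can be sketched as follows: each PSF's (normalised) data sequence is compared against an ideal reference sequence, and the grey relational grade, the mean of the relational coefficients, ranks the PSFs. The distinguishing coefficient rho = 0.5 is conventional; the sequences are invented for illustration.

```python
def grey_relational_grades(reference, sequences, rho=0.5):
    """Grey relational grade of each sequence against the reference sequence."""
    # Absolute deviations of every sequence from the reference.
    deltas = [[abs(r - s) for r, s in zip(reference, seq)] for seq in sequences]
    flat = [d for row in deltas for d in row]
    dmin, dmax = min(flat), max(flat)  # global extrema across all sequences
    return [
        sum((dmin + rho * dmax) / (d + rho * dmax) for d in row) / len(row)
        for row in deltas
    ]

ref = [1.0, 1.0, 1.0]        # ideal (normalised) sequence
psf_a = [0.9, 0.8, 1.0]      # hypothetical PSF data sequences
psf_b = [0.5, 0.6, 0.4]
ga, gb = grey_relational_grades(ref, [psf_a, psf_b])
# The higher grade marks the PSF more closely related to the reference,
# i.e. the more influential one in the ranking.
```

Running the grade computation twice, once on frequency data and once on consequence data, and combining the two rankings mirrors the frequency-and-consequence scheme the abstract describes.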
... The latest machine learning models are like “black boxes”, i.e., they have such a complex structure that users cannot understand how an AI system converts data into decisions (84). However, human-computer interaction forms an Integrated Cognitive System in which the human operator remains at the top of the system and can take over when a specific situation requires it (85, 86). ...
Article
Full-text available
During the Covid-19 health emergency, telemedicine was an essential asset through which health systems strengthened their response during the critical phase of the pandemic. According to the post-pandemic economic reform plans of many countries, telemedicine will not be limited to a tool for responding to emergency conditions but will become a structural resource that contributes to the reorganization of healthcare systems and enables the transfer of part of health care from the hospital to home-based care. However, scientific evidence has shown that health care delivered through telemedicine can be burdened by numerous ethical and legal issues. Although there is an emerging discussion on patient safety issues related to the use of telemedicine, there is a lack of research specifically designed to investigate patient safety. It is therefore necessary to determine standards and specific application rules in order to ensure safety. This paper examines telemedicine risk profiles and proposes a position statement for clinical risk management to support continuous improvement in the safety of health care delivered through telemedicine.
... The failure data related to the passive-active heave compensation system contained more than 100 events observed in the period from 2014 to 2020, all failure modes resulting in heave compensation loss, totalling 60,965 operating days of operation time. For the human error, based on Hollnagel (1998), a nominal cognitive function failure probability of 1.0E-03 is considered for an error mode in which an action is performed at the wrong time (i.e., premature/delayed) or out of proper sequence (confidence interval between 1.0E-03 and 9.0E-03). The failure probabilities were computed for one operating day, i.e., T = 24 h. ...
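One common way to turn such event counts into a daily failure probability, sketched here under the assumption of a constant failure rate, is to estimate the rate from events per operating day and apply the exponential model P(T) = 1 − exp(−λT). The round figure of 100 events is taken from the excerpt ("more than 100") and should be treated as illustrative.

```python
import math

events = 100                 # observed heave-compensation-loss events (illustrative)
operating_days = 60965       # total observed operating time, in days

lam = events / operating_days        # constant failure rate, per operating day
p_day = 1 - math.exp(-lam * 1.0)     # probability of failure within one day (T = 24 h)
```

For small λT this is very close to λ itself (here about 1.6E-03 per day), which is why rates and daily probabilities are often used interchangeably at this scale.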
... Such methodologies are known as second-generation HRA techniques, and include, for example, Cognitive Environmental Simulation (CES) (Woods et al., 1992) developed in 1992, A Technique for Human Error Analysis (ATHEANA) (Cooper et al., 1996) developed in 1996, and the Standardized Plant Analysis of Risk-Human Reliability Analysis (SPAR-H) (Abrishami et al., 2020). The best-known second-generation technique is the Cognitive Reliability and Error Analysis Method (CREAM) (Hollnagel, 1998), published by Hollnagel in 1998. CREAM is based on the contextual control model (COCOM), which considers four cognitive functions (i.e. ...
Article
Human reliability is a critical aspect of the safety assessment of many complex systems. Design, installation, and maintenance are some of the tasks still performed by human operators, and thus are subject to human factors. The probability of a human error can be estimated using several Human Reliability Analysis (HRA) methods. HRA was first developed for nuclear plants and has since spread to many different applications. Several international standards and technical reports agree that human reliability is a central concern during the safety assessment of railway-related systems. Although many papers have been published in the HRA field, a systematic literature review and bibliometric analysis of human reliability in railway engineering has been missing. This paper addresses this need by analyzing the state of the art of HRA applied to railway systems from 2000 to August 2021. The review highlights a significant increase in interest in HRA in railways. Recent papers do not focus on proposing innovative HRA methods developed specifically for railways. Instead, almost all of them deal with the analysis of railway accidents (from a human factors and safety assessment point of view) or with improvements of existing methods originally developed for other fields. Nevertheless, the analysis of the state of the art points out some major gaps in the field that need to be addressed by further work.
... Since the 1980s, nearly 50 Human Reliability Analysis (HRA) methods have been developed (Xing et al., 2021). Many of these methods identify HOFs, such as the Technique for Human Error Rate Prediction (THERP) (Swain & Guttmann, 1983), Human Cognitive Reliability Correlation (HCR) (Hannaman & Spurgin, 1984), Success Likelihood Index Methodology (SLIM) (Embrey et al., 1984), Cognitive Reliability and Error Analysis Method (CREAM) (Hollnagel, 1998), the Standardized Plant Analysis Risk Human Reliability Analysis (SPAR-H) (Gertman et al., 2005), and the Human Error Assessment and Reduction Technique (HEART) (Williams, 1988), albeit under different labels, such as Performance Influencing Factors (PIFs), Performance Shaping Factors (PSFs), Common Performance Conditions (CPCs), and so on. ...
Article
Full-text available
A lack of data has always been a challenging problem for risk analysts working on human and organizational factors (HOFs). Accident reports are an essential source of HOF information, but they are often unstructured text, making it difficult to apply statistical methods directly. The traditional manual coding of accident records can introduce uncertainties and inefficiencies, especially when a large number of records is available. Thanks to the development of natural language processing (NLP) techniques, some analysts have attempted to mine the text of accident reports (Single et al., 2020). A similar approach was adopted here to highlight the HOFs contributing to accidents. NLP and HOF categories are then combined to obtain the critical structure of HOF-related accidents. Furthermore, text-similarity calculation is applied to support the relationship analysis of performance influencing factors (PIFs) based on mining the data of the EU Major Accident Reporting System (eMARS). In general terms, a framework is proposed to efficiently exploit the information contained in accident records so that HOF elements can be better included in process risk assessment.
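The text-similarity step mentioned above can be sketched in its simplest form: represent each accident record as a bag-of-words vector and compare records with cosine similarity. A real pipeline, as in the paper, would use richer NLP features and preprocessing; the two records below are invented.

```python
from collections import Counter
import math

def cosine_similarity(text_a, text_b):
    """Cosine similarity of two texts under a bag-of-words representation."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    common = set(va) & set(vb)
    dot = sum(va[w] * vb[w] for w in common)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Two hypothetical accident-record snippets sharing several HOF-related terms:
s = cosine_similarity("operator missed alarm during maintenance",
                      "alarm missed by operator during startup")
```

High pairwise similarity between records can then be used to group accidents that share the same performance influencing factors.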
... The Cognitive Reliability and Error Analysis Method (CREAM [26]) was developed as a response to first-generation HRA techniques. CREAM describes a number of failure modes, which were further developed into the variability modes of the Functional Resonance Analysis Method (FRAM). ...
Article
Full-text available
Aviation is a highly inter-connected system. This means that a problem in one area may cause effects in other countries or parts of the Air Transport System (ATS). Examples range from local air traffic disruptions to the 2010 volcanic ash crisis. Agility, like resilience, refers to the ability to cope with dynamics and complexity in a flexible manner, by adjusting and adapting performance and the organization of work to fit changing demands. The aim of this work is to help ATS organizations with increasing their agility in the face of crises and challenges. To this end, this article presents the Agile Response Capability (ARC) guidance material. ARC was developed from a literature study and a number of case studies that combined past event analysis, interviews, focus groups, workshops, questionnaires, and exercise observation methodologies. ARC aims to help aviation organizations to set up, run, and evaluate exercises promoting agility to handle disturbances and crises, and to enable structured pro-active and retrospective analysis of scenarios and actual events. The elements and steps of the ARC approach are illustrated and exemplified with data from three case studies. The ARC methodology facilitates more agile and resilient ways of responding to the fundamental and novel surprises that have become almost commonplace in the past decade, and are likely to continue to do so. https://www.mdpi.com/2412-3811/7/2/11
... Literature [15] proposed the Cognitive Reliability and Error Analysis Method (CREAM). This model does not treat human performance output as random; rather, it holds that performance output depends on the situational context in which the person completes the task. ...
Article
Full-text available
The outdoor terminal box is a basic node of the power Internet of Things, serving as an intermediate link between outdoor electrical equipment and indoor equipment such as measurement and control, protection, and communication equipment. Terminal boxes and terminals require regular on-site checks, loop tests, modifications, and equipment replacement. Operator errors have become one of the main factors in outdoor terminal box accidents, yet few papers address the human risk assessment of outdoor terminal boxes. In order to correctly evaluate the influence of human factors on the failure of outdoor terminal boxes, this paper first analyzes the human behavior factors of outdoor terminal box operators; the common performance conditions of the CREAM (Cognitive Reliability and Error Analysis Method) model are used to analyze the human behavior mechanism and behavior reliability factors during terminal box operation. Then, the SLIM (Success Likelihood Index Method) model is used to calculate the probability of human error, and a proportional failure model is used to calculate the failure rate of the outdoor terminal box itself. Finally, taking a circuit breaker terminal box as an example for simulation, the probability of human error is 1.56%, the equipment failure rate is 0.84%, the risk value of the system is 10.7%, and the risk level of the system is 3. From a probabilistic perspective, this shows that human factors have a strong influence on the causes of accidents.
... The CPC score can be calculated as [Σreduced, Σimproved]. In this case, human performance reliability is determined with the control mode map of Hollnagel (1998). ...
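The CREAM basic-method scoring that these excerpts refer to can be sketched as follows: each of the nine CPCs is rated −1 (reduces performance), 0 (not significant), or +1 (improves performance), and the combined score is the triplet [#reduced, #not significant, #improved], which is then located on Hollnagel's (1998) control-mode diagram (not reproduced here). The nine CPC names are the standard ones; the individual ratings below are an illustrative assignment chosen to reproduce the [3, 5, 1] furnace start-up triplet quoted above, not any paper's actual worksheet.

```python
# Illustrative CPC ratings: -1 reduced, 0 not significant, +1 improved.
cpc_ratings = {
    "adequacy of organisation": -1,
    "working conditions": 0,
    "adequacy of MMI and operational support": -1,
    "availability of procedures/plans": 0,
    "number of simultaneous goals": -1,
    "available time": 0,
    "time of day": 0,
    "adequacy of training and experience": +1,
    "crew collaboration quality": 0,
}

# Combined CPC score as the triplet [#reduced, #not significant, #improved].
score = [sum(1 for v in cpc_ratings.values() if v == s) for s in (-1, 0, +1)]
# Here score == [3, 5, 1]: three CPCs reduce performance, five are not
# significant, and one improves performance.
```

The pair (Σreduced, Σimproved), here (3, 1), is the coordinate that is read off the control-mode map to select strategic, tactical, opportunistic, or scrambled control.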
Article
Full-text available
Human behavior monitoring classically refers to the detection of human movements or simple recognition of activities in a limited, known space. The monitoring of human activities in the context of concrete operating tasks often focuses on the detection of operating errors, unauthorized actions, or, implicitly, the violation of protection goals. This contribution uses a qualitative description approach, situation-operator-modeling (SOM), with which the logic of human interaction in a given formalized context can be represented as action sequences, together with the situational, i.e., contextual, application of individual actions. However, human performance reliability in action sequences is not clear, and the optimal action sequence cannot be defined. To solve this problem, the concept of the human performance reliability score (HPRS) proposed in previous works is calculated with the modified fuzzy-based CREAM (Cognitive Reliability and Error Analysis Method) approach. Therefore, situated and personalized HPRS values can be assigned to the action sequences in the SOM action space. In this way, an event-discretized behavior model can be generated that monitors human performance in a situated and personalized manner using human reliability values. Using the example of human driving behavior in highway driving situations, the application of the method is presented in detail, and the real-time monitoring of a concrete example driver is demonstrated. The examples show that a direct warning or assistance would be helpful.
... The Stimulus-Organism-Response (S-O-R) paradigm [32] is a basic explanation of human behavior mechanisms in psychology. The "O" part, which occurs between stimulus and response, is commonly seen as human information processing, which essentially represents the process of human cognition; see Figure 1. ...
Article
Full-text available
Decision time, also known as choice reaction time, has been frequently discussed in the field of psychology. The Hick–Hyman Law (HHL) is a fundamental model that reveals the quantitative relationship between the mean choice reaction time of humans and the information entropy of stimuli. However, the HHL focuses only on rule-based behavior, in which the rules for selecting a response according to a stimulus are certain, and neglects knowledge-based behavior, in which choices are uncertain and influenced by human belief. In this article, we explored the decision time related to one basic knowledge-based behavior: uncertain binary choice, where the selection of a response is determined by human belief degrees rather than by stimulus uncertainties. Two experiments were conducted: one for verifying the HHL and the other for uncertain binary choice. The former demonstrated the effectiveness of the HHL, and the latter indicated an exponential relationship between decision time and the entropy of belief degree in uncertain binary choice. Moreover, data obtained from both experiments showed that the disturbance term of decision time should not be treated as probabilistic, as existing studies have assumed, which highlights the necessity and advantage of uncertain regression analysis.
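The Hick–Hyman Law discussed above predicts mean reaction time as a linear function of stimulus entropy, RT = a + b·H, where H is the Shannon entropy of the stimulus probabilities (log2(n) for n equiprobable alternatives). The coefficients a and b below are illustrative; the paper fits them from experimental data.

```python
import math

def entropy(probs):
    """Shannon entropy of a stimulus probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def hick_hyman_rt(probs, a=0.2, b=0.15):
    """Mean choice reaction time under the HHL.

    a: base (movement/processing) time in seconds, b: seconds per bit.
    Both coefficients are illustrative assumptions.
    """
    return a + b * entropy(probs)

# Four equiprobable choices: H = log2(4) = 2 bits.
rt4 = hick_hyman_rt([0.25] * 4)
# Biased choices carry less information, so the predicted RT is shorter.
rt_biased = hick_hyman_rt([0.7, 0.1, 0.1, 0.1])
```

The contrast with the paper's uncertain binary choice is that there H would be replaced by the entropy of the subject's belief degrees, and the fitted relationship is exponential rather than linear.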
... The determination of the probability of incorrect execution of a task by the operator (human error probability, HEP) is part of the human-machine system's probabilistic safety analysis (PSA). HRA methods are usually classified into first-generation methods (e.g., THERP, the technique for human error rate prediction [17]; HEART, the human error assessment and reduction technique [40]; and SLIM, the success likelihood index method [41]) and second-generation methods (e.g., CREAM, the cognitive reliability and error analysis method [42], and ATHEANA, a technique for human error analysis [43]). The method presented in this paper can be extended in the future using the interaction between the equipment and the human operator. ...
Article
Full-text available
The human factor is an essential aspect of the operability and safety of many technical systems. This paper focuses on the analysis of human errors in the railway domain. The subject of the human reliability analysis is the behavior of operators of station-signaling systems responsible for rail traffic management. We use the technique for human-error rate prediction, a first-generation human reliability analysis method, for task analysis, error identification and representation, and the quantification of human error probabilities. The paper contributes a comparison of three technologically different railway traffic control systems with different degrees of automation, from the manually operated (electro-mechanical), through the semi-automated (relay-based), to the almost fully automated (computer-based) station-signaling systems. We observe the frequency of individual operations performed in time intervals and calculate human error probability and human success probability values for each operation. Thus, we can analyze human reliability and compare the workload of operators working with control systems of different degrees of automation.
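The frequency-based quantification described in this abstract reduces, for each observed operation, to estimating HEP as errors over opportunities and HSP as its complement. The counts below are invented for illustration.

```python
def hep_hsp(errors, opportunities):
    """Human error probability and human success probability from observed counts."""
    hep = errors / opportunities
    return hep, 1.0 - hep

# Hypothetical operation on a semi-automated (relay-based) system:
# 3 erroneous executions observed in 1200 opportunities.
hep, hsp = hep_hsp(errors=3, opportunities=1200)
```

Comparing such per-operation values across the electro-mechanical, relay-based, and computer-based systems is how the paper contrasts operator workload and reliability at different degrees of automation.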
... The knowledge and experience of a route that drivers develop over time also supports anticipation and future-orientated behaviour [Hollnagel 1998]. Route knowledge allows the driver to think ahead, and helps control the allocation of cognitive and perceptual resources based on expectations about the future. ...
Thesis
High-speed rail has more stringent driving safety requirements than other public transport because of its higher speeds and increasing public demand. However, the particularities of train driving often make drivers susceptible to fatigue. In light of this, the last decade has seen the widespread adoption of ADAS in the rail-based transportation industry, specifically driver fatigue detection systems. ADAS is meant to help train drivers: the trajectory planner in ADAS guides the driver to maintain a level of velocity (v) and acceleration (a) to go from station A to station B, considering factors such as fuel efficiency, track terrain, traffic, and the state of the driver from the driver fatigue detection system. However, sometimes, due to bad lighting conditions, bad driver position, or a faulty sensor, accurate information about the train and driver state may be delayed. The aperiodic unavailability of the driver and train state to the ADAS raises concerns about the stability and safety of the train dynamics. Therefore, consideration of uncertainty in the driver's and train's state during train stability analysis becomes essential. For this purpose, a model-based approach is employed to approximate the ADAS-Driver-Train interaction and prove the stability of the driver advisory train control system. For the stability study, the system consisting of Driver-Train in open loop is considered as a sampled-data system and ADAS as a controller. The input-delay approach is used to transform the sampled-data system into a time-varying delay system. Further, time-dependent Lyapunov functionals and convexification arguments are used to derive stability criteria in terms of LMI conditions. The criteria allow estimation of the maximum allowable delay in driver and train state measurement that guarantees train dynamics stability.
... Daramola (8) utilized the human factor analysis and classification system (HFACS), which is a human factor research tool based on system theory, to analyse human error factors related to safety accidents. Hollnagel (9) proposed the cognitive reliability and error analysis method (CREAM), emphasizing the important influence of the situational environment on human behavior; the unique cognitive model provides root cause traceability and human error probability prediction. There are many methods and tools for human factor analysis in different fields, among which HFACS is currently one of the most widely applied tools. ...
Article
Full-text available
This paper firstly proposes a modified human factor classification analysis system (HFACS) framework based on literature analysis and the characteristics of falling accidents in construction. Second, a Bayesian network (BN) topology is constructed based on the dependence between human factors and organizational factors, and the probability distribution of the human-organizational factors in a BN risk assessment model is calculated based on falling accident reports and fuzzy set theory. Finally, the sensitivity of the causal factors is determined. The results show that: 1) the most important cause of falling accidents is unsafe on-site supervision; 2) significant factors influence falling accidents at different levels of the proposed model, including operation violations in the unsafe acts layer, an adverse technological environment in the preconditions for unsafe acts layer, loopholes in site management in the unsafe on-site supervision layer, lack of safety culture in the adverse organizational influence layer, and lax government regulation in the adverse external environment layer; and 3) according to the BN risk assessment model, the most likely causes are loopholes in site management, lack of safety culture, insufficient safety inspections and acceptance, vulnerable process management, and operation violations.
Article
In this paper, we describe the process of taking Human Systems Integration (HSI) research developed by George Mason University on creating high-quality standard operating procedures (SOPs) and using that information to modify a commercial off-the-shelf tool, Innoslate®, into a new tool, Sopatra®, that aids SOP developers in creating procedures and verifying that they work within the Allowable Operational Time Window (AOTW). The tool compares the AOTW with the Time on Procedure (ToP) to create new metrics: Procedure Buffer Time (PBT) and Probability of Failure to Complete (PFtC). A new Natural Language Processing (NLP) technique was developed as well.
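The timing metrics named above can be sketched as follows. PBT is the margin between AOTW and ToP; for PFtC the abstract gives no formula, so the version below is an assumed interpretation, estimating it as the fraction of simulated or observed runs whose ToP exceeds the AOTW. All figures are invented.

```python
def pbt(aotw, top):
    """Procedure Buffer Time: margin between the allowable window and time on procedure."""
    return aotw - top

def pftc(aotw, top_samples):
    """Assumed estimator of Probability of Failure to Complete:
    fraction of runs whose Time on Procedure exceeds the AOTW."""
    return sum(1 for t in top_samples if t > aotw) / len(top_samples)

buffer_s = pbt(aotw=300.0, top=240.0)              # 60 s of margin on a nominal run
p_fail = pftc(300.0, [250, 280, 310, 290, 330])    # 2 of 5 hypothetical runs overrun
```

A negative PBT on the nominal run, or a high PFtC across runs, would flag a procedure that does not fit its operational window.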
Article
Human reliability analysis (HRA) is a proactive approach to modeling and evaluating systematic human errors and has been extensively implemented in various complicated systems. The assessment of human errors relies heavily on the knowledge and experience of experts in real-world cases. Moreover, there are usually specific sorts of uncertainty when experts use linguistic labels to evaluate human failure events. In this context, this paper seeks to establish a new model based on the hesitant fuzzy matrix (HFM) and the cognitive reliability and error analysis method (CREAM) to conduct a quantitative analysis of human errors. This model handles the multiple crisp scores of the common performance conditions (CPCs) given by experts according to the context description in terms of CPCs, determines the weights of the CPCs by the HFM, and elicits a human error probability (HEP) point estimation formula that considers consequences based on CREAM. Finally, the effectiveness and practicality of the presented HFM-CREAM model are demonstrated through an emergency response analysis of a steam generator tube rupture (SGTR) in a nuclear power plant.
Article
Full-text available
Objectives. Application of human reliability analysis (HRA) techniques originally developed for industrial settings to the healthcare sector may be controversial in terms of reliability and methodological level. The aim of the present study was to adapt a standardized plant analysis risk-human reliability analysis (SPAR-H) technique for application in surgical settings through suggesting more context-specific definitions for performance shaping factors (PSFs), designing precise levels and elicitation of multipliers through a domain expert judgment approach. Methods. A ratio magnitude estimation approach was used for carrying out domain expert judgment for multiplier elicitation. Experts from four teaching hospitals participated in the present study. Intra-class correlation was used in order to examine the inter-rater reliability of the estimated multipliers for each level of diagnosis and action task type. Results. Available time, threat stress, task complexity, experience/training, procedures, working conditions, human-machine interface, fatigue and teamwork were the nine suggested PSFs for the adapted SPAR-H technique. Conclusion. Context-specific definitions of the PSFs can enhance the reliability of human error probability assessments. Eventually, it could be concluded that multiplier elicitation through domain expert judgment is an efficient approach for adaptation of the HRA techniques for application in specific contexts.
Article
Full-text available
Numerous studies have been conducted to assess the role of human errors in accidents in different industries, and human reliability analysis (HRA) has drawn a great deal of attention among safety engineers and risk assessment analysts. Despite all technical advances and process improvements, damaging and catastrophic accidents still happen in many industries. In this study, the Human Error Assessment and Reduction Technique (HEART) and the Cognitive Reliability and Error Analysis Method (CREAM) were compared using a hierarchical fuzzy system in the steel industry to investigate human error. The study was carried out in a rolling unit of a steel plant, which has four control rooms, three shifts, and a total of 46 technicians and operators. After observing the work process, reviewing the documents, and interviewing each of the operators, the worksheets of each method were completed. The CREAM and HEART methods were defined in the hierarchical fuzzy system and the necessary rules were analyzed. The findings indicated that CREAM was more successful than HEART, showing a better capability to capture task interactions and dependencies as well as a logical estimation of the HEP in the plant studied. Given the nature of the tasks in the studied plant and the interactions and dependencies among tasks, CREAM appears to be the better method for identifying errors and calculating the HEP.
Article
Full-text available
Medication errors can endanger the health and safety of patients and need to be managed appropriately. This study aimed to develop a new and comprehensive method for estimating the probability of medication errors in hospitals. An extensive literature review was conducted to identify factors affecting medication errors. The Success Likelihood Index Methodology (SLIM) was employed for calculating the probability of medication errors. For weighting and rating of factors, the fuzzy multiple attributive group decision making methodology and the fuzzy analytic hierarchy process were used, respectively. A case study in an emergency department was conducted using the framework. A total of 17 factors affecting medication error were identified; workload, patient safety climate, and fatigue were the most important ones. The case study showed that subtasks requiring nurses to read the handwriting of other nurses and physicians are more prone to human error. As there is no specific method for assessing the risk of medication errors, the framework developed in this study can be very useful in this regard, and the developed technique was very easy to administer.
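The SLIM calculation the study builds on can be sketched compactly. The factor names, weights, ratings, and calibration constants below are hypothetical placeholders for illustration, not values from the study:

```python
# Minimal sketch of the Success Likelihood Index Methodology (SLIM).
# All factor names, weights, ratings, and the calibration constants a, b
# are hypothetical, not taken from the study.

def success_likelihood_index(weights, ratings):
    """SLI = sum of (normalized weight x rating) over all factors."""
    total = sum(weights.values())
    return sum(weights[f] * ratings[f] for f in weights) / total

def slim_hep(sli, a=-3.0, b=0.0):
    """SLIM calibration: log10(HEP) = a * SLI + b.
    a and b must be calibrated from tasks with known HEPs; the values
    here are placeholders."""
    return 10 ** (a * sli + b)

# Hypothetical factor weights (importance) and ratings (0 = worst, 1 = best)
weights = {"workload": 0.4, "safety_climate": 0.35, "fatigue": 0.25}
ratings = {"workload": 0.3, "safety_climate": 0.6, "fatigue": 0.5}

sli = success_likelihood_index(weights, ratings)
hep = slim_hep(sli)
print(f"SLI = {sli:.3f}, HEP = {hep:.4f}")
```

In practice the weights would come from the fuzzy group decision making step and the ratings from the task assessment, with a and b fitted to anchor tasks of known error probability.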
Chapter
Human involvement and the inevitable human-system interaction play a major part in daily production and maintenance activities at onshore and offshore facilities. The impact of human reliability on worker performance is higher at offshore facilities than onshore, owing to the harsh offshore working environment. This study presents an integrated probability model that combines human and system reliabilities and utilises a Bayesian network (BN). A comparison was made between onshore and offshore facilities in terms of human reliability and the probability of hydrocarbon release leading to fire and explosion, in order to investigate the influence of the human performance shaping factors (PSFs) of human error in each working environment. In particular, this research uses probability values obtained from expert opinions for fire and explosion scenarios caused by hydrocarbon release at onshore and offshore facilities. The proposed integrated approach is applied to a case study on a hydrocarbon storage tank at onshore and offshore facilities. Three possible consequences that could lead to fire and explosion were evaluated (i.e., probability of safe condition, hydrocarbon release and high pressure), and their probabilities were compared between onshore and offshore working conditions. The probability of hydrocarbon release onshore was observed to be slightly higher than offshore, with a percentage difference of 0.032%, which can be neglected as the difference is small. Meanwhile, the influence of the PSFs of human error is higher in the offshore working condition. It can therefore be concluded that the PSFs of stress, task complexity, training, experience and atmospheric conditions have a greater influence on human error offshore than onshore, due to the harsh working environment.
Technical Report
Full-text available
Petro-HRA is a method for qualitative and quantitative assessment of human reliability in the petroleum industry. The method allows systematic identification, modelling and assessment of tasks that affect major accident risk. The method is mainly intended for use within a quantitative risk analysis (QRA) framework but may also be used as a stand-alone analysis. Petro-HRA should be used to estimate the likelihood of human failure events (HFEs) in post-initiating event scenarios. The method was developed in an R&D project for Norges Forskningsråd and was published in 2017. Since then, it has been applied in several petroleum projects in Norway. In 2020, Equinor funded a project with DNV and IFE to update the method. Recommendations for improvements were collected via a review of 10 Petro-HRA technical reports to Equinor and a series of structured interviews with Petro-HRA method users and stakeholders. The guideline has been split into two documents for ease of use. The text in some sections has been modified for clarity, and new or modified examples have been provided to better explain how to apply the guidance.
Article
Error due to human activities in any operation is analysed using the human reliability analysis approach, in which the principal step is to identify potential human errors, followed by quantification and analysis of each error. The work presented in this research applies a methodology for identifying human errors and prioritizing the risk associated with them in an LPG unloading operation. The methodology uses Hierarchical Task Analysis, which provides the basic framework, along with the Systematic Human Error Reduction and Prediction Approach, which aids in identifying and categorizing the errors associated with each task with the help of a predefined error taxonomy. To quantify the risk associated with each identified error, a fuzzy Failure Mode and Effect Analysis approach has been adopted. To rank and prioritize the risks, whose individual constituent components are non-commensurable in nature, the Vise Kriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method has been incorporated. The methodology helps to comprehend the severity of the risk corresponding to each error at different levels, and the resulting ranking mechanism helps prioritize actions to minimize the likelihood of errors.
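The VIKOR ranking step can be sketched as follows. The criteria, weights, and risk scores below are hypothetical; in the study, the inputs would come from the fuzzy FMEA of the unloading tasks:

```python
# A minimal sketch of VIKOR ranking for prioritizing identified errors.
# The three errors and their severity/occurrence/detectability scores are
# hypothetical, not taken from the study's LPG analysis.

def vikor(matrix, weights, v=0.5):
    """matrix[i][j]: score of error i on criterion j (higher = worse risk).
    Returns the VIKOR Q index per error (lower Q = higher priority)."""
    n_crit = len(weights)
    best = [max(row[j] for row in matrix) for j in range(n_crit)]
    worst = [min(row[j] for row in matrix) for j in range(n_crit)]
    S, R = [], []  # group utility and individual regret per alternative
    for row in matrix:
        terms = [
            weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
            for j in range(n_crit)
        ]
        S.append(sum(terms))
        R.append(max(terms))
    s_star, s_minus = min(S), max(S)
    r_star, r_minus = min(R), max(R)
    return [
        v * (S[i] - s_star) / (s_minus - s_star)
        + (1 - v) * (R[i] - r_star) / (r_minus - r_star)
        for i in range(len(matrix))
    ]

# Three hypothetical errors scored on severity, occurrence, detectability
scores = [[8, 6, 4], [5, 7, 6], [9, 4, 7]]
q = vikor(scores, weights=[0.5, 0.3, 0.2])
ranking = sorted(range(len(q)), key=lambda i: q[i])  # lowest Q first
print(q, ranking)
```

The parameter v weighs group utility against individual regret; v = 0.5 is the conventional compromise setting.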
Article
This study explores the influence of cooperative vehicle infrastructure system (CVIS) on the driver’s visual and driving performance. Taking the work zone as an example, a driving simulation experiment involving 37 drivers has been conducted to collect driver’s eye movement and vehicle running data in the same scenario with and without CVIS information respectively. Next, the nonparametric test and grey relational analysis (GRA) have been used to compare the specific performance and to analyze the correlations among the multi-stage of driver’s information process. The results present that CVIS information has the potential to alleviate the driver’s tension and the difficulty of information perception. Meanwhile, the driver tends to search visual information more actively and complete the lane changing behavior earlier. Further analysis indicates that CVIS information can reduce the influence of mental workload on lane-changing decisions, and enhance the correlation between visual information processing and lane-changing decisions, as well as the effect of decision-making timing on the running state. Therefore, drivers’ speed control ability was improved and the traffic flow was smoother. The findings give an insight into the influence mechanism of the cooperative information on driving performance and provide a direction for a comprehensive assessment of CVIS based on human factors.
Chapter
Aviation safety is greatly influenced by pilot performance reliability. To assess this reliability, many human reliability analysis (HRA) methods have been developed. In most current HRA methods, performance shaping factors (PSFs) are used to represent the internal and external factors that contribute to human error. Until now, the effects of PSFs have usually been considered independent. However, growing evidence shows that causal relationships do exist among PSFs, and neglecting these interrelationships makes the assessed human error rate too optimistic or too conservative. This paper builds a Bayesian network (BN) to represent these interrelationships based on an investigation of 50 human-factor-related aviation mishap reports of the US Air Force from 2011 to 2019. The causal dependency of PSFs is derived from the Human Factors Analysis and Classification System (HFACS) framework, and an Expectation-Maximization algorithm is used to quantify the dependency, which reduces the heavy reliance on expert judgement. Through sensitivity analysis we find that two key factors influencing cognitive error are the negative state of the operator and deteriorating technical condition, implying that these factors have a greater influence on the cognitive processes of the operator, such as interpreting task demands and perceiving information. The proposed BN model can be used to identify the primary PSFs influencing pilot performance, providing targeted risk-mitigation suggestions.
Article
Full-text available
In terms of safety management, the implementation of industrial parks construction projects (IPCPs) is incredibly challenging due to the special working conditions and the specific types of use of the buildings. At the same time, the possibility of accidents in these areas caused by human errors is high and important for project execution, given the risks of human error and financial losses. This study therefore tries to fill the existing research gap by identifying and evaluating the key factors leading to construction accidents caused by human errors in the development of IPCPs. After a holistic review of the reported literature, four rounds of a fuzzy Delphi survey were launched to capture individual opinions and feedback from various project experts. Accordingly, 41 key factors affecting human errors in the implementation of industrial parks construction projects in Iran were identified and classified into nine main groups: wrong actions, observations/interpretations, planning/processes, equipment, organization, individual activities, environmental conditions, rescue, and technology. Then, the step-wise weight assessment ratio analysis (SWARA) method was adopted to rate and rank the identified factors of human error in the implementation of IPCPs in Iran. The research findings indicated that, among the elicited factors, the time factor (0.1226), delayed interpretation (0.1080), and incorrect diagnosis/prediction (0.0990) are the three most crucial factors leading to human errors in the implementation of IPCPs in Iran. The results of this research study provide major project stakeholders with an effective decision-aid tool to make better-informed decisions in managing and reducing the occurrence of construction site accidents, particularly those caused by human errors associated with IPCPs.
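The SWARA weighting step described above follows a simple recursive scheme: experts rank the factors, state each factor's comparative importance relative to the one above it, and the weights fall out by normalization. A minimal sketch, using hypothetical comparative-importance values rather than the study's survey data:

```python
# A brief sketch of SWARA (step-wise weight assessment ratio analysis).
# The comparative-importance values s_j below are hypothetical, not the
# study's elicited data; the factor names echo the study's top three.

def swara_weights(factors, s_values):
    """factors: names ordered from most to least important.
    s_values: comparative importance s_j of factor j relative to the factor
    ranked just above it, for j >= 2 (len(s_values) == len(factors) - 1)."""
    k = [1.0] + [s + 1.0 for s in s_values]   # k_1 = 1, k_j = s_j + 1
    q = []
    for kj in k:                              # q_1 = 1, q_j = q_{j-1} / k_j
        q.append(1.0 if not q else q[-1] / kj)
    total = sum(q)
    return {f: qj / total for f, qj in zip(factors, q)}

weights = swara_weights(
    ["time_factor", "delayed_interpretation", "incorrect_diagnosis"],
    s_values=[0.15, 0.10],  # hypothetical expert judgments
)
print(weights)
```

Because each q_j divides the previous value by k_j >= 1, the weights are guaranteed to respect the expert ranking and sum to one after normalization.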
Article
Human error associated with medical device use may lead to devastating consequences for end-users. Identifying post-market associations between instances of use error can inform human factors design decisions and guide regulatory action. The US Food and Drug Administration (FDA) requires medical device manufacturers, importers, and user facilities to track and report instances of adverse events. These reports are available in the Manufacturer and User Facility Device Experience (MAUDE) database. MAUDE exists to support post-market surveillance and to aid the discovery of associations between adverse events and medical devices. Each MAUDE entry contains an event narrative: a free-text description of the event. These narratives are coded with a "device problem code" that describes the nature of each event and can aid in identifying trends; however, codes related to human factors are limited in detail. In the authors' prior work, new use error categories for MAUDE entries were proposed that provide a decomposition based on primary and secondary use error. In this work, these use error categories were used to structure entries based on narrative content. Topic modeling was performed for automatic extraction of narrative themes from use error MAUDE data from 2010-2019. Latent Dirichlet Allocation, an unsupervised generative model, was used to provide a descriptive analysis of this textual data and identify thematic topics. Notable outcomes included the categorization of narratives into six distinct topics; the first five primarily involved rule-based errors during the operation of glucose self-management devices, and the sixth involved knowledge-based errors during inpatient surgical procedures. Distinct divides between error narratives for healthcare providers and patients, as well as for different device types, were observed, demonstrating an alignment with the proposed use error categories. These categories can be used to monitor trends for specific medical device user segments and can inform device manufacturers of usability design requirements that must be addressed.
Chapter
This article addresses selected issues of the functional safety management of a hazardous process installation. A safety-related control system (SRCS), as part of the industrial automation and control system (IACS), nowadays plays an important role in reducing risks. Responsible tasks in abnormal and accident situations are executed by human operators who make use of an alarm system (AS) and its interface within the human system interface (HSI). This article outlines an approach for evaluating the human error probability (HEP) when interacting with the AS. It includes determining the required risk reduction expressed by the relevant safety integrity level (SIL). The SIL determined for a given safety function, to be implemented in the basic process control system (BPCS) and/or the safety instrumented system (SIS), must then be verified for the architectures considered. The HEP for the relevant human operator behaviour type is evaluated using the human cognitive reliability (HCR) model. Keywords: Industrial installations, Risk reduction, Functional safety, Industrial control system, Alarm system, Human factors, Human cognitive reliability
Article
Background: The outbreak of COVID-19 has adversely affected both the global economy and public health around the world. These effects have also been observed in many workplaces, including mines. Objective: This study aimed to examine the human error of copper miners during the pandemic. Method: This descriptive-analytical, cross-sectional study was performed on 192 workers of a copper mine in Iran. Occupational tasks were first analyzed using Hierarchical Task Analysis (HTA), and then human error in the different subunits was assessed using the basic Cognitive Reliability and Error Analysis Method (CREAM). The prevalence of COVID-19 among miners was determined by assessing positive PCR test records. Results: The probability of human error in the mining, crushing, processing, and support subunits was estimated to be 0.0056, 0.056, 0.0315, and 0.0177, respectively. All three operational units were found to be in the scrambled control mode, while the support unit was in the tactical control mode. Approximately 50% of all workers had been infected with COVID-19, with the highest prevalence in the support units. Conclusion: The results suggest that during the COVID-19 pandemic, copper miners are at higher risk of human error induced by poor working conditions. It is therefore recommended to employ management strategies such as promoting safety, monitoring health, and adopting supportive measures to control occupational stresses and thus the probability of human error in the mine's operational units.
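The basic CREAM quantification used in the study proceeds in three steps: score the common performance conditions (CPCs), determine the control mode, and read off Hollnagel's (1998) action failure probability interval. A hedged sketch; the control-mode thresholds in `control_mode` are a simplified stand-in for Hollnagel's published diagram, not an exact reproduction:

```python
# Sketch of basic CREAM: CPC scoring, control mode, HEP interval.
# HEP_INTERVALS follows Hollnagel (1998); control_mode() is a simplified
# approximation of Hollnagel's control-mode diagram (an assumption here).

HEP_INTERVALS = {  # action failure probability interval per control mode
    "strategic":     (0.5e-5, 1e-2),
    "tactical":      (1e-3,   1e-1),
    "opportunistic": (1e-2,   0.5),
    "scrambled":     (1e-1,   1.0),
}

def cpc_score(descriptors):
    """descriptors: one value per CPC, -1 (reduced performance),
    0 (not significant), or +1 (improved performance).
    Returns the triplet [reduced, not_significant, improved]."""
    return [descriptors.count(-1), descriptors.count(0), descriptors.count(1)]

def control_mode(reduced, improved):
    """Simplified stand-in for Hollnagel's diagram mapping the pair
    (sum reduced, sum improved) to a control mode."""
    if reduced > 5:
        return "scrambled"
    if reduced > 3:
        return "opportunistic"
    if improved > reduced + 3:
        return "strategic"
    return "tactical"

# Example CPC assessment yielding the triplet [3, 5, 1] as in the
# furnace start-up excerpt above
reduced, _, improved = cpc_score([-1, -1, -1, 0, 0, 0, 0, 0, 1])
mode = control_mode(reduced, improved)
print(mode, HEP_INTERVALS[mode])
```

For a real assessment the control mode should be read from Hollnagel's diagram (or the equivalent table) rather than the approximate thresholds above.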
Article
The objective of this paper is the optimization of failure management in manual assembly. An advanced System Dynamics model is applied to indicate the interactions between production and quality processes. To quantify the model, the determination of the probability of failure occurrence is considered a necessary prerequisite. In manual assembly, the failure occurrence is directly related to workers' failure. Therefore, this paper analyses existing methods for calculating the human failure probability and evaluates them based on their suitability for the application in manual assembly. Subsequently, the best-evaluated method is further developed to fulfil industrial requirements. A mathematical model is developed according to the specific requirements of manual assembly, which enables the determination of the probability of failure occurrence.
Chapter
Human performance is essential to the safe operation of systems for a wide range of industries including nuclear, chemical, oil and gas, aviation, and aerospace. To understand risk and manage safety, the scenarios that lead to hazards for people, equipment, and the environment are evaluated in risk assessments. In probabilistic risk assessments (PRAs), probabilities are estimated for scenarios and their elements to support the prioritization of risk contributors and the selection of system modifications to enhance safety. Within a PRA, the Human Reliability Analysis (HRA) addresses the potential failure events related to human interactions with the technical system. The scope of the human-system interactions addressed in HRA includes human actions to operate, maintain, and to respond to abnormal conditions and emergencies. This chapter presents the scientific basis for HRA and shows the relationship of HRA to the broader field of human factors and ergonomics. Additionally, this chapter provides an overview of HRA methods, the associated empirical data, and the structured analysis process used to identify the key human failure events (HFEs) relevant for safety, to characterize the conditions and factors that affect their occurrence, and to estimate their probabilities. The insights gained from the HRA process and its integration into PRA can be used to improve human performance as well as to identify modifications to hardware, equipment, and automation, with the overall aim of improving safety. Besides comprehensive HRA applications within PRA to evaluate the risk of facilities as a whole, HRA is also used in the design process to evaluate new or modified systems, functions, automation, and human-system interfaces. As a discipline, HRA applies risk and reliability techniques in order to evaluate the impact of human performance on the overall system. 
This includes human actions whose reliability has been improved by human factors, ergonomics, and human factors engineering; as well as actions within automated systems. The intent of this chapter is to provide the general reader a high-level understanding of how the human element is treated in risk assessments, to provide human factors experts an outline of how their science is transformed into probabilities and integrated in PRA, and to provide PRA practitioners a synopsis of science, empirical data, and judgment blended in the HRA process and outputs. The chapter closes with a summary of recent developments to further advance HRA and overall conclusions on the state of practice. Keywords: Human reliability analysis, Human factors, Probabilistic risk assessment, Quantitative risk assessment, Human failure events, Human error probability, Human performance data, Error of commission
Article
Human error is an important factor leading to nuclear power plant (NPP) accidents, and human reliability analysis (HRA) is considered an effective way to reduce human error. This paper therefore proposes a method to quantify human reliability based on the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method. First, the performance shaping factors of SPAR-H were used to build a human reliability model. Second, triangular fuzzy numbers were used to quantify the qualitative information of the root nodes, and fuzzy IF-THEN rules were used to determine the prior probability distributions of the intermediate nodes. Finally, Bayesian reasoning was used to quantify human reliability based on the model. The result of the developed method is consistent with that of the Cognitive Reliability and Error Analysis Method (CREAM), and the method can be used as a tool to quantify human reliability in NPP systems.
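The SPAR-H quantification that this paper's Bayesian model builds on has a simple closed form: the nominal HEP is multiplied by the PSF multipliers, with an adjustment factor applied when three or more PSFs are negative so the result stays below 1. A sketch with illustrative multiplier values (not taken from the paper):

```python
# Compact sketch of SPAR-H quantification (NUREG/CR-6883 style).
# The multiplier values in the example are illustrative only.

def spar_h_hep(nhep, multipliers):
    """HEP = NHEP x product of PSF multipliers. When three or more PSFs
    are negative (multiplier > 1), SPAR-H applies the adjustment
    HEP = NHEP*P / (NHEP*(P - 1) + 1) to bound the result below 1."""
    product = 1.0
    negative = 0
    for m in multipliers:
        product *= m
        if m > 1:
            negative += 1
    if negative >= 3:
        return nhep * product / (nhep * (product - 1) + 1)
    return min(nhep * product, 1.0)

# Diagnosis task (nominal HEP 0.01 in SPAR-H) with three degraded PSFs,
# e.g. stress x2, complexity x2, poor procedures x5 (illustrative values)
hep = spar_h_hep(0.01, [2, 2, 5])
print(hep)
```

Without the adjustment the example would give 0.01 x 20 = 0.2; the adjustment pulls it down slightly, and it becomes essential when the raw product would exceed 1.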