Article

The ‘new view’ of human error. Origins, ambiguities, successes and critiques

Abstract

Over the past two decades, the ‘new view’ has become a popular term in safety theory and practice. It has, however, also been criticised, provoking division and controversy. The aim of this article is to clarify the current situation. It describes the origins, ambiguities and successes of the ‘new view’, as well as the critiques it has attracted. The article begins by outlining the origins of this concept, in the 1980s and 1990s, in the cognitive (system) engineering (CSE) school initiated by Rasmussen, Hollnagel and Woods. This differed from Reason’s approach to human error in the same period. The article explains how Dekker, in the early 2000s, translated ideas from the CSE school to coin the term ‘new view’, while shortly afterwards developing an argument against Reason’s legacy that was more radical and critical than that of his predecessors. Secondly, the article describes the ambiguities associated with the term ‘new view’, which stem from the different programs that have derived from CSE (Resilience Engineering (RE) then Safety II, Safety Differently, Theory of Graceful Extensibility). The text identifies three programs by different thinkers (methodological, formal and critical) and Dekker’s three eclectic versions of the ‘new view’. Thirdly, the article discusses the successes of the CSE and RE school, showing how it has strongly resonated with many practitioners outside the academic world. Fourthly, the critiques raised within the field of human factors and system safety, but also from different traditions (e.g., system safety engineering with Leveson, sociology of safety with Hopkins), are introduced and discussed.

References
Article
Targeting the analysis of socio-technical complexity, the System-Theoretic Accident Model and Processes (STAMP) was developed to engineer safer systems. Since its inception in the early 2000s, STAMP and its associated techniques, namely the System-Theoretic Process Analysis (STPA) and the Causal Analysis based on System Theory (CAST), have attracted increasing interest as suitable approaches for safety studies. Nonetheless, a literature review on their applications is lacking. This paper fills this gap via a scoping literature survey on contributions indexed in academic journals and conference proceedings. Through a systematic analysis of 321 eligible documents, this research presents a comprehensive examination of relevant features for STAMP, STPA and CAST since 2003, such as the system type and domain of analysis, coverage and completeness of the analytical and methodological steps, respectively, as well as alterations and/or enrichments of their original versions. The bibliometric findings from primary, secondary and non-empirical research contributions are discussed to critically reflect on the past and present of STAMP and its associated techniques, and possible future directions are highlighted.
Article
This paper reviews the key perspectives on human error and analyses the core theories and methods developed and applied over the last 60 years. These theories and methods have sought to improve our understanding of what human error is, and how and why it occurs, to facilitate the prediction of errors and to use these insights to support safer work and societal systems. Yet, while this area of Ergonomics and Human Factors (EHF) has been influential and long-standing, the benefits of the ‘human error approach’ to understanding accidents and optimising system performance have been questioned. This state-of-science review analyses the construct of human error within EHF. It then discusses the key conceptual difficulties the construct faces in an era of systems EHF. Finally, a way forward is proposed to prompt further discussion within the EHF community. Practitioner statement: This state-of-science review discusses the evolution of perspectives on human error as well as trends in the theories and methods applied to understand, prevent and mitigate error. It concludes that, although a useful contribution has been made, we must move beyond a focus on individual error to systems failure in order to understand and optimise whole systems.
Article
This article explores what complementary perspectives Science and Technology Studies, and in particular Actor Network Theory, may bring to safety science beyond what comes out of traditional comparisons between the highly profiled theories/perspectives of Normal Accident Theory (NAT), High Reliability Organisations (HRO), Resilience Engineering (RE) and Safety II. In the article, the core ideas of NAT, HRO and RE/Safety II are reviewed, and the debates over NAT/HRO, HRO/RE and Safety I/Safety II are discussed. Thereafter, controversies over complexity, non-events and uncertainty are identified and elaborated, drawing on a richer repertoire from the social sciences, in particular Actor Network Theory. The article concludes by inviting more serious engagement in scientific controversies and the politics of safety, and operationalises this into three propositions: take complexity seriously; broaden the perspectives and methodologies for understanding sociotechnical work; and make safety science research politically oriented (again).
Article
This article provides a historical and critical account of James Reason’s contribution to safety research, with a focus on the Swiss cheese model (SCM), its developments and its critics. The article shows that the SCM is a product of specific historical circumstances, was developed over a ten-year period in several steps, and benefited from the direct influence of John Wreathall. Reason took part in intense intellectual debates and publications in the 1980s, during which many ideas circulated among researchers as influential as Donald Norman, Jens Rasmussen, Charles Perrow and Barry Turner. The 1980s and 1990s were highly productive from a safety research point of view (e.g. human error, incubation models, high reliability organisations, safety culture), and Reason considerably influenced this period with a rich production of models based on both research and industrial projects. Historical perspectives offer interesting insights because they can question research, the conditions of its production, its relevance and, sometimes, its success, as with the SCM. But because of this success, critics have vigorously debated some of the SCM’s limitations, including its simplistic vision of accidents and its degree of generality. Against these positions, the article develops a ‘critique of the criticism’ and concludes that the SCM remains a relevant model because of its systemic foundations and its sustained use in high-risk industries, despite, of course, the need to keep imagining alternatives based on the mix of collective empirical, practical and graphical research that lay in the SCM’s background.
Book
Nothing has been more prolific over the past century than human/machine interaction. Automobiles, telephones, computers, manufacturing machines, robots, office equipment, machines large and small; all affect the very essence of our daily lives. However, this interaction has not always been efficient or easy and has at times proved fairly hazardous. Cognitive Systems Engineering (CSE) seeks to improve this situation by the careful study of human/machine interaction as the meaningful behavior of a unified system. Written by pioneers in the development of CSE, Joint Cognitive Systems: Foundations of Cognitive Systems Engineering offers a principled approach to studying human work with complex technology. The authors use a top-down, functional approach and emphasize a proactive (coping) perspective on work that overcomes the limitations of the structural human information processing view. They describe a conceptual framework for analysis with concrete theories and methods for joint system modeling that can be applied across the spectrum of single human/machine systems, social/technical systems, and whole organizations. The book explores both current and potential applications of CSE, illustrated by examples. Understanding the complexities and functions of human/machine interaction is critical to designing safe, highly functional, and efficient technological systems. This is a critical reference for students, designers, and engineers in a wide variety of disciplines.
Book
Our fascination with new technologies is based on the assumption that more powerful automation will overcome human limitations and make our systems 'faster, better, cheaper,' resulting in simple, easy tasks for people. But how do new technology and more powerful automation change our work? Research in Cognitive Systems Engineering (CSE) looks at the intersection of people, technology, and work. What it has found is not stories of simplification through more automation, but stories of complexity and adaptation. When work changed through new technology, practitioners had to cope with new complexities and tighter constraints. They adapted their strategies and the artifacts to work around difficulties and accomplish their goals as responsible agents. The surprise was that new powers had transformed work, creating new roles, new decisions, and new vulnerabilities. Ironically, more autonomous machines have created the requirement for more sophisticated forms of coordination across people, and across people and machines, to adapt to new demands and pressures. This book synthesizes these emergent patterns through stories about coordination and mis-coordination, resilience and brittleness, affordance and clumsiness in a variety of settings, from a hospital intensive care unit, to a nuclear power control room, to a space shuttle control center. The stories reveal how new demands make work difficult, how people at work adapt but get trapped by complexity, and how people at a distance from work oversimplify their perceptions of the complexities, squeezing practitioners. The authors explore how CSE observes at the intersection of people, technology, and work, how CSE abstracts patterns behind the surface details and wide variations, and how CSE discovers promising new directions to help people cope with complexities. The stories of CSE show that one key to well-adapted work is the ability to be prepared to be surprised. Are you ready?
Article
The paper introduces the theory of graceful extensibility, which expresses fundamental characteristics of the adaptive universe that constrain the search for sustained adaptability. The theory explains the contrast between successful and unsuccessful cases of sustained adaptability for systems that serve human purposes. Sustained adaptability refers to the ability to continue to adapt to changing environments, stakeholders, demands, contexts, and constraints (in effect, to adapt how the system in question adapts). The key new concept at the heart of the theory is graceful extensibility. Graceful extensibility is the opposite of brittleness, where brittleness is a sudden collapse or failure when events push the system up to and beyond its boundaries for handling changing disturbances and variations. As the opposite of brittleness, graceful extensibility is the ability of a system to extend its capacity to adapt when surprise events challenge its boundaries. The theory is presented in the form of a set of 10 proto-theorems derived from just two assumptions: in the adaptive universe, resources are always finite and change continues. The theory contains three subsets of fundamentals: managing the risk of saturation, networks of adaptive units, and outmaneuvering constraints. The theory attempts to provide a formal base and common language that characterizes how complex systems sustain and fail to sustain adaptability as demands change.
Chapter
This chapter shows how one can go beyond spartan laboratory paradigms and study complex problem-solving behaviors without abandoning all methodological rigor. It describes how to carry out process tracing or protocol analysis methods as a “field experiment”.
Chapter
This chapter discusses cognitive systems engineering. To build a cognitive description of a problem-solving world, it is necessary to understand how representations of the world interact with the different cognitive demands imposed by the application world in question and with the characteristics of the cognitive agents, both for existing and prospective changes in the world. Building a cognitive description is part of a problem-driven approach to the application of computational power. In tool-driven approaches, knowledge acquisition focuses on describing domain knowledge in terms of the syntax of computational mechanisms; that is, the language of implementation is used as a cognitive language. Semantic questions are displaced either to whoever selects the computational mechanisms or to the domain expert who enters knowledge. The alternative is to provide an umbrella structure of domain semantics that organizes and makes explicit what particular pieces of knowledge mean for problem solving in the domain. Acquiring and using domain semantics is essential for avoiding potential errors and specifying performance boundaries when building intelligent machines.
Technical Report
The publication of the IOM report To Err is Human in 2000 served as a catalyst for a growing interest in improving the safety of health care. Yet despite decades of attention, activity and investment, improvement has been glacially slow. Although the rate of harm seems stable, increasing demand for health services, and the increasing intensity and complexity of those services (people are living longer, with more complex co-morbidities, and expecting higher levels of more advanced care) imply that the number of patients harmed while receiving care will only increase, unless we find new ways to improve safety. Most people think of safety as the absence of accidents and incidents (or as an acceptable level of risk). In this perspective, which we term Safety-I, safety is defined as a state where as few things as possible go wrong. A Safety-I approach presumes that things go wrong because of identifiable failures or malfunctions of specific components: technology, procedures, the human workers and the organisations in which they are embedded. Humans—acting alone or collectively—are therefore viewed predominantly as a liability or hazard, principally because they are the most variable of these components. The purpose of accident investigation in Safety-I is to identify the causes and contributory factors of adverse outcomes, while risk assessment aims to determine their likelihood. The safety management principle is to respond when something happens or is categorised as an unacceptable risk, usually by trying to eliminate causes or improve barriers, or both. This view of safety became widespread in the safety critical industries (nuclear, aviation, etc.) between the 1960s and 1980s. At that time performance demands were significantly lower than today and systems simpler and less interdependent. It was tacitly assumed then that systems could be decomposed and that the components of the system functioned in a bimodal manner—either working correctly or incorrectly. These assumptions led to detailed and stable system descriptions that enabled a search for causes and fixes for malfunctions. But these assumptions do not fit today’s world, neither in industries nor in health care. In health care, systems such as an intensive care or emergency setting cannot be decomposed in a meaningful way and the functions are not bimodal, neither in detail nor for the system as a whole. On the contrary, everyday clinical work is—and must be—variable and flexible. Crucially, the Safety-I view does not stop to consider why human performance practically always goes right. Things do not go right because people behave as they are supposed to, but because people can and do adjust what they do to match the conditions of work. As systems continue to develop and introduce more complexity, these adjustments become increasingly important to maintain acceptable performance. The challenge for safety improvement is therefore to understand these adjustments—in other words, to understand how performance usually goes right in spite of the uncertainties, ambiguities, and goal conflicts that pervade complex work situations. Despite the obvious importance of things going right, traditional safety management has paid little attention to this. Safety management should therefore move from ensuring that ‘as few things as possible go wrong’ to ensuring that ‘as many things as possible go right’. We call this perspective Safety-II; it relates to the system’s ability to succeed under varying conditions. 
A Safety-II approach assumes that everyday performance variability provides the adaptations that are needed to respond to varying conditions, and hence is the reason why things go right. Humans are consequently seen as a resource necessary for system flexibility and resilience. In Safety-II the purpose of investigations changes to become an understanding of how things usually go right, since that is the basis for explaining how things occasionally go wrong. Risk assessment tries to understand the conditions where performance variability can become difficult or impossible to monitor and control. The safety management principle is to facilitate everyday work, to anticipate developments and events, and to maintain the adaptive capacity to respond effectively to the inevitable surprises (Finkel 2011). In light of increasing demands and growing system complexity, we must therefore adjust our approach to safety. While many adverse events may still be treated by a Safety-I approach without serious consequences, there is a growing number of cases where this approach will not work and will leave us unaware of how everyday actions achieve safety. This may have unintended consequences, because it degrades the resources and procedures needed to make things go right. The way forward therefore lies in combining the two ways of thinking. While many of the existing methods and techniques can continue to be used, the assimilation of a Safety-II view will also require new practices to look for what goes right, to focus on frequent events, to maintain a sensitivity to the possibility of failure, to wisely balance thoroughness and efficiency, and to view an investment in safety as an investment in productivity. This White Paper helps explain the key differences between, and implications of, the two ways of thinking about safety.
Article
This paper deals with three issues. First, the question of the boundaries of safety science - what is in and what is out - is a practical question that journal editors and reviewers must respond to. I have suggested that there is no once-and-for-all answer. The boundaries are inherently negotiable, depending on the make-up of the safety science community. The second issue is the problematic nature of some of the most widely referenced theories or theoretical perspectives in our inter-disciplinary field, in particular normal accident theory, the theory of high reliability organisations, and resilience engineering. Normal accident theory turns out to be a theory that fails to explain any real accident. HRO theory is about why HROs perform as well as they do, and yet it proves to be impossible to identify empirical examples of HROs for the purpose of either testing or refining the theory. Resilience engineering purports to be something new, but on examination it is hard to see where it goes beyond HRO theory. The third issue concerns the paradox of major accident inquiries. The bodies that carry out these inquiries do so for the purpose of learning lessons and making recommendations about how to avoid such incidents in the future. The paradox is that the logic of accident causal analysis does not lead directly to recommendations for prevention. Strictly speaking, recommendations for prevention depend on additional argument or evidence going beyond the confines of the particular accident.
Book
Resilience engineering has consistently argued that safety is more than the absence of failures. Since the first book was published in 2006, several book chapters and papers have demonstrated the advantage of going behind 'human error' and beyond the failure concept, just as a number of serious accidents have accentuated the need to do so. But there has not yet been a comprehensive method for doing so; the Functional Resonance Analysis Method (FRAM) fulfils that need. Whereas commonly used methods explain events by interpreting them in terms of an already existing model, the FRAM is used to model the functions that are needed for everyday performance to succeed. This model can then be used to explain specific events, by showing how functions can be coupled and how the variability of everyday performance sometimes may lead to unexpected and out-of-scale outcomes - either good or bad. The FRAM is based on four principles: equivalence of failures and successes, approximate adjustments, emergence, and functional resonance. As the FRAM is a method rather than a model, it makes no assumptions about how the system under investigation is structured or organised, nor about possible causes and cause-effect relations. Instead of looking for failures and malfunctions, the FRAM explains outcomes in terms of how functions become coupled and how everyday performance variability may resonate. This book presents a detailed and tested method that can be used to model how complex and dynamic socio-technical systems work, to understand why things sometimes go wrong but also why they normally succeed.
Article
Scientific research accesses the past to predict the future, and the history of science is often best told by those who have lived it. Our purpose is to provide a brief history of human-automation interaction research, including a review of theories for describing human performance with automated systems, an accounting of automation effects on cognitive performance, a description of the origins of adaptive automation and key developments, and an identification of contemporary methods and issues in operator functional state classification. Based on this history and acknowledgement of the state of the art of human-automation interaction, future predictions are offered.
Article
Cognitive engineering needs viable constructs and principles to promote better understanding and prediction of human performance in complex systems. Three human cognition and performance constructs that have been the subjects of much attention in research and practice over the past three decades are situation awareness (SA), mental workload, and trust in automation. Recently, Dekker and Woods (2002) and Dekker and Hollnagel (2004; henceforth DWH) argued that these constructs represent “folk models” without strong empirical foundations and lacking scientific status. We counter this view by presenting a brief description of the large science base of empirical studies on these constructs. We show that the constructs can be operationalized using behavioral, physiological, and subjective measures, supplemented by computational modeling, but that the constructs are also distinct from human performance. DWH also caricatured as “abracadabra” a framework suggested by us to address the problem of the design of automated systems (Parasuraman, Sheridan, & Wickens, 2000). We point to several factual and conceptual errors in their description of our approach. Finally, we rebut DWH's view that SA, mental workload, and trust represent folk concepts that are not falsifiable. We conclude that SA, mental workload, and trust are viable constructs that are valuable in understanding and predicting human-system performance in complex systems.
Article
We report an organization's method for recruiting additional, specialized human resources during anomaly handling. The method has been tailored to encourage sharing adaptive capacity across organizational units. As predicted by Woods' theory, this case shows that sharing adaptive capacity allows graceful extensibility that is particularly useful when a system is challenged by frequent but unpredictably severe events. We propose that (1) the ability to borrow adaptive capacity from other units is a hallmark of resilient systems and (2) the deliberate adjustment of adaptive-capacity sharing is a feature of some forms of resilience engineering. We also identify some features of this domain that may lead to the discovery of resilience and promote resilience engineering in other settings, notably hospital emergency rooms.
Article
Safety culture has now been, for almost three decades, a highly promoted, advocated and debated but contentious notion. This article argues, first, that one needs to differentiate between two waves of studies, debates, controversies and positions: a first wave, roughly from the late 1980s/early 1990s to the mid-2000s, which brought an important distinction between interpretive and functionalist views of safety culture, and a second wave, from the mid-2000s to the present, which brings additional and alternative positions among authors. Four views, some more radical and critical, some more neutral and some more enthusiastic about safety culture, are differentiated in this article. It is contended that this evolution of the debate, this second wave of studies, should be understood within a broader historical and social context, characterised, borrowing insights from management studies, by patterns of interactions between academics, publishers, consultants, regulators and industries. In this context, safety culture appears in a new light, as a product among others (albeit a central one) of a safety field (and market) which is socially structured by this diversity of actors. This helps make sense of, first, the second wave of studies, debates, controversies and positions on safety culture of the past 15 years, as identified in this article. Second, approaching safety culture through this angle is an opportunity to question safety research more globally and, third, an occasion to pinpoint some of the currently unproblematised network properties of high-risk sociotechnical systems.
Chapter
INTRODUCTION: In the last decade, the difference between work as imagined and work as (actually) done has been important in the literature on safety and resilience. The salience and analytical power of this seemingly trivial distinction lies in the fact that it challenges organizational epistemes, the dominant discourses of work within the organizational and management literature, by underscoring that there is more to work than our descriptions of it. Work as done in practice can never be fully described or prescribed; there is always more to work in context than can be captured in formal descriptions. Though the perfect description of work as done is a futile aspiration rather than an achievable goal, we have found it useful in our own research to address situated work in its own right and to challenge the distanced, stereotyping descriptions of it that are generated from a distance by researchers, by management systems and by managers (Suchman, 1995). Understanding work as it is performed in real-life contexts is particularly important for understanding the variability of normal work that we associate with resilience. We argue that it is necessary to follow the call of Barley and Kunda (2001) to “bring work back in” to organizational theory, and to base our theories on empirical studies that seek to understand work as it is performed in real-life situations with their material, social and temporal particularities. In doing this, we are inspired by strands of research outside the realm of safety science, in the social sciences more generally. It is important to reflect critically on the discourses of work in organizations, not only on whether or not they are good and true representations, but also on how their pragmatics influence work and decision processes in organizations. For example, representations make some types and aspects of work organizationally visible while obscuring or even suppressing others. Also, as argued by Suchman (1987), it is necessary to understand the role that representations of actions (procedures, plans) have as resources for situated action. In this chapter, we address changes in the dominant discourses of work and their consequences for safety. First and foremost, we argue that there is an increasing tendency towards ever more detailed standardization. Timmermans and Epstein, drawing on Bowker and Star (2000), define standardization as a "process of constructing uniformities across time and space, through the generation of agreed-upon rules" (Timmermans and Epstein 2010: 71). While we concur with this general definition, we see a need to add an aspect to it: when standardization meets specialization across organizational boundaries, it introduces a logic where work processes are increasingly seen as discrete operations, as consisting of atomistic products to be delivered, and less as a dynamic flow of actions. This is accompanied by regimes of accountability, of ever more detailed reporting and control, also based on standards. Digitalization is a critical catalyst for the increased ubiquity and level of detail of standardization. Digitalization here means making use of digital technology to support the execution and control of work processes. Information infrastructures facilitate more detailed control through descriptions of work as consisting of atomistic standardized entities (see Hanseth and Monteiro, 1998; Bowker and Star, 2000; Almklov et al., 2014a). Moreover, digital systems have a performativity of their own.
They put constraints on action and exercise power in ways that a procedure on paper cannot, thus changing the dynamics between work as imagined and work as done. In the following, we discuss the trend towards increasing standardization in organizations, as an element of the dominant discourses of management and a general development in modern life, and how it meets the particularities of situated action. Thereafter we discuss the role of digital technologies, as enablers of detailed control, as carriers of a standardizing discourse, and in the possibilities they present for new forms of situated action. On the surface, standards seem neutral and technical, but they have politics: they constrain the leverage for work as actually done. The developments we discuss affect these politics, in that they increase the level of detail at which this is done. We also argue that the way standardized descriptions of work become part of the transactional coordination of work (e.g. in organizations relying on outsourcing of operational work), and the way they are inscribed in the digital systems through which work is performed, skew these power balances in ways that deserve attention from researchers interested in reliability, safety and resilience.
Chapter
The relevance and truth of theories are maintained only through hard work, which may involve modification, redescription and recontextualisation. During many years of dedication to research and practice in complex sociotechnical systems, including aviation, oil & gas, surgery and shipping, the safety literary canon has been a faithful companion offering a number of priceless concepts and tools to understand the meaning and phenomenology of safety. However, as all research and theory development owes its value to the empirical world, it is the people and their tools, materials, practices, systems, cognitive efforts and organising principles lending themselves to investigation that are the beginning and end of our research efforts, and the scale against which the relevance of theories is measured. Sensework represents a continuation of central safety themes associated with sociotechnical systems, and rearticulates and recontextualises topics such as couplings, adaptations and sensemaking in environments where developments in digitalisation, simulation and sociotechnical complexity have had a profound impact on organisations and their modus operandi. By treating visibility as a question of method rather than essence, sensework lends itself to empirical studies of work and safety in complex sociotechnical systems, offering a methodology and vocabulary for studying and analysing that which is otherwise often considered invisible.
Book
Safety Science Research: Evolution, Challenges and New Directions provides a unique perspective on the latest developments of safety science by putting together, for the first time, a new generation of authors with some of the pioneers of the field. Forty years ago, research traditions were developed, including, among others, high-reliability organisations, cognitive systems engineering and safety regulation. In a fast-changing world, the new generation introduces, in this book, new disciplinary insights, addresses contemporary empirical issues, and develops new concepts and models while remaining critical of safety research's practical ambitions. Their ideas are then reflected upon and discussed by some of the pioneers of safety science.
Article
High Reliability Organisations (HRO) and Resilience Engineering (RE) are two research traditions which have attracted a wide and diverse readership in the past decade. Both have reached the status of central contributions to the field of safety while sharing a similar orientation. This is not without creating tensions or questions, as expressed in the call for this special issue. The contention of this article is that these two schools introduce ways of approaching safety which need to be reflected upon in order to avoid simplifications and hasty judgments about their relative strengths, weaknesses or degree of overlap. HRO has gained strength and legitimacy from (1) studying high-risk systems ethnographically, with an organisational angle, (2) debating the principles that produce organisational reliability in the face of high complexity and (3) conceptualising some of these principles into a successful generic model of "collective mindfulness", with both practical and theoretical success. RE has gained strength and legitimacy from (1) harnessing then deconstructing, empirically and theoretically, the notion of 'human error', (2) arguing for a system (and complexity) view and discourse about safety/accidents, and (3) supporting this view with the help of (graphical) actionable models and methods (i.e. the engineering orientation). To show this, one has to go beyond the past 10 years of RE and include a longer time frame, going back to the 1980s and the early days of Cognitive Engineering (CE). The approach followed here therefore has a strong historical orientation, as a way to better understand the present situation, profile each school and promote complementarities while maintaining nuances.
Article
This paper describes three applications of Rasmussen's ideas to systems engineering practice. The first is the application of the abstraction hierarchy to engineering specifications, particularly requirements specification. The second is the use of Rasmussen's ideas in safety modeling and analysis to create a new, more powerful type of accident causation model that extends traditional models to better handle human-operated, software-intensive, sociotechnical systems. Because this new model has a formal, mathematical foundation built on systems theory (as was Rasmussen's original model), new modeling and analysis tools become possible. The third application is to engineering hazard analysis. Engineers have traditionally either omitted humans from consideration in system hazard analysis or treated them rather superficially, for example by assuming that they behave randomly. Applying Rasmussen's model of human error to a powerful new hazard analysis technique allows human behavior to be included in engineering hazard analysis.
Article
Human error is implicated in nearly all aviation accidents, yet most investigation and prevention programs are not designed around any theoretical framework of human error. Appropriate for all levels of expertise, the book provides the knowledge and tools required to conduct a human error analysis of accidents, regardless of operational setting (i.e. military, commercial, or general aviation). The book contains a complete description of the Human Factors Analysis and Classification System (HFACS), which incorporates James Reason’s model of latent and active failures as a foundation. Widely disseminated among military and civilian organizations, HFACS encompasses all aspects of human error, including the conditions of operators and elements of supervisory and organizational failure, and it attracts a very broad readership. Specifically, the book serves as the main textbook for a course in aviation accident investigation taught by one of the authors at the University of Illinois. It is also used in courses designed for military safety officers and flight surgeons in the U.S. Navy, Army and the Canadian Defense Force, who currently utilize the HFACS system during aviation accident investigations. Additionally, the book has been incorporated into the popular workshop on accident analysis and prevention provided by the authors at several professional conferences worldwide. The book is also targeted at students attending Embry-Riddle Aeronautical University, which has satellite campuses throughout the world and offers a course in human factors accident investigation for many of its majors, and it will be incorporated into courses offered by Transportation Safety International and the Southern California Safety Institute. Finally, this book serves as an excellent reference guide for many safety professionals and investigators already in the field.
Article
This book presents a set of new skills for the managers who drive safety in their workplace: Human Performance theory made simple. If you are starting a new program, revamping an old program, or simply interested in understanding more about safety performance, this guide will be extremely helpful.
Article
James Scott taught us what's wrong with seeing like a state. Now, in his most accessible and personal book to date, the acclaimed social scientist makes the case for seeing like an anarchist. Inspired by the core anarchist faith in the possibilities of voluntary cooperation without hierarchy, Two Cheers for Anarchism is an engaging, high-spirited, and often very funny defense of an anarchist way of seeing--one that provides a unique and powerful perspective on everything from everyday social and political interactions to mass protests and revolutions. Through a wide-ranging series of memorable anecdotes and examples, the book describes an anarchist sensibility that celebrates the local knowledge, common sense, and creativity of ordinary people. The result is a kind of handbook on constructive anarchism that challenges us to radically reconsider the value of hierarchy in public and private life, from schools and workplaces to retirement homes and government itself. Beginning with what Scott calls "the law of anarchist calisthenics," an argument for law-breaking inspired by an East German pedestrian crossing, each chapter opens with a story that captures an essential anarchist truth. In the course of telling these stories, Scott touches on a wide variety of subjects: public disorder and riots, desertion, poaching, vernacular knowledge, assembly-line production, globalization, the petty bourgeoisie, school testing, playgrounds, and the practice of historical explanation.
Article
What does the collapse of sub-prime lending have in common with a broken jackscrew in an airliner’s tailplane? Or the oil spill disaster in the Gulf of Mexico with the burn-up of Space Shuttle Columbia? These were systems that drifted into failure. While pursuing success in a dynamic, complex environment with limited resources and multiple goal conflicts, a succession of small, everyday decisions eventually produced breakdowns on a massive scale. We have trouble grasping the complexity and normality that gives rise to such large events. We hunt for broken parts, fixable properties, people we can hold accountable. Our analyses of complex system breakdowns remain depressingly linear, depressingly componential - imprisoned in the space of ideas once defined by Newton and Descartes. The growth of complexity in society has outpaced our understanding of how complex systems work and fail. Our technologies have gotten ahead of our theories. We are able to build things - deep-sea oil rigs, jackscrews, collateralized debt obligations - whose properties we understand in isolation. But in competitive, regulated societies, their connections proliferate, their interactions and interdependencies multiply, their complexities mushroom. This book explores complexity theory and systems thinking to understand better how complex systems drift into failure. It studies sensitive dependence on initial conditions, unruly technology, tipping points, diversity - and finds that failure emerges opportunistically, non-randomly, from the very webs of relationships that breed success and that are supposed to protect organizations from disaster. It develops a vocabulary that allows us to harness complexity and find new ways of managing drift.
Article
This introductory chapter sets the scene by raising the questions, making the case for their importance, suggesting how they might be resolved, and sketching the approach adopted in the chapters that follow; it also introduces the principal characters and some of their conflicting opinions on appropriate lines to take. It tells of the way in which children were regarded by adults and, in particular, the ways in which parents responded to their untimely deaths. In so doing, the chapter engages with Philippe Ariès' controversial ‘parental indifference hypothesis’ as well as the wider approach to death and dying in the past developed mainly by French historians, including Michel Vovelle, Pierre Chaunu, and Daniel Roche.
Article
Accident investigation and risk assessment have for decades focused on the human factor, particularly 'human error'. Countless books and papers have been written about how to identify, classify, eliminate, prevent and compensate for it. This bias towards the study of performance failures leads to a neglect of normal or 'error-free' performance and to the assumption that, as failures and successes have different origins, there is little to be gained from studying them together. Erik Hollnagel believes this assumption is false and that safety cannot be attained only by eliminating risks and failures. The ETTO Principle looks at the common trait of people at work to adjust what they do to match the conditions – to what has happened, to what happens, and to what may happen. It proposes that this efficiency-thoroughness trade-off (ETTO) – usually sacrificing thoroughness for efficiency – is normal. While in some cases the adjustments may lead to adverse outcomes, these are due to the very same processes that produce successes, rather than to errors and malfunctions. The ETTO Principle removes the need for specialised theories and models of failure and 'human error' and offers a viable basis for effective and just approaches to both reactive and proactive safety management.
Article
Situation awareness (SA) has become a widely used construct within the human factors community, the focus of considerable research over the past 25 years. This research has been used to drive the development of advanced information displays, the design of automated systems, information fusion algorithms, and new training approaches for improving SA in individuals and teams. In recent years, a number of papers criticized the Endsley model of SA on various grounds. I review those criticisms here and show them to be based on misunderstandings of the model. I also review several new models of SA, including situated SA, distributed SA, and sensemaking, in light of this discussion and show how they compare to existing models of SA in individuals and teams.
Article
Resilience is becoming a prevalent agenda in safety research and organisational practice. In this study we examine how the peer-reviewed literature (a) formulates the rationale behind the study of resilience; (b) constructs resilience as a scientific object; and (c) constructs and locates the resilient subject. The results suggest that resilience engineering scholars typically motivate the need for their studies by referring to the inherent complexities of modern socio-technical systems; complexities that make these systems inherently risky. The object of resilience then becomes the capacity to adapt to such emerging risks in order to guarantee the success and continuous performance of the inherently risky system. In the material reviewed, the subject of resilience is typically the individual, either at the sharp end or at higher managerial levels. These individuals are called upon to adapt in the face of risk to guarantee the success of the system. Based on these results on how resilience has been introduced into the safety sciences, we raise three ethical questions for the field to address: (1) should resilience be seen as people thriving despite, or because of, risk?; (2) should resilience theory form a basis for moral judgement?; and finally (3) how much should resilience be approached as a trait of the individual?
Article
Jens Rasmussen was a very influential thinker in the safety science field during the last quarter of the 20th century, especially in major hazard prevention. He shaped many of the basic assumptions regarding safety and accidents which are still held today, and many of his ideas underlie more recent advances in this field. Indeed, in the first decade of the 21st century, many have been inspired by his propositions and have pursued their own research agendas by using, extending or criticising his ideas. The author of numerous articles, book chapters and books, Rasmussen had an inspiring scientific research record spanning over 30 years and crossing the boundaries of many scientific disciplines. This article introduces selected elements of Rasmussen's legacy, including the SRK model, his theoretical approach to errors, the issue of investigating accidents, his model of migration and the sociotechnical view. It demonstrates that Jens Rasmussen provided key concepts for understanding safety and accidents, many of which are still relevant today. In particular, this article shows how principles such as degrees of freedom, self-organisation and adaptation, and the defence in depth fallacy, but also the notion of error as an 'unsuccessful experiment with unacceptable consequences', still offer powerful insights into the challenge of predicting and preventing major accidents. It is also argued that they combine into a specific interpretation of the 'normal accident' debate, anticipating current trends based on complexity lenses. Overall, Jens Rasmussen defined the contours of what is called here 'a strong program for a hard problem'.
Article
The need for cognitive engineering arises because the introduction of computers often radically changes the work environment and the cognitive demands placed on the worker. For example, increased automation in process control applications has resulted in a shift in the human role from controller to supervisor, monitoring and managing semi-autonomous resources. While this change reduces people's physical workload, mental load often increases as the human role emphasizes monitoring and compensating for failures. Thus, computerization creates a larger and larger world of cognitive tasks to be performed; more and more, we create or design cognitive environments. The new capabilities have led to a great deal of activity on tool building: how to build better-performing machine problem-solvers. The application of these tools creates new challenges about how to 'couple' human intelligence and machine power in a single integrated system that maximizes overall performance.
Book
Safety has traditionally been defined as a condition where the number of adverse outcomes was as low as possible (Safety-I). From a Safety-I perspective, the purpose of safety management is to make sure that the number of accidents and incidents is kept as low as possible, or as low as is reasonably practicable. This means that safety management must start from the manifestations of the absence of safety and that - paradoxically - safety is measured by counting the number of cases where it fails rather than by the number of cases where it succeeds. This unavoidably leads to a reactive approach based on responding to what goes wrong or what is identified as a risk - as something that could go wrong. Focusing on what goes right, rather than on what goes wrong, changes the definition of safety from ‘avoiding that something goes wrong’ to ‘ensuring that everything goes right’. More precisely, Safety-II is the ability to succeed under varying conditions, so that the number of intended and acceptable outcomes is as high as possible. From a Safety-II perspective, the purpose of safety management is to ensure that as much as possible goes right, in the sense that everyday work achieves its objectives. This means that safety is managed by what it achieves (successes, things that go right), and that likewise it is measured by counting the number of cases where things go right. In order to do this, safety management cannot only be reactive, it must also be proactive. But it must be proactive with regard to how actions succeed, to everyday acceptable performance, rather than with regard to how they can fail, as traditional risk analysis does. This book analyses and explains the principles behind both approaches and uses this to consider the past and future of safety management practices. The analysis makes use of common examples and cases from domains such as aviation, nuclear power production, process management and health care. The final chapters explain the theoretical and practical consequences of the new perspective on the level of day-to-day operations as well as on the level of strategic management (safety culture).
Article
This paper delves into the relationship between safety and constructivism. In the past 30 years, the constructivist discourse has become very popular but also controversial, as it challenges some key categories associated with modernity, such as reason, objectivity, truth and reality. In the safety literature, several works advocate its use. This paper has three objectives. The first is to reveal the existence of a constructivist discourse in the field of safety, by bringing together scattered pieces of work from different authors who endorse, and apply to various topics, a position labelled as constructivist. However, and this is the paper's second objective, it demonstrates that there is not only one constructivism, but several. To ground this contention, the paper proceeds with a multidisciplinary and historical approach, and argues that it is more appropriate not to conflate this diversity of constructivisms. The paper addresses this problem by providing a classification based on two groups of parameters, mild/strong and cognitive/social, defining four types. This step serves the third objective, which is to initiate a multifaceted constructivist program in safety composed of heterogeneous but related empirical and theoretical areas of investigation.