Article

Ironies of Automation

Author: Lisanne Bainbridge

Abstract

This paper discusses the ways in which automation of industrial processes may expand rather than eliminate problems with the human operator. Some comments will be made on methods of alleviating these problems within the 'classic' approach of leaving the operator with responsibility for abnormal conditions, and on the potential for continued use of the human operator for on-line decision-making within human-computer collaboration.


... Company-level case studies in labour sociology show that experience-based process knowledge plays an important role in advanced production processes, even in low-skilled jobs that do not require a professional qualification (Hirsch-Kreinsen, 2016). When workers lack holistic knowledge of the entire work process and its product, their ability to cope with unexpected situations and problems is limited (Bainbridge, 1983; Fischer and Boreham, 2009; Warnhoff and de Paiva Lareiro, 2019). The role of holistic process knowledge in assistance-system-guided learning has not been considered by previous research. ...
Article
Purpose: The purpose of this paper is to investigate how learning solely via an assistance system influences work performance compared with learning with a combination of an assistance system and additional training. While the training literature has widely emphasised the positive role of on-the-job training, particularly for groups that are often underrepresented in formalised learning situations, organisational studies have stressed the risks that emerge when holistic process knowledge is lacking and how this negatively affects work performance. This study aims at testing these negative effects within an experimental design.

Design/methodology/approach: This paper uses a laboratory experimental design to investigate how assistance-system-guided learning influences individuals' work performance and work satisfaction compared with assistance-system-guided learning combined with theoretical learning of holistic process knowledge. Subjects were divided into two groups and assigned to two different settings. In the first setting, the participants used the assistance system as an orientation and support tool right from the beginning and learned the production steps exclusively in this way. In the second setting, subjects received an additional 10-min introduction (treatment) at the beginning of the experiment, including detailed information regarding the entire work process.

Findings: This study provides evidence that learners provided with prior process knowledge achieve a better understanding of the work process, leading to higher levels of productivity, quality and work satisfaction. At the same time, the authors found evidence for differences in workers' ability to process and apply this additional information. Subjects with lower productivity levels faced more difficulties processing and applying additional process information.

Research limitations/implications: Methodologically, this study goes beyond existing research on assistance systems by using a laboratory experimental design. Though the external validity of this method is limited by the artificial setting, it is a solid way of studying the impact of different usages of digital assistance systems in terms of training. Further research is required, however, including laboratory experiments with larger case numbers, company-level case studies and analyses of survey data, to further confirm the external validity of the findings of this study for the workplace.

Practical implications: This study provides some first evidence that holistic process knowledge, even in low-skill tasks, has an added value for the production process. This study contributes to firms' training policies by exploring new, digitalised ways of guided on-the-job training and demonstrates possible training benefits for people with lower levels of (initial) abilities and motivation.

Social implications: This study indicates the advantage for companies and societies of investing in additional skills and training and points at the limitations of assistance systems. This paper also contributes to training policies by exploring new, digitalised ways of guided on-the-job training and demonstrates possible training benefits for people with lower levels of (initial) abilities and motivation.

Originality/value: This study extends existing research on digital assistance systems by investigating their role in job-related training. This paper contributes to labour sociology and organisational research by confirming the importance of holistic process knowledge as opposed to a solely task-oriented digital introduction.
... Connected technology, in the context of a pilot's cockpit, could unwittingly undo existing safety measures (i.e., changes to current checking/cross-checking procedures would need to be reviewed). Furthermore, there are also concerns relating to skill degradation (e.g., Bainbridge, 1983; Casner et al., 2014). The rise of automated datalink within the systems initialization process in commercial aviation means that pilots, in general, are no longer required to manually input all aspects of route planning. ...
Article
Full-text available
Growing interest in “connected services” is set to revolutionize the design of future transport systems. In aviation, connected portable Electronic Flight Bags (EFBs) would enable some of the traditional and more arduous preflight activities (e.g., route planning) to be conducted away from the flight deck. While this offers the opportunity to improve efficiency, any potential changes to the performance of the system need to be considered alongside the possible negative outcomes. The impact of EFBs on flight operations is assessed using Operator Event Sequence Diagrams (OESDs), which allow the operator interactions with technological systems to be mapped across different scenarios. This paper presents two OESDs: one focusing on current practice and one representing a “future” scenario whereby connected EFBs are commonplace. Our analysis predicts a 44% reduction in flight-crew operational loading due to increased connectivity in the flight deck. Not only does the analysis highlight the reduction in operations, but it also presents the utility of OESDs in the development of the connected EFBs of the future, as well as their broader use in understanding the impact of new technologies on performance.
... Lisanne Bainbridge's 1983 paper, the Ironies of Automation (Bainbridge 1983), was a telling and prescient summary of the many challenges that arise from automation. She pointed out the ways in which automation, paradoxically, makes the human's job more crucial and more difficult, rather than easier and less essential as so many engineers believe. ...
Article
Bainbridge's Ironies of Automation was a prescient description of automation-related challenges for human performance that have characterized much of the 40 years since its publication. Today a new wave of automation based on artificial intelligence (AI) is being introduced across a wide variety of domains and applications. Not only are Bainbridge's original warnings still pertinent for AI, but AI's very nature and focus on cognitive tasks have introduced many new challenges for people who interact with it. Five ironies of AI are presented, including difficulties with understanding AI and forming adaptations, opaqueness in AI limitations and biases that can drive human decision biases, and difficulties in understanding AI reliability, despite the fact that AI remains insufficiently intelligent for many of its intended applications. Future directions are provided to create more human-centered AI applications that can address these challenges.
... Nevertheless, this question is the central proposition considered here. In a year which now witnesses four decades of progress since Bainbridge's (1983) pivotal publication, it is more than pertinent to consider our current state of development, especially in light of ever-increasing degrees of task automation (Hancock 2014; Hancock, Chignell, and Loewenthal 1985; Hancock et al. 2013). This becomes an even more pointed necessity with the burgeoning growth of systemic levels of autonomy that have occurred in the intervening interval, and which continue apace today (and see Hancock 2017; Kaplan et al. 2023; Russell 2019; Tegmark 2017). ...
Article
Full-text available
Our long-accepted and historically persistent human narrative almost exclusively places us at the motivational centre of events. The wellspring of this anthropocentric fable arises from the unitary and bounded nature of personal consciousness. Such immediate conscious experience frames the heroic vision we have told to, and subsequently sold to, ourselves. But need this centrality necessarily be a given? The following work challenges this oft-unquestioned foundational assumption, especially in light of developments in automated, autonomous, and artificially intelligent systems. For, in these latter technologies, human contributions are becoming ever more peripheral and arguably unnecessary. The removal of the human operator from the inner loops of momentary control has progressed to an ever more remote function as some form of supervisory monitor. The natural progression of that line of evolution is the eventual excision of humans from access to any form of control loop at all. This may even include system maintenance and then, prospectively, even initial design. The present argument features a 'unit of analysis' provocation which explores the proposition that socially, and even ergonomically, the human individual no longer occupies priority or any degree of pre-eminent centrality. Rather, we are witnessing a transitional phase of development in which socio-technical collectives are evolving as the principal sources of what may well be profoundly unhuman motivation. These developing proclivities occupy our landscape of technological innovations that daily act to magnify, rather than diminish, such progressive inhumanities. Where this leaves a science focused on work as a human-centred enterprise serves to occupy the culminating consideration of the present discourse.
... A long history of research in human-automation interaction has identified many human factors issues when automation is introduced into the work process. Early work identified the ironies of automation (Bainbridge, 1983), where, for instance, automation can increase workload in already-high-workload tasks, or the automation chooses inappropriate actions due to a failure to understand the context of the situation. Automation can also cause other human factors issues such as decreases in situational awareness (Kaber et al., 1999), poor function allocation design (Dorneich et al., 2003), lack of automation system transparency (Woods, 2016; Dorneich et al., 2017), and miscalibrated trust (Lee and See, 2004). ...
Article
Full-text available
This paper developed human-autonomy teaming (HAT) characteristics and requirements by comparing and synthesizing two aerospace case studies (Single Pilot Operations/Reduced Crew Operations and Long-Distance Human Space Operations) and the related recent HAT empirical studies. Advances in sensors, machine learning, and machine reasoning have enabled increasingly autonomous system technology to work more closely with human(s), often with decreasing human direction. As increasingly autonomous systems become more capable, their interactions with humans may evolve into a teaming relationship. However, humans and autonomous systems have asymmetric teaming capabilities, which introduces challenges when designing a teaming interaction paradigm in HAT. Additionally, developing requirements for HAT can be challenging for future operations concepts, which are not yet well-defined. Two case studies conducted previously document analysis of past literature and interviews with subject matter experts to develop domain knowledge models and requirements for future operations. Prototype delegation interfaces were developed to perform summative evaluation studies for the case studies. In this paper, a review of recent literature on HAT empirical studies was conducted to augment the document analysis for the case studies. The results of the two case studies and the literature review were compared and synthesized to suggest the common characteristics and requirements for HAT in future aerospace operations. The requirements and characteristics were grouped into categories of team roles, autonomous teammate types, interaction paradigms, and training. For example, human teammates preferred the autonomous teammate to have human-like characteristics (e.g., dialog-based conversation, social skills, and body gestures to provide cue-based information). Even though more work is necessary to verify and validate the requirements for HAT development, the case studies and recent empirical literature enumerate the types of functions and capabilities needed for increasingly autonomous systems to act as a teammate to support future operations.
... But even in the case of jobs that are largely digitized and therefore automated, there are arguments for maintaining the competences that have been relevant up to now. These arguments are based primarily on non-routine work situations that occur again and again, because of the so-called "Ironies of Automation" postulated by Lisanne Bainbridge (1983). Bainbridge states that the attempt to eliminate the operator as a human source of error, which is mostly undertaken by system designers in the course of automation, is counteracted by the fact that system designers are also humans and thus transfer their own errors into the system, so that malfunctions will occur even in highly automated systems. ...
... Rather, the major insight of this body of work is that automation does not simply replace the human, but fundamentally changes the nature of the task, often in unexpected and unanticipated ways (Parasuraman & Riley, 1997). Disadvantages with automation have included mis-calibrated trust (Lee and See, 2004; Hoff and Bashir, 2015), reduced situation awareness (Kaber & Endsley, 2004), unbalanced mental workload, skill degradation (Bainbridge, 1983), and complacency and automation bias (Parasuraman & Manzey, 2010). ...
Article
In two studies, we evaluated the trust in and usefulness of automated parking compared to manual parking, using an experimental paradigm and by surveying owners of vehicles with automated parking features. In Study 1, we compared participants' ability to manually park a Tesla Model X to their use of the Autopark feature to complete perpendicular and parallel parking maneuvers. We investigated differences in parking success and duration, intervention behavior, self-reported levels of trust in and workload associated with the automation, as well as eye and head movements related to monitoring the automation. We found higher levels of trust in the automated parallel parking maneuvers compared to perpendicular parking. The Tesla's automated perpendicular parking was found to be less efficient than manually executing this maneuver. Study 2 investigated the frequency with which owners of vehicles with automated parking features used those features and probed why they chose not to use them. Vehicle owners reported low use of any automated parking feature. Owners further reported using their vehicles' autonomous parking features in ways consistent with the empirical findings from Study 1: higher usage rates of autonomous parallel parking. The results from both studies revealed that 1) automated parking is error-prone, 2) drivers nonetheless have calibrated trust in the automated parking system, and 3) the benefits of automated parallel parking surpass those of automated perpendicular parking with the current state of the technology.
... Driving automation (or vehicle automation) is becoming prevalent on the road, with the promise of yielding huge safety benefits. However, as described in human-automation interaction (HAI) research (Bainbridge 1983; Endsley 2017b; Hancock 2019; Norman et al. 1990; Parasuraman and Manzey 2010; Strauch 2018), it might create certain pitfalls that could compromise traffic safety. One is the notorious automation complacency, which describes human drivers using imperfect vehicle automation in an uncritical way, such that complacent drivers fail to notice automation failures and to deal with emergencies that vehicle automation cannot handle. ...
Article
Given that automation complacency, a hitherto controversial concept, is already used to blame and punish human drivers in current accident investigations and courts, it is essential to map complacency research in driving automation and determine whether current research can support its legitimate usage in these practical fields. Here, we reviewed its status quo in the domain and conducted a thematic analysis. We then discussed five fundamental challenges that might undermine its scientific legitimation: conceptual confusion exists in whether it is an individual versus systems problem; uncertainties exist in current evidence of complacency; valid measures specific to complacency are lacking; short-term laboratory experiments cannot address the long-term nature of complacency and thus their findings may lack external validity; and no effective interventions directly target complacency prevention. The Human Factors/Ergonomics community has a responsibility to minimize its usage and defend human drivers who rely on automation that is far from perfect.
... In addition, the findings stress the need for maintaining Common Ground with human road users, either through integrating common conventions into the intention-prediction algorithms of AVs or through dynamic coordination based on connectivity technologies. Without doubt, maintaining Common Ground with human road users will prove to be a key issue in avoiding coordination failures among human-driven and autonomous vehicles, or even new forms of automation failure that are obscure to human road users and thus more difficult to handle (Bainbridge, 1983; Woods, 1996). ...
Article
Full-text available
An observational analysis of crossing episodes between two intersecting vehicles, in which a third road user clearly affected their evolution, was conducted in an attempt to identify (i) recurring patterns of informal coordination among road users and (ii) traffic situational invariances that may inform AV prediction algorithms. The term BLOCK-EXPLOITING is introduced to describe a driver's exploitation of situational opportunities to gain priority, often contrary to regulatory provisions but favouring overall traffic efficiency. Video data from an urban stop-controlled intersection were analysed through the lens of joint systems theory using a phenomenological framework developed in this study. Four generic types of BLOCK-EXPLOITING were identified (i.e. covering, ghost-covering, piggybacking, sneaking). Covering and ghost-covering led to minimal or no delays, while piggybacking and sneaking, although abusive to other drivers, still only resulted in 1.99 to 3.33 s of delay. It is advocated that BLOCK-EXPLOITING can be socially acceptable. Proposed design challenges for AVs in mixed traffic include the ability to (i) distinguish BLOCK-EXPLOITING from errant driving, (ii) recognise to whom a 'space-offering' is addressed, and (iii) assess the appropriateness or abusiveness of a BLOCK-EXPLOITING action. Finally, this study brings to the fore very short-time-span joint-activity coordination requirements among diverse agents unknown to each other.
Chapter
Innovation strives for the ongoing production of novelty in a systematic manner (Rammert, W., Windeler, A., Knoblauch, H. & Hutter, M. (2018a): Expanding the Innovation Zone. In W. Rammert, A. Windeler, H. Knoblauch & M. Hutter (Eds.): Innovation Society Today. Perspectives, Fields, and Cases (pp. 1–11). Springer VS, p. 3). However, the very idea of novelty necessarily entails non-linear, contingent and therefore unpredictable elements, as it is concerned with futures that are not yet stabilized. This paper aims to reflect upon this relationship by asking how those futures appear in the making of innovations and how these images are shaped by ongoing processes of producing novelty. Comparing four empirical cases – the development of nudges, the discursive justification of ‘autonomous’ driving technology, the playful tinkering with technologies in hacking and making, and the construction of neo-traditional and futuristic buildings – we develop and answer three analytical questions: 1) What futures become effective in each respective case, and how can they be described? 2) Which ways lead to these futures, how can they be achieved? 3) How certain are these futures, what relationship with uncertainty do they entail? We show how ‘the future’ is better described as a plurality of ‘futures’, depending on the concrete realization of a certain innovation process. Hence, our paper suggests taking this heterogeneity into account when analyzing innovation processes. Moreover, we argue that such a comparison of diverging cases can yield important insights for the theory of the innovation society, as it demonstrates innovation’s general multiplicity and situational adaptability.
Article
Civic engagement is increasingly becoming digital. The ubiquity of computing increases our technologically mediated interactions. Governments have instated various digitization efforts to harness these new facets of virtual life. What remains to be seen is whether citizen political opinion, which can inform the inception and effectiveness of public policy, is being accurately captured. Civicbase is an open-source online platform that supports the application of Quadratic Voting Survey for Research (QVSR), a novel survey method. In this paper, we explore QVSR as an effective method for eliciting policy preferences, optimal survey design for prediction, Civicbase's functionalities and technology stack, and Personal AI, an emerging domain, and its relevance to modeling individual political preferences.
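The quadratic voting mechanism underlying QVSR gives each respondent a fixed budget of voice credits, and casting v votes for or against an item costs v² credits, which makes expressing strong preferences deliberately expensive. A minimal sketch of that cost rule follows; the 100-credit budget and policy names are illustrative assumptions, not details taken from Civicbase.

```python
# Sketch of the quadratic voting cost rule used in QVSR-style surveys.
# The 100-credit budget and the policy names below are illustrative assumptions.

def vote_cost(votes: int) -> int:
    """Casting |votes| votes (for or against) one issue costs votes**2 credits."""
    return votes * votes

def total_cost(allocation: dict[str, int]) -> int:
    """Total credits spent across all issues in a ballot."""
    return sum(vote_cost(v) for v in allocation.values())

def is_feasible(allocation: dict[str, int], budget: int = 100) -> bool:
    """A ballot is valid only if its total cost fits within the credit budget."""
    return total_cost(allocation) <= budget

# Strong support for one policy crowds out votes on the others:
ballot = {"transit funding": 6, "tax reform": -5, "zoning": 3}
print(total_cost(ballot))   # 36 + 25 + 9 = 70 credits
print(is_feasible(ballot))  # True under a 100-credit budget
```

Because the cost grows quadratically, doubling one's votes on a single issue quadruples its cost, which is what lets the method recover preference intensity rather than just direction.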
Chapter
Full-text available
There is much potential and promise for the use of artificial intelligence (AI) in healthcare, e.g., in radiology, mental health, ambulance service triage, sepsis diagnosis and prognosis, patient-facing chatbots, and drug and vaccine development. However, the aspiration of improving the safety and efficiency of health systems by using AI is weakened by a narrow technology focus and by a lack of independent real-world evaluation. It is to be expected that when AI is integrated into health systems, challenges to safety will emerge, some old, and some novel. Examples include design for situation awareness, consideration of workload, automation bias, explanation and trust, support for human–AI teaming, training requirements and the impact on relationships between staff and patients. The use of healthcare AI also raises significant ethical challenges. To address these issues, a systems approach is needed for the design of AI from the outset. Two examples are presented to illustrate these issues: 1. Design of an autonomous infusion pump and 2. Implementation of AI in an ambulance service call centre to detect out-of-hospital cardiac arrest.
Article
Automation failure is a key construct in human-automation interaction research. Yet the paucity of exposition on this construct has led to confusion about what sorts of failures are suitable for testing predictions of human performance in response to automation failure. We illustrate here how overly narrow or broad definitions of automation failure limit the explanatory power of human performance models in a way that is not obviously reasoned. We then review three aviation safety events that challenge the overly narrow definition. Reflecting on those events and other observations, we propose an initial taxonomy of automation failure and other automation-related human performance challenges. We conclude by pointing out the utility of the taxonomy for advancing human-automation interaction research.
Chapter
To systematically understand trends and applications of the human-machine interface in air traffic control, this study used data from the Web of Science Core Collection as a sample, adopted a scientometric method, and used VOSviewer and CiteSpace to draw an intuitive knowledge graph and complete a visualization analysis. The results reveal the research hotspots, trends and application status, and also show that the overall number of publications in the searched area is increasing; the USA, France, Germany, China and the Netherlands are outstanding in terms of output among countries; universities and colleges are the main research institutions, but the cooperation between research institutions and authors is not close enough; the research hotspots mainly focus on air traffic control, human-computer interaction, human-machine interface, automation, systems, performance, situation awareness, mental workload, etc.; and future research will focus on systems, automation, human factors, mental workload, air traffic management, performance, human-machine interface, situational awareness, etc. This study provides an in-depth analysis and interpretation of the trends and applications of the human-machine interface in air traffic control in an objective and quantitative manner, while also presenting a clear knowledge structure of the research topic to provide theoretical and practical guidance for subsequent scholars conducting research in this field.

Keywords: Human-Machine Interface, Air Traffic Control, Scientometric Analysis
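At its core, the keyword-mapping step that tools such as VOSviewer perform reduces to counting how often pairs of keywords appear together in the same bibliographic record; the resulting pair counts become the link weights of the knowledge graph. A minimal sketch of that counting step is below; the sample records are invented for illustration, and a real analysis would parse an exported Web of Science file rather than hard-code lists.

```python
from collections import Counter
from itertools import combinations

# Hypothetical bibliographic records, each a list of author keywords.
# A real pipeline would parse these from a Web of Science export.
records = [
    ["air traffic control", "automation", "situation awareness"],
    ["air traffic control", "mental workload", "automation"],
    ["human-machine interface", "automation"],
]

def cooccurrence(records):
    """Count how often each unordered keyword pair occurs in the same record."""
    pairs = Counter()
    for kws in records:
        # sorted() gives a canonical order so (a, b) and (b, a) are one key.
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

links = cooccurrence(records)
# "air traffic control" and "automation" co-occur in two records:
print(links[("air traffic control", "automation")])  # 2
```

Clustering these weighted links (e.g. by modularity) is what surfaces the "research hotspot" groupings the abstract describes.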
Chapter
Full-text available
In recent years, the growth of cognitively complex systems has motivated researchers to study how to improve these systems' support of human work. At the same time, there is momentum for introducing Artificial Intelligence (AI) in safety-critical domains. The Air Traffic Control (ATC) system is a prime example of a cognitively complex safety-critical system where AI applications are expected to support air traffic controllers in performing their tasks. Nevertheless, the design of AI systems that effectively support humans poses significant challenges. Central to these challenges is the choice of the model of how air traffic controllers perform their tasks, to which AI algorithms are notoriously sensitive. The design of AI systems should be informed by knowledge of how people think and act in the context of their work environment. In this line of reasoning, the present study has set out to propose a framework of cognitive functions of air traffic controllers that can be used to effectively support adaptive Human-AI teaming. Our aim was to emphasize the "staying in control" element of ATC. The proposed framework is expected to have meaningful implications for the design and effective operationalization of Human-AI teaming projects in ATC operations rooms.

Keywords: Air Traffic Control, Cognitive Systems Engineering, Artificial Intelligence
Chapter
In this paper, the framework of sociodigital sovereignty and a corresponding classification matrix are presented. Both have been developed on the basis of action regulation and sociotechnical theories in order to analyze and design different aspects of sociodigital sovereignty within sociotechnical systems. By using this matrix, it is possible to identify and address biases and potential conflicts in terms of transparency, reliability, trust, and fairness. The sociotechnical approach presented here addresses three aspects – human, technology and organization – and is complemented by an action-theoretical perspective that includes the three aspects of "transparency/explainability", "confidence of action/efficiency" and "freedom of action/divergence". This results in a matrix of nine fields in which different facets of sociodigital sovereignty are systematically addressed (e.g. Hartmann & Shajek, 2023). Furthermore, the implications resulting from a use case are presented in this paper. There, the classification matrix was used in a workshop to analyze a highly automated technical system. Finally, future developments of the framework and the classification matrix are outlined.

Keywords: sociodigital sovereignty, explainable AI, sociotechnical systems, action regulation theories
Chapter
Artificial Intelligence (AI) offers the potential to transform our lives in radical ways. In particular, when AI is combined with the rapid development of mobile communication and advanced sensors, autonomous driving (AD) can make great progress. In fact, Autonomous Vehicles (AVs) can mitigate some shortcomings of manual driving, but at the same time the underlying technology is not yet mature enough to be widely applied in all scenarios and for all types of vehicles. In this context, the traditional SAE levels of automation (J3016B: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles – SAE International. Available online: https://www.sae.org/standards/content/j3016_201806/) can lead to uncertain and ambiguous situations, yielding great risk in the control of the vehicle. Human drivers should therefore be supported in taking the right decision, especially in those edge cases where automation can fail. A decision-making system is well designed if it can augment human cognition and emphasize human judgement and intuition. It is worth noting here that such systems should not be considered teammates or collaborators, because humans are responsible for the final decisions and actions, but the technology can assist them, reducing workload, raising performance and ensuring safety. The main objective of this paper is to present an intelligent decision support system (IDSS) that provides the optimal decision about which action is best to perform, using an explainable and safe paradigm based on AI techniques.

Keywords: Decision Making, Human-Centered Artificial Intelligence, Autonomous Driving
Chapter
Even as our ability to counter cyber attacks improves, it is inevitable that threat actors may compromise a system through exploited vulnerabilities and/or user error. Aside from material losses, cyber attacks also undermine trust. Self-Driving Cars (SDCs) are expected to revolutionize the automotive industry, and high levels of human trust in such safety-critical systems are crucial if they are to succeed. Should adverse experiences occur, SDCs will be particularly vulnerable to the loss of trust. This paper presents findings from an initial experiment which is part of an ongoing study exploring how fully autonomous Level 5 SDCs would be blamed and trusted in the event of a cyber attack. To do this, a future-thinking-based methodology was used. Participants were presented with a series of randomly ordered hypothetical news headlines about SDC cyber incidents. After reading each headline, they were required to rate their trust and assign blame. Twenty different hypothetical SDC cyber incidents were created and manipulated between participants through the use of cyber-security-specific terminology (e.g. hackers) versus non-specific terminology. This was manipulated to investigate whether the wording of a reported incident, i.e. being explicitly or overtly cyber (versus not explicitly, or covertly, cyber), affected trust and blame. Overall trust ratings in SDC technology in the context of a cyber incident were low across both conditions, which has the potential to impact uptake and adoption. While there was no significant overall difference in trust between the overtly and covertly cyber conditions, indications for further lines of inquiry were evident, including differences between some of the scenarios. In terms of blame, attribution was varied and context-dependent, but across both conditions the SDC company was blamed the most for the cyber incidents.

Keywords: Self-Driving Cars, Cyber Security, Trust
Chapter
Speeding is the primary cause of traffic accidents. To improve road safety, the European Union introduced a new regulation mandating that all new vehicles coming to the EU market must be equipped with an Intelligent Speed Assistance (ISA) system from 2022 onwards. However, the rule did not cover existing vehicles on the roads. Our research aims to fill this gap by investigating user experiences and acceptance of a retrofit system developed by V-tron, a Dutch company. Seven participants signed up for our study and our technicians installed the ISA system on their cars. They then used the cars for more than one month and reported their experiences weekly. We also recruited a driving school and conducted a focus group with five instructors. Using interviews and questionnaires to collect their first-person experiences, we found that all participants acknowledged the vision and potential of ISA systems in reducing speeding and improving traffic safety. While the retrofit system is easy to use, the technology needs to improve in accuracy and robustness. The overruling mechanism also needs to minimize latency and consider secondary users unfamiliar with the speed control. Three design concepts are proposed to improve user experiences and ultimately promote the adoption of ISA systems. Keywords: Speeding, Driving Assistance System, Technology Acceptance Model, User Experience
Chapter
Autonomous systems are developed for both civilian and military applications. To investigate the use of semi-autonomous ground and air units in mechanized combat and their control by voice commands, a case study with two participants was conducted. The study took place in a simulation environment, and voice commands were implemented by Wizard of Oz (WoZ) procedures. The objective was to compare control of semi-autonomous units by voice commands with control by communication with human operators that controlled non-autonomous units. The results show that control of units by communication with human operators was more efficient and less demanding, since the human operators understood the situation and adapted accordingly. Discussions with the participants indicated that efficient control of autonomous units by voice commands requires higher levels of autonomy and fewer and simpler voice commands than implemented in the present study. The study also demonstrated that the simulation environment and WoZ procedures are applicable to testing control of semi-autonomous units by voice commands. Keywords: autonomous systems, voice control, simulation, UGV, UAV
Chapter
This chapter provides an overview of human error, from conceptual models to research into why highly skilled individuals make errors, as well as how those errors can be systematic and somewhat predictable. Research on human error is examined, from the early studies of Fitts and Jones (Analysis of factors contributing to 270 “pilot error” experiences in operating aircraft controls (Report TSEAA-694-12A). Aero Medical Laboratory, Air Material Command, Wright-Patterson Air Force Base, OH: Aeromedical Lab, Dayton, 1947) concerning WWII aviators to the classic vigilance decrement findings, as well as various models of chains of events that lead to human error within complex adaptive systems. Examples from several real-world scenarios are included. Keywords: Human error, Vigilance decrement, Swiss cheese model, Risk management
Chapter
The set of traditional characteristic features of large-scale complex systems (LSS) included the large number of variables, the structure of interconnected subsystems, and other features that complicate the control models, such as nonlinearities, time delays, and uncertainties. Advances in information and communication technologies (ICT) and modern business models have led to important evolution in the concepts and the corresponding management and control infrastructures of large-scale complex systems. The last three decades have highlighted several new characteristic features, such as networked structure, enhanced geographical distribution associated with the increased cooperation of subsystems, evolutionary development, higher risk sensitivity, the presence of more, possibly conflicting, objectives, and security and environment concerns. This chapter aims to present a balanced review of several traditional well-established methods (such as multilevel and decentralized control) and modern control solutions (such as cooperative and networked control) for LSS, together with technology developments and new application domains, such as the smart city with heating and water distribution systems, and environmental monitoring and protection. Particular attention is paid to automation infrastructures and associated enabling technologies, together with security aspects. Keywords: Complex systems, Cloud Computing, Decision support, Environment protection, ICT, Interconnected systems, Internet of Things, Networked control, Smart City
Chapter
Designers frequently look toward automation as a way to increase system efficiency and safety by reducing human involvement. This approach can disappoint because the contribution of people often becomes more, not less, important as automation becomes more powerful and prevalent. More powerful automation demands greater attention to its design, supervisory responsibilities, system maintenance, software upgrades, and automation coordination. Developing automation without consideration of the human operator can lead to new and more catastrophic failures. For automation to fulfill its promise, designers must avoid a technology-centered approach that often yields strong but silent forms of automation, and instead adopt an approach that considers the joint operator-automation system and yields more collaborative, communicative forms of automation. Automation-related problems arise because introducing automation changes the type and extent of feedback that operators receive, as well as the nature and structure of tasks. Also, operators’ behavioral, cognitive, and emotional responses to these changes can leave the system vulnerable to failure. No single approach can address all of these challenges because automation is a heterogeneous technology. There are many types and forms of automation and each poses different design challenges. This chapter describes how different types of automation place different demands on operators. It also presents strategies that can help designers achieve the promised benefits of automation. The chapter concludes with future challenges in automation design and human interaction with increasingly autonomous systems. Keywords: Automation design, Vehicle automation, Mental models, Supply chains, Trust
Chapter
A review of aerospace systems automation is provided with an emphasis on examples and design principles. First, a discussion of aerospace systems manufacturing automation is provided, followed by a discussion of automation in the operation of aerospace systems, including aircraft and air traffic control. The chapter provides guidance to managers, engineers, and researchers tasked with studying or building aerospace systems. Keywords: Automation, Air traffic control, Flight deck, Aircraft manufacturing, Airspace systems, Additive manufacturing
Chapter
The growing importance of digitalization in business also has an impact on automation. In this section, we put this into the perspective of significant market drivers and different views on aspects of automation, across the industrial value chain and along the automation lifecycle. Key trends in digitalization are having an impact on automation. Making data available for more applications through the industrial Internet of Things (industrial IoT) and cyber-physical systems will be covered. Such systems expand the traditional automation stack with more advanced (wired and wireless) communication technologies, and the computing power and storage available in the cloud further extend capacity for additional functionality. An important use of data is artificial intelligence. We cover AI’s impact on the industry and how it plays into automation functionality. The context of the data and its analysis is brought together in the concept of the digital twin, which is covered as well. The availability of these technologies, from IoT to AI, allows for an evolution of automation toward more autonomous systems. Autonomy is today mostly seen in vehicles; we put it into the context of the industrial automation system and show how systems (robots among others) will become more autonomous and gain the capability to collaborate. This will open a whole new range of applications, enabled by a bright future of automation. Keywords: Digitalization, Industrial Internet of Things, Cyber-physical systems, Collaborative robots, Industrial AI, Digital twin, Autonomous systems, Collaborative systems
Chapter
Automated systems can provide tremendous benefits to users; however, there are also potential hazards that users must be aware of to safely operate and interact with them. To address this need, safety warnings are often provided to operators and others who might be placed at risk by the system. This chapter discusses some of the roles safety warnings can play in automated systems. During this discussion, the chapter addresses some of the types of warnings that might be used, along with issues and challenges related to warning effectiveness. Design recommendations and guidelines are also presented. Keywords: False alarm, Warning system, Material safety data sheet, Fault tree analysis, American National Standards Institute
Article
Full-text available
Artificial General Intelligence (AGI) is the next and forthcoming evolution of Artificial Intelligence (AI). Though there could be significant benefits to society, there are also concerns that AGI could pose an existential threat. The critical role of Human Factors and Ergonomics (HFE) in the design of safe, ethical, and usable AGI has been emphasized; however, there is little evidence to suggest that HFE is currently influencing development programs. Further, given the broad spectrum of HFE application areas, it is not clear what activities are required to fulfill this role. This article presents the perspectives of 10 researchers working in AI safety on the potential risks associated with AGI, the HFE concepts that require consideration during AGI design, and the activities required for HFE to fulfill its critical role in what could be humanity's final invention. Though a diverse set of perspectives is presented, there is broad agreement that AGI potentially poses an existential threat, and that many HFE concepts should be considered during AGI design and operation. A range of critical activities are proposed, including collaboration with AGI developers, dissemination of HFE work in other relevant disciplines, the embedment of HFE throughout the AGI lifecycle, and the application of systems HFE methods to help identify and manage risks.
Article
In this article, we propose to treat agency as something which is accomplished in the entanglement of humans with technologies. This redirects our attention away from the question of what distinguishes humans from smart machines and towards querying how people and automated apparatuses join in processes of mutual sociomaterial engagement. To further our argument, we look at self-service kiosks, which are ubiquitous yet largely overlooked components of mediated environments. We reflect on participant observation in grocery stores and interviews with customers familiar with self-checkout facilities. They make us aware that operating this equipment is not an individual affair but a joint activity by default, taking place in a temporally regimented setting prone to human errors and malfunction when people try to respond to the terminals’ protocol. This sort of imperfect automation has ambivalent ramifications which rely on the capabilities of users and the capacities of an interface and its underlying operations. Agency, we conclude, thus becomes a matter of viable performance in which humans may act machine-like while machines perform an expanding share of activities.
Chapter
The Cambridge Handbook of Computational Cognitive Sciences is a comprehensive reference for this rapidly developing and highly interdisciplinary field. Written with both newcomers and experts in mind, it provides an accessible introduction of paradigms, methodologies, approaches, and models, with ample detail and illustrated by examples. It should appeal to researchers and students working within the computational cognitive sciences, as well as those working in adjacent fields including philosophy, psychology, linguistics, anthropology, education, neuroscience, artificial intelligence, computer science, and more.
Article
Full-text available
Since the early days of artificial intelligence (AI), many logics have been explored as tools for knowledge representation and reasoning. In the spirit of the Crossley Festschrift and recognizing John Crossley’s diverse interests and his legacy in both mathematical logic and computer science, I discuss examples from my own research that sit in the overlap of logic and AI, with a focus on supporting human–AI interactions.
Article
Full-text available
Robot use in older people’s medication management – an interview study with community nurses in Sweden. The aim of this study was to describe nurses’ experiences of using robots in medication management among older persons. Twelve nurses were interviewed by telephone, using an interview guide. Collected data were analysed using an inductive qualitative content analysis. Three main categories were identified. Creating independence regards how the medicine dispensing robot contributes to the independence and autonomy of older people, as well as increased responsibility, engagement, and feelings of security. Increased patient safety means that the right patient gets the right medicine at the right time, contributing to fewer adverse drug events. Saving resources highlights savings, both regarding the environment and human resources, as staff spend less time administering medication. The use of medicine dispensing robots can generate gains on individual, group, and organizational levels. However, it is important that implementation and use are individually adjusted.
Chapter
Full-text available
Artificial intelligence and Assistance Systems are having an impact on the economy, society, skilled work and work environment. However, there are often very different assessments of the effects: On the one hand the loss of jobs and even professions have been predicted, on the other hand new support and options for work are emerging. The actual promotion of these systems will depend on the opportunities of intervention and control by skilled workers. How can problem situations and imponderabilities in virtual environments be handled and solved? Both the opportunities and the risks of Artificial Intelligence and assistance systems for vocational education and training are reflected in this article.
Chapter
Full-text available
Digital work is becoming ubiquitous across a range of fields, ranging from production to services. Besides the effects of automation on the job market, it changes job contents and job demands for those holding jobs. Such jobs are characterized by high information load, higher levels of autonomy, performance diversity and growth potential. Respective jobs, tasks and work environments are often characterized with the term complexity. Paradigms, strategies, tools, and practices of work design must keep up with the affordances of so-called complex sociotechnical systems. However, understanding and conceptualization of complexity in work design are still rather superficial. In healthcare, sometimes labeled as a paradigm for complexity, a rising dissatisfaction with this state can be noticed and a lack of progress in patient safety is lamented. Drawing upon systems theory and its variant systems thinking, an integrated approach to work design is sketched out with reference to healthcare. This approach allows for a more systematic treatment of complexity with its two main strategies of complexity reduction and complexity management. Finally, the transfer of this approach into teaching is discussed within the field of work & organizational psychology at a university of applied science.
Article
Full-text available
Abstract: As digitalization advances, numerous studies have been conducted to identify new competence requirements. Few, however, have addressed existing competence requirements that remain relevant, above all in non-routine situations (NRS). In NRS, skilled workers must mobilize a wealth of knowledge and skills ad hoc in order to make decisions quickly and competently. Owing to automation, however, this knowledge and these skills are rarely needed in routine operation and are therefore in danger of being forgotten. This problem has already been investigated in high-risk industries with a high degree of automation, but so far no empirical studies exist for chemical or pharmaceutical production. The present research project aims to close this gap. Practical relevance: If non-routine situations (NRS) in chemical and pharmaceutical production are not handled competently, this can lead to (sometimes high) costs or endanger people. As work increasingly takes place in automated work environments, the risk grows of losing precisely those competences that are relevant in NRS. Capturing this problem precisely (which competences are affected, etc.) and identifying the possible influencing factors (type of work task, individual dispositions of the skilled workers, etc.) is the basis for planning and implementing measures to prevent this loss of competence.
Article
Full-text available
Vehicle automation promises to reduce the demands of the driving task, making driving less fatiguing, more convenient, and safer. Nevertheless, Level 3 automated vehicles rely on a human driver to be ready to resume control, requiring the driver to reconstruct situation awareness (SA) and resume the driving task. Understanding the interaction between non-driving-related task (NDRT) use, SA, and takeover capacity is important because an effective takeover is entirely dependent on, and scaffolds from, effectively reconstructed SA. While a number of studies have looked at the behavioural impact of being ‘in- or on-the-loop’, fewer consider the cognitive impact, particularly the consequences for SA. The present study exposed participants to an extended simulated automated drive involving two critical takeover scenarios (early- and late-drive). We compared automated vehicle (AV) operators who were required to passively monitor the vehicle to those engaging with self-selected NDRTs. Monitoring operators demonstrated lower total- and schema-specific SA count scores following a fatiguing drive compared to those engaging with self-selected NDRTs. NDRT engagement resulted in no significant difference in SA count scores early- and late-drive. Assessment of differences in the type and sensory modality of NDRTs indicated operators make fundamentally different selections about the NDRTs they engage with in an automated driving environment compared to a manual environment. The present study provides further evidence linking SA and AV operator behaviour and underscores the need to understand the role of SA in takeover capacity. Our findings suggest that although SA declines over time regardless of driving task requirements (Monitoring versus NDRT engagement), NDRT use may facilitate better SA construction, with implications for the regulation of NDRT use in AVs as the technology progresses.
Article
Full-text available
The “industry of the future” highlights technological transformation via the digital revolution as a major axis for building future industrial performance, promising flexibility and improved working conditions. Many of these technologies aim to be autonomous. We question this desire for machine autonomy and, especially, the principles on which it is based. We present the main results of three autonomous device design projects: the introduction of an industrial cobot, the design of an autonomous agricultural robot, and the design of autonomous shuttles. For each of these three projects, we demonstrate the limits of autonomy when it means leaving aside the design of the relationships between the entities, including the autonomous ones, in the future system. By avoiding these issues, the designers reduce the complexity of their task. But the contextual conditions that favour the effectiveness of the performance targeted by the introduction of the technology are then often missing. In such conditions, the human operator remains the main adjustment variable through which integration, albeit degraded, remains possible. In line with previous proposals in activity ergonomics, we defend the importance of addressing the question of the relationship very early on when designing devices that integrate autonomous technology. To this end we defend the utility of a design approach based on the extension of action capacities, as well as the utility of paying greater attention to the political dimension of projects and thus to the vision of future work that these emerging technologies implicitly support.
Article
Full-text available
The article proposes an analytical perspective on artificial intelligence (AI) that can be fruitful in the sociology of work. The practical logic of new forms of AI (connectionist AI) is described as an interplay of social and technical processes of opening and closing possibilities of knowledge and action. In order to develop this argument, it is first shown in which sense AI can be understood as a contingency-generating technology in socio-technical contexts. The architecture based on neural networks is elaborated as a decisive feature of connectionist AI that not only opens up technical possibilities but can also shape social processes and structures by ‘selectivity’. However, this shaping does not take place solely on the part of the AI, but only becomes apparent in the interplay with specific restrictions that lie both in the social context of use and in the algorithmic architecture of the AI itself. For research in the sociology of work, this means that contingency theory approaches must be linked with approaches that emphasise the limits of (‘intelligent’) digitalisation. The yield of such a perspective is outlined in relation to the control of work with AI.
Article
The advent of automated and algorithmic technology requires people to consider them when assigning responsibility for something going wrong. We focus on a focal question: who or what should be responsible when both human and machine drivers make mistakes in human–machine shared-control vehicles? We examined human judgments of responsibility for automated vehicle (AV) crashes (e.g., the 2018 Uber AV crash) caused by the distracted test driver and malfunctioning automated driving system, through a sequential mixed-methods design: a text analysis of public comments after the first trial of the Uber case (Study 1) and vignette-based experiment (Study 2). Studies 1 and 2 found that although people assigned more responsibility to the test driver than the car manufacturer, the car manufacturer is not clear of responsibility from their perspective, which is against the Uber case’s jury decision that the test driver was the only one facing criminal charges. Participants allocated equal responsibility to the normal driver and car manufacturer in Study 2. In Study 1, people gave different and sometimes antagonistic reasons for their judgments. Some commented that human drivers in AVs will inevitably feel bored and reduce vigilance and attention when the automated driving system is operating (called “passive error”), whereas others thought the test driver can keep attentive and should not be distracted (called “active error”). Study 2’s manipulation of passive and active errors, however, did not influence responsibility judgments significantly. Our results might offer insights for building a socially-acceptable framework for responsibility judgments for AV crashes.
Article
Echoing past waves of transformation, the public sphere is awash with anxiety about automation now driven by the rise of intelligent machines. Emerging technologies encompass a wider and wider range of work, and the disruptions that will accompany the transformation of work involve pressing problems for research and practice. Communication scholarship is distinctively well equipped for the study of automation today because communication itself is increasingly the focus of automation, because the automation of work is a communication process, and because deliberations about automation will shape how we manage those disruptions. This article reviews scholarship in communication that focuses on automation, highlighting research that focuses on communication as the substance of automation, discourse about automation, and communicative practice of automation.
Chapter
No longer limited to the factory hall, automation and digitization increasingly change, complement, and replace the human workplace also in the sphere of knowledge work. Technology offers the possibility of creating economically rational, autonomously acting software—the machina economica. This complements human beings who are far from being a rational homo economicus and whose behavior is biased and prone to errors. This includes behaviors that lack responsibility and sustainability. Insights from behavioral economics suggest that in the modern workplace, humans who team up with a variety of digital assistants can improve their decision-making to achieve more corporate social responsibility. Equipped with artificial intelligence (AI), machina economica can nudge human behavior to arrive at more desirable outcomes. Following the idea of augmented human-centered management (AHCM), this chapter outlines underlying mechanisms, opportunities, and threats of AI-based digital nudging.
Article
Full-text available
Research on employee turnover since L. W. Porter and R. M. Steers's analysis of the literature reveals that age, tenure, overall satisfaction, job content, intentions to remain on the job, and commitment are consistently and negatively related to turnover. Generally, however, less than 20% of the variance in turnover is explained. Lack of a clear conceptual model, failure to consider available job alternatives, insufficient multivariate research, and infrequent longitudinal studies are identified as factors precluding a better understanding of the psychology of the employee turnover process. A conceptual model is presented that suggests a need to distinguish between satisfaction (present oriented) and attraction/expected utility (future oriented) for both the present role and alternative roles, a need to consider nonwork values and nonwork consequences of turnover behavior as well as contractual constraints, and a potential mechanism for integrating aggregate-level research findings into an individual-level model of the turnover process. (62 ref)
Article
Full-text available
As human and computer come to have overlapping decisionmaking abilities, a dynamic or adaptive allocation of responsibilities may be the best mode of human-computer interaction. It is suggested that the computer serve as a backup decisionmaker, accepting responsibility when human workload becomes excessive and relinquishing responsibility when workload becomes acceptable. A queueing theory formulation of multitask decisionmaking is used and a threshold policy for turning the computer on/off is proposed. This policy minimizes event-waiting cost subject to human workload constraints. An experiment was conducted with a balanced design of several subject runs within a computer-aided multitask flight management situation with different task demand levels. It was found that computer aiding enhanced subsystem performance as well as subjective ratings. The queueing model appears to be an adequate representation of the multitask decisionmaking situation, and to be capable of predicting system performance in terms of average waiting time and server occupancy. Server occupancy was further found to correlate highly with the subjective effort ratings.
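The threshold policy summarized above can be illustrated with a small slotted-time simulation. This is a sketch only: the arrival and service probabilities, the hysteresis rule, and the function name are illustrative assumptions, not the paper's actual queueing-theory formulation.

```python
import random

def simulate_threshold_policy(threshold, p_arrival=0.6, p_human=0.5,
                              p_computer=0.5, steps=200_000, seed=1):
    """Sketch of a threshold policy for computer-backed multitask
    decision-making: the computer is switched on when the task backlog
    exceeds `threshold` and off once the backlog clears.

    Returns (mean queue length, fraction of slots the computer is on);
    mean queue length stands in for event-waiting cost, and the
    engagement fraction reflects how often the backup is needed.
    """
    rng = random.Random(seed)
    queue = 0
    total_queue = 0
    computer_slots = 0
    computer_on = False
    for _ in range(steps):
        # One task may arrive per slot (Bernoulli approximation
        # of a Poisson arrival stream).
        if rng.random() < p_arrival:
            queue += 1
        # Threshold policy with hysteresis: engage the computer when
        # workload becomes excessive, release it once the backlog clears.
        if queue > threshold:
            computer_on = True
        elif queue == 0:
            computer_on = False
        # The human serves one waiting task with probability p_human ...
        if queue > 0 and rng.random() < p_human:
            queue -= 1
        # ... and the computer serves another while engaged.
        if computer_on and queue > 0 and rng.random() < p_computer:
            queue -= 1
        total_queue += queue
        computer_slots += computer_on
    return total_queue / steps, computer_slots / steps
```

With an arrival probability above the human's service probability, the unaided queue grows without bound, whereas the aided system stays stable; lowering the threshold trades more computer engagement for shorter waits, mirroring the cost-versus-workload trade-off the abstract describes.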
Article
Since complete automation may be a Utopian idea, the control engineer has to cope with man/machine systems. Examples are given of cases where human factors influence technical design. The interaction with social and political changes is also indicated. Social scientists have much to offer to control engineers. A brief survey is given of progress in the scientific analysis of human capabilities, limitations, needs and motivations. Experimental techniques specific to the social sciences are also touched upon. The design of man/machine systems is discussed, taking human factors into account right from the start. Modern technology can catalyse changes resulting from the application of job enrichment, group technology and worker participation to eliminate some human problems at work. Finally, reference is made to the recommendations of the IFAC Workshop on Productivity and Man.
Article
Rational design of a process control system using an on-line computer requires a definition of the total control task and an allocation of function between the human operator and the machine. A knowledge of the historical development of the role assigned to the human operator provides useful guidance in making the allocation decision. This development is described, with emphasis on the function performed by the operator in modern computer control systems, on the importance of different process characteristics, on the increased understanding of the operator's role obtained from attempts to automate it completely and on the need to choose appropriate systems when carrying out experimental studies of the operator.
Chapter
The generally safe and dependable commercial aviation industry has never had properly designed Caution and Warning Systems (CAWS) to alert the aircrew to operational or system malfunctions or emergency situations. When flight systems were simpler, relatively crude CAWS were manageable. Today, however, the complexity and size of modern avionics systems makes it crucial to have optimal systems to alert the crew to problems, and to assist them in handling them.
Chapter
The classical formula for training is simple enough. To train someone to do anything requires only: (1) opportunities to practise; (2) tests to check performance after practice; and, if practice and testing do not of themselves suffice, (3) hints, explanations or other information not intrinsic to performing the task. Industrial fault diagnosis training can present serious difficulties on all three counts.
Chapter
Within the context of this conference, we want to know the factors which affect human ability to detect and diagnose process failures, as a basis for console and job design. Understandably, human factors engineers want fully specified models for the human operator’s behaviour. It is also understandable that these engineers should make use of modelling procedures which are available from control engineering. These techniques are attractive for two reasons. They are sophisticated and well understood. They have also been very successful at giving first-order descriptions of human compensatory tracking performance in fast control tasks such as flying. In this context they are sufficiently useful for the criticism, that they are inadequate as a psychological theory of this behaviour, to be irrelevant for many purposes. Engineers have therefore been encouraged to extend the same concepts to other types of control task. In this paper we will be considering particularly the control of complex slowly changing industrial processes, such as steel making, petrochemicals and power generation.
Article
Systems whose failure can cause loss of life or large economic loss need to be tolerant to faults (i.e. faults in system hardware, software, and procedures). Examples of such systems include airplane autopilots in the automatic landing mode, electricity utility power generation plants, and telephone electronic switching systems (ESS). Such systems are characterized by high reliability; they fail infrequently and recover quickly when a fault does occur. The user usually cannot respond fast enough if and when a fault is detected. Even if he could respond, his proficiency would not be high because the fault occurs infrequently.
Article
This chapter discusses a comparative study of different man–machine systems for human control tasks. Potential man–machine problems originate in the design phase of the construction process. With the help of data obtained in an interview with a member of the technical management, a number of characteristics of the plant hardware, the control system, and the man–machine interface are formulated. Some formal characteristics of the organizational system are obtained by means of an interview with a member of the management. The factor achievement in the job satisfaction questionnaire is positively related to the dimensions activities (ACT), controllability of the process (CONT), and system ergonomics (ERG). The present analysis may lead to the conclusion that a comparative study of quite different man–machine systems, which implies an analysis at the level of the system rather than of the individual operator, can provide meaningful results regarding the human aspects of man–machine systems.
Article
The rapid technological advancements of the past decade, and the availability of higher levels of automation which resulted, have aroused interest in the role of man in complex systems. Should the human be an active element in the control loop, operating manual manipulators in response to signals presented by various instruments and displays? Or should the signals be coupled directly into an automatic controller, relegating the human to the monitoring role of a supervisor of the system’s operation?
Article
This symposium with its title, Human Detection and Diagnosis of System Failures, clearly implies that, at least in the immediate future, complex systems may have to resort to the skills of the human operator when problems arise during operation. However, the human attributes particularly appropriate to faultfinding are not inherent in the organism; operators of complex systems must be trained if they are to be efficient diagnosticians. This paper describes the development of a training programme specifically designed to help trainee process operators learn to recognise process plant breakdowns from an array of control room instruments. Although developed originally to train fault-finding in the context of continuous process chemical plant, it is probable that the techniques we are going to describe may prove to be equally effective in other industries. For example, power plants, crude oil refineries and oil production platforms all involve continuous processes which are operated from a central control room.
Article
The paper describes (1) the development of a simulator and (2) the first results of a training technique for the identification of plant failures from control panel indications. Input or signal features of the task present more simulation fidelity problems than its response or output features. Current techniques for identifying effective signals, e.g. ‘blanking-off’ information or protocol analysis, bias any description of problem solving since they require serial reporting, if not serial collection, of information by the operator. They also require inferences as to what is an effective item of information. It is therefore argued that simulation should preserve all those features which may in principle provide, or influence acquisition of, diagnostic information, specifically panel layout, instrument design and display size. Further fidelity problems are the stress from operating in a dangerous environment; stress from hazards or sanctions following mistaken diagnosis; and the stress of diagnosing in a short time interval. The simulator uses back-projection to life size of slides of control panel mock-ups by a random access projector. Under an adaptive cumulative part regime, trainees saw on average 89 failure arrays in 30 min, an obvious advantage over the operational situation. In a test 24 hr after training, consisting of the eight faults each presented four times in random order, 4 out of 17 trainees made only one error in 32 diagnoses; the other trainees performed perfectly. Subjects' reports indicate very different solution strategies, e.g. recognition of alarm patterns, or serial instrument checking determined by heuristics of plant functioning. Several features of performance are consistent with the view that trainees use a minimal number of dimensions for correct discrimination and that these change as the number of different fault arrays increases. It is argued that this training regime should reduce stress.
In particular it is argued that, according to current theories of stress, the fewer dimensions needed for diagnosis, the more robust will be diagnostic performance in dangerous environments.
Article
Reviews research on adult age differences in human memory, conducted largely within the framework of current theoretical views of memory and organized in terms of the topics and concepts suggested by these approaches. Because the literature on memory and aging is now so extensive, the review is selective, focusing on topics of current debate and largely on research reported in the last 10 years. Topics include approaches to the study of memory (memory stores, processing models, memory systems); empirical evidence on sensory and perceptual memory, short-term and working memory, and age differences in working memory; age differences in encoding (including qualitative differences in encoding); age differences in retrieval; age differences in nonverbal memory; age differences in memory of the past and for the future; and aging and memory systems. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Modes of human-computer interaction in the control of dynamic systems are discussed, and the problem of allocating tasks between human and computer considered. Models of human performance in a variety of tasks associated with the control of dynamic systems are reviewed. These models are evaluated in the context of a design example involving human-computer interaction in aircraft operations. Other examples include power plants, chemical plants, and ships.
Article
Typescript. Thesis--University of Illinois at Urbana-Champaign. Vita. Includes bibliographical references (leaves 102-108). Photocopy.
Article
A full mission simulation of a civil air transport scenario that had two levels of workload was used to observe the actions of the crews and the basic aircraft parameters and to record heart rates. The results showed that the number of errors was very variable among crews but the mean increased in the higher workload case. The increase in errors was not related to rise in heart rate but was associated with vigilance times as well as the days since the last flight. The recorded data also made it possible to investigate decision time and decision order. These also varied among crews and seemed related to the ability of captains to manage the resources available to them on the flight deck.
Article
The paper analyzes the role of human factors in flight-deck automation, identifies problem areas, and suggests design guidelines. Flight-deck automation using microprocessor technology and display systems improves performance and safety while leading to a decrease in size, cost, and power consumption. On the other hand, negative factors such as failure of automatic equipment, automation-induced error compounded by crew error, crew error in equipment set-up, failure to heed automatic alarms, and loss of proficiency must also be taken into account. Among the problem areas discussed are automation of control tasks, monitoring of complex systems, psychosocial aspects of automation, and alerting and warning systems. Guidelines are suggested for designing, utilising, and improving control and monitoring systems. Investigation into flight-deck automation systems is important because the knowledge gained can be applied to other systems such as air traffic control and nuclear power generation, but the many problems encountered with automated systems need to be analyzed and overcome in future research.
Article
In order to study the effects different logic systems might have on interrupted operation, an algebraic-notation calculator and a reverse Polish notation (RPN) calculator were compared when trained users were interrupted during problem entry. The RPN calculator showed markedly superior resistance to interruption effects compared to the algebraic calculator, although no significant differences were found when the users were not interrupted. Causes of, and possible remedies for, interruption effects are speculated upon. It is proposed that, because interruption is such a common occurrence, it be incorporated into comparative evaluation tests of different logic systems and control/display systems, and that interruption resistance be adopted as a specific design criterion.
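The structural difference behind this finding can be sketched with a minimal RPN evaluator in Python. This is an illustrative reconstruction, not the study's apparatus: the function name and token format are assumptions made for the example.

```python
def eval_rpn(tokens):
    """Evaluate a reverse-Polish expression given as a list of tokens,
    e.g. ['3', '4', '+'] for 3 + 4."""
    ops = {
        '+': lambda a, b: a + b,
        '-': lambda a, b: a - b,
        '*': lambda a, b: a * b,
        '/': lambda a, b: a / b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # second operand entered last
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# (1 + 2) * 4 + 5 - 3 in RPN:
eval_rpn(['5', '1', '2', '+', '4', '*', '+', '3', '-'])  # → 14.0
```

The sketch suggests one plausible reading of the interruption result: in RPN entry every partial result is held explicitly on the stack, so an interrupted user can resume from visible state, whereas algebraic entry keeps pending operations and precedence implicit in the user's head.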
Article
A four-stage model is presented for the control-mode man-computer interface dialogue. It consists of context development, semantic development, syntactic development, and command execution. Each stage is discussed in terms of operator skill levels (naive, novice, competent, and expert) and pertinent human factors issues. These issues are human problem solving, human memory, and schemata. The execution stage is discussed in terms of the operator's typing skills. This model provides an understanding of the human process in command-mode activity for computer systems and a foundation for relating system characteristics to operator characteristics.
Article
Much ergonomics research is published in non-archival form, e.g. government reports. Sometimes such reports are withheld from general circulation because they are judged to be militarily sensitive. Thus, potentially useful information becomes restricted to a limited number of scientists who are on an initial distribution list. Worse, since the work reported in such papers is not referenced, it goes unknown among a large population of workers who have entered the field since the first, limited publication and who have no way of knowing of its existence. Results of experiments carried out some years ago have been rewritten for publication in Applied Ergonomics. The reasons for this are that: (a) the original reports have been regarded as "unclassified" and (b) the substantive problem, the effects of dividing tasks between men and computers in an on-line information system, continues to be of interest to ergonomists and others.
Article
A computer algorithm employing fading-memory system identification and linear discriminant analysis is proposed for real-time detection of human shifts of attention in a control and monitoring situation. Experimental results are presented that validate the usefulness of the method. Application of the method to computer-aided decisionmaking in multitask situations is discussed.
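The two components named in this abstract can be sketched as follows: fading-memory (exponentially weighted) recursive least squares tracks the operator's input-output parameters over time, and a Fisher linear discriminant then classifies the current parameter vector as "attending" or "attention shifted". This is a generic reconstruction of those standard techniques under assumed data shapes, not the authors' code; all names are illustrative.

```python
import numpy as np

def fading_rls(phi, y, lam=0.95):
    """Fading-memory recursive least squares.
    phi: (T, n) regressor rows, y: (T,) measured outputs,
    lam: forgetting factor (< 1 discounts old data exponentially).
    Returns the (T, n) track of parameter estimates."""
    n = phi.shape[1]
    theta = np.zeros(n)
    P = np.eye(n) * 1e3          # large initial covariance: weak prior
    track = np.empty((len(y), n))
    for t in range(len(y)):
        x = phi[t]
        k = P @ x / (lam + x @ P @ x)          # gain
        theta = theta + k * (y[t] - x @ theta) # correct by prediction error
        P = (P - np.outer(k, x @ P)) / lam     # inflate covariance: forget
        track[t] = theta
    return track

def fit_lda(X0, X1):
    """Two-class Fisher linear discriminant on parameter vectors.
    Returns weight vector w and threshold c; classify class 1 if w @ x > c."""
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)
    c = w @ (m0 + m1) / 2
    return w, c
```

In an attention-monitoring setting of the kind the abstract describes, the forgetting factor is what makes the detector "real-time": recent behaviour dominates the identified parameters, so a shift of attention shows up quickly as a drift of the parameter vector across the discriminant boundary.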
Bainbridge, L. Mathematical equations or processing routines?
Duncan, K. D. Training for fault diagnosis in industrial process plant.
Jervis, M. W. and R. H. Pope (1977). Trends in operator-process communication development. Central Electricity Generating Board, E/REP/054/77.
Thompson, D. A. (1981). Commercial air crew detection of system failures: state of the art and future trends. In J. Rasmussen and W. B. Rouse (Eds.), op. cit., pp. 37-48.
Wiener, E. L. and R. E. Curry (1980). Flight-deck automation: promises and problems. Ergonomics, 23, 995.
Ephrath, A. R. (1980). Verbal presentation. NATO Symposium on Human Detection and Diagnosis of System Failures, Roskilde, Denmark.
Johannsen, G. and W. B. Rouse (1981). Problem solving behaviour of pilots in abnormal and emergency situations. Proc. 1st European Ann. Conf. on Human Decision Making and Manual Control, Delft University, pp. 142-150.
Mackworth, N. H. Researches on the measurement of human performance. In Selected Papers on Human Factors in the Design and Use of Control Systems.