Article

Ironies of Automation

Authors:
L. Bainbridge

Abstract

This paper discusses the ways in which automation of industrial processes may expand rather than eliminate problems with the human operator. Some comments will be made on methods of alleviating these problems within the 'classic' approach of leaving the operator with responsibility for abnormal conditions, and on the potential for continued use of the human operator for on-line decision-making within human-computer collaboration.

... Well-being & Health (9.15%, N=14): This domain refers to the management and prevention of health-related disorders and illnesses, or interactions with health data or with healthcare providers. One thread of work involves assisting practitioners in providing better care. For example, Yang et al. [176] designed a GPT-3-based decision support tool that draws on the biomedical literature to generate AI suggestions. ...
... On the other hand, Liao et al. [98] interviewed 23 UX practitioners to explore the design space around LLMs supporting ideation, including their needs around model transparency. While some health-related conditions may fall under accessibility, such as chronic illness [47], we decide according to how the condition was treated: papers that adopt a social model of the condition or disability (i.e. that the incompatible design of society with the person's condition is the "problem") are Accessibility, and those that adopt a medical model (i.e. that the person's condition is the "problem") are classified here under Well-being & Health [52]. ...
... Across tasks, validation via human or formal methods is often needed to quality-check an LLM's outputs. These evaluations are vital, but the human effort needed to structure and faithfully execute them may exceed the utility of using the LLM in the first place, what Bainbridge [5] calls the "automation trap". How will the performance of your LLM-powered research tool affect the validity of your research? ...
Preprint
Full-text available
Large language models (LLMs) have been positioned to revolutionize HCI, by reshaping not only the interfaces, design patterns, and sociotechnical systems that we study, but also the research practices we use. To-date, however, there has been little understanding of LLMs' uptake in HCI. We address this gap via a systematic literature review of 153 CHI papers from 2020-24 that engage with LLMs. We taxonomize: (1) domains where LLMs are applied; (2) roles of LLMs in HCI projects; (3) contribution types; and (4) acknowledged limitations and risks. We find LLM work in 10 diverse domains, primarily via empirical and artifact contributions. Authors use LLMs in five distinct roles, including as research tools or simulated users. Still, authors often raise validity and reproducibility concerns, and overwhelmingly study closed models. We outline opportunities to improve HCI research with and on LLMs, and provide guiding questions for researchers to consider the validity and appropriateness of LLM-related work.
... In 2023, the Ergonomics journal released a special issue commemorating the 40th anniversary of the publication of Bainbridge's (1983) 'Ironies of Automation', one of the most influential and highly cited papers in the field of human factors and ergonomics (HF/E). As discussed in Bainbridge's (1983) article, automated devices are intended to reduce the physical and mental burdens on human users and make their jobs easier and more satisfying. Instead of performing manual work or struggling under information overload, humans in the system theoretically need only supervise automated devices and ensure they operate satisfactorily. ...
... They also note that the use of the automated tool increases the need for higher technical expertise of the moderators. This example suggests that, like in other automated industrial processes, semi-automated moderation can reduce human workload while making oversight more crucial and complex (Bainbridge, 1983). ...
Article
Full-text available
This article explores the human rights standards relevant to ensuring human involvement requirements in EU legislation related to automated content moderation. The opinions given by different experts and human rights bodies emphasise the human rights relevance of the way in which platforms distribute automated and human moderators in their services. EU secondary legislation establishes basic requirements for these structures that are called to be read under a human rights perspective. This article examines the justifications given for incorporating human involvement in content moderation, the different types of human involvement in content moderation, and the specific requirements for such involvement under EU secondary law. Additionally, it analyses the human rights principles concerning procedural safeguards for freedom of expression within this legal framework.
... At around the same time, Bainbridge [56] published her seminal 'Ironies of Automation' article, which highlighted some of the key dilemmas of human-automation pairing that still exist today and are relevant to human-AI teaming. As an example, as automation increases, human work can require exhausting monitoring tasks, so that rather than needing less training, operators need to be trained more to be ready for the rare but crucial interventions. ...
Article
Full-text available
The advent of Artificial Intelligence in the cockpit and the air traffic control centre in the coming decade could mark a step-change improvement in aviation safety, or else could usher in a flush of 'AI-induced' accidents. Given that contemporary AI has well-known weaknesses, from data biases and edge or corner effects, to outright 'hallucinations', in the mid-term AI will almost certainly be partnered with human expertise, its outputs monitored and tempered by human judgement. This is already enshrined in the EU Act on AI, with adherence to principles of human agency and oversight required in safety-critical domains such as aviation. However, such sound policies and principles are unlikely to be enough. Human interactions with current automation in the cockpit or air traffic control tower require extensive requirements, methods, and validations to ensure a robust (accident-free) partnership. Since AI will inevitably push the boundaries of traditional human-automation interaction, there is a need to revisit Human Factors to meet the challenges of future human-AI interaction design. This paper briefly reviews the types of AI and 'Intelligent Agents' along with their associated levels of AI autonomy being considered for future aviation applications. It then reviews the evolution of Human Factors to identify the critical areas where Human Factors can aid future human-AI teaming performance and safety, to generate a detailed requirements set organised for Human AI Teaming design. The resultant requirements set comprises eight Human Factors areas, from Human-Centred Design to Organisational Readiness, and 165 detailed requirements, and has been applied to three AI-based Intelligent Agent prototypes (two cockpit, one air traffic control tower). These early applications suggest that the new requirements set is scalable to different design maturity levels and different levels of AI autonomy, and acceptable as an approach to Human-AI Teaming design teams.
... However, overreliance even increased when participants only had short interactions with the AI, which were spread over a longer period. The results indicate that providing end-to-end recommendations makes it difficult for users to remain meaningfully engaged with the decision-making task, similar to the difficulty of supervisory control in automation [2]. ...
Preprint
Full-text available
How can we use generative AI to design tools that augment rather than replace human cognition? In this position paper, we review our own research on AI-assisted decision-making for lessons to learn. We observe that in both AI-assisted decision-making and generative AI, a popular approach is to suggest AI-generated end-to-end solutions to users, which users can then accept, reject, or edit. Alternatively, AI tools could offer more incremental support to help users solve tasks themselves, which we call process-oriented support. We describe findings on the challenges of end-to-end solutions, and how process-oriented support can address them. We also discuss the applicability of these findings to generative AI based on a recent study in which we compared both approaches to assist users in a complex decision-making task with LLMs.
... When an accident that is not directly attributable to the AV itself occurs, a level of blame will be assigned to it, and trust will be diminished [17,30], affecting the potential acceptance, adoption and continued usage of the technology. Such ironies of automation are not new: they were predicted more than 40 years ago [31], with the effects extended by others [32] and recently by some AV sceptics [33]. Trust is a key enabler of the adoption and continued usage of many technologies, a point that has been stressed within Human Factors and related fields for decades, often in response to advances in automation [34], including AVs [35]. ...
Article
Full-text available
Despite the increasing sophistication of autonomous vehicles (AVs) and promises of increased safety, accidents will occur. These will corrode public trust and negatively impact user acceptance, adoption and continued use. It is imperative to explore methods that can potentially reduce this impact. The aim of the current paper is to investigate the efficacy of informational assistants (IAs) varying by anthropomorphism (humanoid robot vs. no robot) and dialogue style (conversational vs. informational) on trust in and blame on a highly autonomous vehicle in the event of an accident. The accident scenario involved a pedestrian violating the Highway Code by stepping out in front of a parked bus and the AV not being able to stop in time during an overtake manoeuvre. The humanoid (Nao) robot IA did not improve trust (across three measures) or reduce blame on the AV in Experiment 1, although communicated intentions and actions were perceived by some as being assertive and risky. Reducing assertiveness in Experiment 2 resulted in higher trust (on one measure) in the robot condition, especially with the conversational dialogue style. However, there were again no effects on blame. In Experiment 3, participants had multiple experiences of the AV negotiating parked buses without negative outcomes. Trust significantly increased across each event, although it plummeted following the accident with no differences due to anthropomorphism or dialogue style. The perceived capabilities of the AV and IA before the critical accident event may have had a counterintuitive effect. Overall, evidence was found for a few benefits and many pitfalls of anthropomorphising an AV with a humanoid robot IA in the event of an accident situation.
... Such technologies can harm train drivers' cognitive performance (Naghiyev et al., 2016; Naweed, 2014): attending to new in-cab interfaces conflicts with monitoring the environment, and technologies like ERTMS/ETCS can reduce anticipation and impair decision-making. This is in line with a long research tradition revealing how automation can negatively affect human performance (Bainbridge, 1983; Parasuraman & Manzey, 2010; Parasuraman & Wickens, 2008). Thus, unless train drivers are fully removed from the cab, it should be investigated how the envisioned AI-based technological support would impact their ability to establish situation awareness. ...
Preprint
Full-text available
When trains collide with obstacles, the consequences are often severe. To assess how artificial intelligence might contribute to avoiding collisions, we need to understand how train drivers do it. What aspects of a situation do they consider when evaluating the risk of collision? In the present study, we assumed that train drivers do not only identify potential obstacles but interpret what they see in order to anticipate how the situation might unfold. However, to date it is unclear how exactly this is accomplished. Therefore, we assessed which cues train drivers use and what inferences they make. To this end, image-based expert interviews were conducted with 33 train drivers. Participants saw images with potential obstacles, rated the risk of collision, and explained their evaluation. Moreover, they were asked how the situation would need to change to decrease or increase collision risk. From their verbal reports, we extracted concepts about the potential obstacles, contexts, or consequences, and assigned these concepts to various categories (e.g., people’s identity, location, movement, action, physical features, and mental states). The results revealed that especially for people, train drivers reason about their actions and mental states, and draw relations between concepts to make further inferences. These inferences systematically differ between situations. Our findings emphasise the need to understand train drivers’ risk evaluation processes when aiming to enhance the safety of both human and automatic train operation.
... The increase in complexity that accompanies higher automation of systems, work profiles that are difficult to harmonise, and frequent system adaptations leave operators with less time to develop and maintain competencies. Instead of simplifying work, automation in cooperative scenarios of humans and highly automated machines can produce the opposite effect (ironies of automation) (Bainbridge 1983). What is needed here is supportive design (Eng. ...
... GenUI systems might necessitate re-skilling users, such as learning prompt engineering, similar to other automated systems [2,14,19,26]. Users must become proficient in crafting precise and effective prompts to interact with AI-driven interfaces. This re-skilling requirement can present a significant learning curve and may involve ongoing training to stay abreast of evolving AI capabilities and functionalities. ...
Conference Paper
Full-text available
This paper addresses the promising concept of Generative UI, which suggests using AI capabilities to create dynamic user interfaces that reflect user needs at the moment. We list critical considerations relating to AI-inherent issues, implementation and use, and the broader context of the use of generative UI. We propose further research directions investigating such a novel concept and argue for a different perspective focused on interaction and its automation.
Article
The rapid development of driving automation systems (DAS) in the automotive industry aims to support drivers by automating longitudinal and lateral vehicle control. As vehicle complexity increases, it is crucial that drivers comprehend their responsibilities and the limitations of these systems. This work investigates the role of the driver's perception in the understanding of DAS by cross-analysing four empirical studies. Study I investigated DAS usage across different driving contexts via an online survey conducted in Germany, Spain, China, and the United States. Study II explored contextual DAS usage and the factors influencing drivers' understanding through a Naturalistic Driving Study (NDS), followed by in-depth interviews. Study III employed a Wizard-of-Oz on-road driving study to simulate a vehicle offering Level 2 and Level 4 DAS, paired with pre- and post-driving interviews. Study IV followed up with a Wizard-of-Oz on-road driving study simulating Level 2 and Level 3 DAS, with subsequent in-depth interviews. The findings from these studies allowed the identification of aspects constituting a driver's understanding and factors influencing their perception of DAS. The identified aspects and factors were consolidated into a unified conceptual model, describing the process of how perception shapes the driver's mental model of a driving automation system.
Article
Level of autonomy (LOA) has become one of the determining factors in human-swarm control. Because of the autonomy restrictions of current unmanned systems, human-swarm cooperation has gradually become a common and necessary task mode. To enable adaptive LOA in a heterogeneous team, we present a human-swarm authority game model for making decisions on the LOA. The goal of this approach is to obtain a suitable LOA in a dynamic task environment on the basis of swarm performance and operator trust. Because human trust is uncertain, a trust model toward the swarm is built via an imitation learning (IL) method. The optimal strategy for the LOA can then be calculated with the human-swarm evolutionary game model, including cases with incomplete information. Simulation experiments were conducted on a three-dimensional virtual platform, and the results show the correctness and accuracy of the prediction model.
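A loose illustrative sketch of the core idea of adapting the LOA to current swarm performance and operator trust. This is entirely our assumption, not the paper's game-theoretic or imitation-learning model; the function name, equal weighting, and five discrete levels are all hypothetical:

    # Hypothetical sketch: pick the level of autonomy (LOA) that current
    # swarm performance and operator trust support. Not the paper's model.
    def choose_loa(performance: float, trust: float, levels=(1, 2, 3, 4, 5)):
        """performance and trust are scores in [0, 1]; higher LOA = more autonomy."""
        score = 0.5 * performance + 0.5 * trust   # equal weighting is illustrative
        index = min(int(score * len(levels)), len(levels) - 1)
        return levels[index]

    print(choose_loa(performance=0.8, trust=0.6))  # -> 4 with these toy inputs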
Article
Self-driving cars have the potential to drastically reduce accidents caused by human errors, saving significant amounts of money in damages as well as human lives. However, public acceptance of the technology operating on public roads still needs to improve, as most Americans are uncomfortable sharing the road with a self-driving car. The challenge for policymakers is to craft regulations that not only enhance the safety of self-driving technology but also foster public trust and acceptance. This study examines how specific policies—requiring visual cues to indicate when a vehicle is operating in self-driving mode and certification requirements for users—impact public acceptance of self-driving cars. To evaluate the impact of the policies, we theorize how policies may influence people’s trust and how trust, in turn, may affect acceptance of technology. Furthermore, we examine how these effects vary across political affiliations, as prior research suggests that Republicans and Democrats differ in their trust in government oversight and technological innovation. Our findings confirm that Republicans are generally less willing to share the road with self-driving vehicles than Democrats, largely due to lower trust in the government to regulate the technology effectively. We find that a visual cue policy increases trust in government but decreases trust in the technology, leading to increased acceptance among Republicans but a neutral or negative effect for Democrats. Conversely, a certification requirement increases trust in government and in other drivers, positively impacting acceptance for both Republicans and Democrats. Finally, additional analysis revealed that a combined policy implementing both measures proves to be the most effective at increasing overall public acceptance by strengthening trust across multiple dimensions. These insights provide valuable guidance for policymakers seeking to improve the integration of self-driving cars into public roadways.
Article
Huettig and Christiansen in an earlier issue argue that large language models (LLMs) are beneficial to address declining cognitive skills, such as literacy, through combating imbalances in educational equity. However, we warn that this technosolutionism may be the wrong frame. LLMs are labor intensive, are economically infeasible, and pollute the environment, and these properties may outweigh any proposed benefits. For example, poor quality air directly harms human cognition, and thus has compounding effects on educators' and pupils' ability to teach and learn. We urge extreme caution in facilitating the use of LLMs, which like much of modern academia run on private technology sector infrastructure, in classrooms lest we further normalize: pupils losing their right to privacy and security, reducing human contact between learner and educator, deskilling teachers, and polluting the environment. Cognitive scientists instead can learn from past mistakes with the petrochemical and tobacco industries and consider the harms to cognition from LLMs.
Article
Full-text available
Control rooms play a crucial role in monitoring and managing safety-critical systems, such as power grids, emergency response, and transportation networks. As these systems become increasingly complex and generate more data, the role of human operators is evolving amid growing reliance on automation and autonomous decision-making. This paper explores the balance between leveraging automation for efficiency and preserving human intuition and ethical judgment, particularly in high-stakes scenarios. Through an analysis of control room trends, operator attitudes, and models of human-computer collaboration, this paper highlights the benefits and challenges of automation, including risks of deskilling, automation bias, and accountability. The paper advocates for a hybrid approach of collaborative autonomy, where humans and systems work in partnership to ensure transparency, trust, and adaptability.
Technical Report
Full-text available
Recent accidents such as the B737-MAX8 crashes highlight the need to address and improve the current aircraft certification process. This includes understanding how design characteristics of automated systems influence flightcrew behavior and how to evaluate the design of these systems to support robust and resilient outcomes. We propose a process which could be used to evaluate the compliance of automated systems looking through the lens of the 3Rs: Reliability, Robustness, and Resilience. This process helps determine where additional evidence is needed in a certification package to support the flightcrew in interacting with automated systems. Two diagrams, the Design Characteristic Diagram (DCD) and the What’s Next diagram, are used to uncover scenarios which complicate flightcrew response. The DCD is used to look at the relationship between characteristics in design and potential vulnerabilities which commonly occur when design does not support the flightcrew. The What’s Next diagram looks at the ability of the design to support the flightcrew in anticipating what will happen next. In our process, claims surrounding the 3Rs that are present in a certification package are systematically evaluated using these two diagrams to uncover additional areas of support for the flightcrew. Questions about when these claims may break down which are identified using the DCD can be tested using scenarios developed on the What’s Next diagram. Further vignettes looking at different versions of a scenario can be assessed to increase the robustness in the design. The FAA has sponsored this research through the Center of Excellence for Technical Training and Human Performance. However, the FAA neither endorses nor rejects the findings of this research. The dissemination of this research is in the interest of invoking academic or technical community comments on the results and conclusions of the research.
Conference Paper
Full-text available
In organizations, the interest in automation is long-standing. However, adopting automated processes remains challenging, even in environments that appear highly standardized and technically suitable for it. Through a case study in Amsterdam Airport Schiphol, this paper investigates automation as a broader sociotechnical system influenced by a complex network of actors and contextual factors. We study practitioners' collective understandings of automation and subsequent efforts taken to implement it. Using imaginaries as a lens, we report findings from a qualitative interview study with 16 practitioners involved in airside automation projects. Our findings illustrate the organizational dynamics and complexities surrounding automation adoption, as reflected in the captured problem formulations, conceptions of the technology, envisioned human roles in autonomous operations, and perspectives on automation fit in the airside ecosystem. Ultimately, we advocate for contextual automation design, which carefully considers human roles, accounts for existing organizational politics, and avoids techno-solutionist approaches.
Conference Paper
Astronauts on future long-duration human spaceflight (LDHSF) missions will collaborate with artificial agents to enable crew medical autonomy and support Earth-independent clinical decision-making, working together as Cyber-Physical Human (CPH) teams. Although trust is a well-understood pillar of successful team collaboration, its incorporation into the design of onboard medical systems and clinical decision support (CDS) interfaces has not been systematically addressed. The work presented in this paper advances the development of onboard medical systems and CDS interfaces by integrating CPH team trust considerations from the early design stages. First, we present a framework to facilitate transdisciplinary stakeholder collaboration to envision solutions in the LDHSF future(s) context. Next, we describe the developed design research tools that allow stakeholders to consider CPH trust in the context of future LDHSF missions. Lastly, we illustrate a case-study application of the tools to derive trust-driven future CPH interface requirements and demonstrate how they are reflected within the conceptual development of the Exploration Medical Ecosystem Design Interface (ExMEDI).
Chapter
This paper explores, under the key concept of selectivity, various structuring effects of subsymbolic artificial intelligence (AI) as a social phenomenon, from targeted development to specific technical functionality, to its embedding in usage contexts, associated with latent societal adaptation processes. In doing so, the paper extends discussions about discrimination and data bias to include further aspects of latent social design and technology-immanent structuring. Based on the systematization of eleven AI selectivities, central questions of a changing human-AI (or, more broadly, human-technology) relationship are discussed, and a guiding principle for a possible future relationship beyond competition or linear substitution is outlined.
Article
Full-text available
This article examines the transformation of design work under the influence of managerialism and the rise of Generative Artificial Intelligence (GenAI). Drawing on John Maynard Keynes's projections of technological unemployment and the evolving nature of work, it argues that despite advancements in automation, work has not diminished but rather devalued. Design, understood as a type of knowledge work, faces an apparent existential crisis. GenAI grows adept at mimicking the output of creative processes. The article explores how the fear of the end of design work fueled by the rise of GenAI is rooted in a misunderstanding of design work. This misunderstanding is driven by managerialism, an ideology that prioritizes efficiency and quantifiable outcomes over the intrinsic value of work. Managerialism seeks to instrumentalize and automate design, turning it into a controllable procedure to generate quantifiable creative outputs. The article argues why design work cannot be turned into a procedure and automated using GenAI. Advocates of these systems claim they enhance productivity and open new opportunities. However, evidence so far shows that flawed GenAI models produce disappointing outcomes while operating at a significant environmental cost. The article concludes by arguing for a robust theory of design, one that acknowledges the unique ontological and epistemic boundaries of design work and underscores why design cannot be reduced to a procedural output.
Chapter
Full-text available
Far-reaching transformations in the world of work are being discussed intensively under the term 'artificial intelligence'. Questions are arising about the changing role of humans in work processes and their freedom to determine work content and conditions. This book explores these potential areas of conflict and examines how algorithmic decision-making systems influence the job autonomy of service workers. Using case studies from outpatient care and banking services, this book shows under which organisational conditions positive experiences of autonomy are enabled. Dr Gina Glock is a work sociologist who researches the interplay of work and digitalisation.
Article
Full-text available
Many have recently argued that there are weighty reasons against making high-stakes decisions solely on the basis of recommendations from artificially intelligent (AI) systems. Even if deference to a given AI system were known to reliably result in the right action being taken, the argument goes, that deference would lack morally important characteristics: the resulting decisions would not, for instance, be based on an appreciation of right-making reasons. Nor would they be performed from moral virtue; nor would they have moral worth. I argue that, even if these characteristics all have intrinsic value, that intrinsic value has no practical relevance to decisions about whether to defer to AI. I make that point by drawing on a lesson from the literature on moral testimony. Once it is granted that deference to a reliable source is the policy most likely to bring about right action, a refusal to defer carries with it a heightened risk of wronging and mistreating people. And that heightened risk of wrongdoing, I argue, cannot be justified by appeal to the intrinsic value of striving for a morally exemplary decision-making process.
Chapter
Contrary to early beliefs, maritime autonomous surface ships (MASS) will not displace humans from their multifaceted involvement in maritime transportation. Rather, human roles will change with the progressing implementation of MASS but will remain crucial for ensuring the safety and operability of shipping. These potential changes, along with their impact and far-reaching implications on seafarers’ training and the job market, are herein discussed. With MASS being, to date, far from their wide-scale implementation, there are still more open questions than definite and verifiable answers. The discussion raised in this chapter may be found interesting by all parties engaged in introducing MASS to the marine industry, the Maritime Education and Training (MET) representatives, and, finally, the most interested actors of the maritime market—current seafarers.
Article
This paper presents an inquiry of scholarly literature published in the last decade pertaining to the development of robot grippers for compressed fabric parts, which are both rigid and porous. The study is narrow and targeted. Previous literature reviews investigating technologies suitable for materials with similar properties were analysed, and the need for recent works addressing stiff and simultaneously permeable materials was identified. This work aspires to fill that gap with a systematic approach, for which the PRISMA reporting methodology is adopted. It entails scouting for publications with defined keywords, filtering based on predetermined constraints, and thoroughly examining the articles obtained from scientific databases. The study reveals that vacuum grippers are quite prevalent despite the inherent porosity of fibrous materials. The use of the Bernoulli and Coanda effects, and unconventional technologies like electro-adhesion, are also gaining popularity. Intrusive instruments like needles are utilised regardless of their tendency to do surface damage. Moreover, hybrid grasping contraptions can be devised to overcome the limitations of their individual constituents. The operational efficiency of grippers can be further boosted with predictive modelling and sensors to execute a closed-loop system. Overall, the study conveys the latest advancements in multiple mechanisms available at designers' disposal, which can be implemented and optimised for the specific type of material. Its main technical contribution is to address a specific gap in the literature by focusing on gripper technologies for handling rigid and porous fabric parts, which are commonly used in the automotive industry for acoustic applications.
Article
Full-text available
It is widely recognized that airspace capacity must increase over the coming years. It is also commonly accepted that meeting this challenge while balancing concerns around safety, efficiency, and workforce issues will drive greater reliance on automation. However, if automation is not properly developed and deployed, it represents something of a double-edged sword, and has been linked to several human–machine system performance issues. In this article, we argue that human–automation function and task allocation may not be the way forward, as it invokes serialized interactions that ultimately push the human into a problematic supervisory role. In contrast, we propose a flight-based allocation strategy in which a human controller and digital colleague each have full control authority over different flights in the airspace, thereby creating a parallel system. In an exploratory human-in-the-loop simulation exercise involving six operational en route controllers, it was found that the proposed system was considered acceptable after the users gained experience with it during simulation trials. However, almost all controllers did not follow the initial flight allocations, suggesting that allocation schemes need to remain flexible and/or be based on criteria capturing interactions between flights. In addition, the limited capability of and feedback from the automation contributed to this result. To advance this concept, future work should focus on substantiating flight-centric complexity in driving flight allocation schemes, increasing automation capabilities, and facilitating common ground between humans and automation.
Article
Full-text available
Legislation and ethical guidelines around the globe call for effective human oversight of AI-based systems in high-risk contexts – that is, oversight that reliably reduces the risks otherwise associated with the use of AI-based systems. Such risks may relate to the imperfect accuracy of systems (e.g., inaccurate classifications) or to ethical concerns (e.g., unfairness of outputs). Given the significant role that human oversight is expected to play in the operation of AI-based systems, it is crucial to better understand the conditions for effective human oversight. We argue that the reliable detection of errors (as an umbrella term for inaccuracies and unfairness) is crucial for effective human oversight. We then propose that Signal Detection Theory (SDT) offers a promising framework for better understanding what affects people's sensitivity (i.e., how well they are able to detect errors) and response bias (i.e., the tendency to report errors given a perceived evidence of an error) in detecting errors. Whereas an SDT perspective on the detection of inaccuracies is straightforward, we demonstrate its broader applicability by detailing the specifics for an SDT perspective on unfairness detection, including the need to choose a standard for (un)fairness. Additionally, we illustrate that an SDT perspective helps to better understand the conditions for effective error detection by showing examples of task-, system-, and person-related factors that may affect the sensitivity and response bias of humans tasked with detecting unfairness associated with the use of AI-based systems. Finally, we discuss future research directions for an SDT perspective on error detection.
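The SDT quantities the authors build on can be made concrete with a short calculation. The following minimal sketch is our illustration, not code from the paper; the function name and the log-linear correction are our choices, but the d' and criterion formulas are the standard SDT definitions:

    # Sensitivity (d') and response bias (criterion c) for a human overseer
    # deciding whether AI outputs contain errors (inaccuracies or unfairness).
    from statistics import NormalDist

    def dprime_and_bias(hits, misses, false_alarms, correct_rejections):
        """A 'hit' is a flagged output that really was erroneous; a 'false
        alarm' is a correct output flagged as erroneous."""
        # Log-linear correction keeps rates of 0 or 1 finite.
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        z = NormalDist().inv_cdf
        d_prime = z(hit_rate) - z(fa_rate)             # how detectable errors are
        criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # reluctance to report errors
        return d_prime, criterion

    # Example: an overseer reviews 100 AI outputs, 20 of which are erroneous.
    print(dprime_and_bias(hits=14, misses=6, false_alarms=8, correct_rejections=72))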
Article
Purpose To evaluate the role of robots, artificial intelligence and service automation in mitigating the labour shortages in tourism and hospitality. Design/methodology/approach This is a conceptual paper. Findings Robots, artificial intelligence and service automation have substitution, enhancement and transformational effects on tasks and jobs. The automatability of jobs depends on the automatability of the tasks they include. Cognitive, repetitive, standardised tasks are easier to automate. Tourism jobs with more physical tasks are more difficult to automate. Originality/value The paper sheds light on the mechanisms through which tourism and hospitality jobs can be automated to mitigate labour shortages.
Article
In this commentary, we argue that the field of Ergonomics and Human Factors (EHF) has the tendency to present itself as a thriving and impactful science, while in reality, it is losing credibility. We assert that EHF science (1) has introduced terminology that is internally inconsistent and hardly predictive-valid, (2) has virtually no impact on industrial practice, which operates within frameworks of regulatory compliance and profit generation, (3) repeatedly employs the same approach of conducting lab experiments within unrealistic paradigms in order to complete deliverables, (4) suggests it is a cumulative science, but is neither a leader nor even an adopter of open-science initiatives that are characteristic of scientific progress and (5) is being assimilated by other disciplines as well as Big Tech. Recommendations are provided to reverse this trend, although we also express a certain resignation as our scientific discipline loses significance.
Article
Full-text available
Autonomy in weapon systems is already a genuine concern. States try to come up with their own definitions of these systems and strive to impose their own understanding of them upon other states. For a fairly high number of states, a total ban on such weapons would be the ideal solution; states that are anxious about the increase in autonomy in war-making capabilities, however, adopt a second-best scenario to contain the risks created by the deployment of such systems. To this end, placing them under meaningful human control emerges as an important political and legal objective. The author believes that placing autonomous weapons under human supervision, despite its initial promise, will yield negative results, because humans tend to be too willing to follow the solutions generated by autonomous systems. First observed in civilian industries like aviation or health, automation bias has the potential to negate most if not all of the supervision measures expected to ensure proper implementation of international humanitarian law.
Article
Full-text available
Research on employee turnover since L. W. Porter and R. M. Steers's analysis of the literature reveals that age, tenure, overall satisfaction, job content, intentions to remain on the job, and commitment are consistently and negatively related to turnover. Generally, however, less than 20% of the variance in turnover is explained. Lack of a clear conceptual model, failure to consider available job alternatives, insufficient multivariate research, and infrequent longitudinal studies are identified as factors precluding a better understanding of the psychology of the employee turnover process. A conceptual model is presented that suggests a need to distinguish between satisfaction (present oriented) and attraction/expected utility (future oriented) for both the present role and alternative roles, a need to consider nonwork values and nonwork consequences of turnover behavior as well as contractual constraints, and a potential mechanism for integrating aggregate-level research findings into an individual-level model of the turnover process.
Article
Full-text available
As human and computer come to have overlapping decisionmaking abilities, a dynamic or adaptive allocation of responsibilities may be the best mode of human-computer interaction. It is suggested that the computer serve as a backup decisionmaker, accepting responsibility when human workload becomes excessive and relinquishing responsibility when workload becomes acceptable. A queueing theory formulation of multitask decisionmaking is used and a threshold policy for turning the computer on/off is proposed. This policy minimizes event-waiting cost subject to human workload constraints. An experiment was conducted with a balanced design of several subject runs within a computer-aided multitask flight management situation with different task demand levels. It was found that computer aiding enhanced subsystem performance as well as subjective ratings. The queueing model appears to be an adequate representation of the multitask decisionmaking situation, and to be capable of predicting system performance in terms of average waiting time and server occupancy. Server occupancy was further found to correlate highly with the subjective effort ratings.
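The threshold policy can be illustrated with a toy discrete-time simulation. This is a minimal sketch under our own assumptions (Bernoulli arrivals and services as a crude stand-in for the paper's queueing formulation; all probabilities and thresholds are hypothetical), not the authors' model:

    import random

    def simulate(arrival_p=0.6, human_p=0.5, computer_p=0.5,
                 on_threshold=4, off_threshold=1, steps=100_000):
        """Computer aid switches on when the task backlog exceeds an upper
        threshold and off again below a lower one, bounding human workload."""
        random.seed(0)
        queue, computer_on, total_queue, on_time = 0, False, 0, 0
        for _ in range(steps):
            if random.random() < arrival_p:             # a task arrives
                queue += 1
            if queue and random.random() < human_p:     # human completes a task
                queue -= 1
            if computer_on and queue and random.random() < computer_p:
                queue -= 1                              # aid completes a task
            # Hysteresis: turn the computer on above the upper threshold,
            # off again once the backlog (human workload) is acceptable.
            if queue >= on_threshold:
                computer_on = True
            elif queue <= off_threshold:
                computer_on = False
            total_queue += queue
            on_time += computer_on
        return total_queue / steps, on_time / steps

    print(simulate())  # (mean tasks waiting, fraction of time the aid is on)

Lowering on_threshold trades more computer takeover for less event waiting, which is the cost/workload trade-off the threshold policy is meant to manage.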
Article
Since complete automation may be a Utopian idea, the control engineer has to cope with man/machine systems. Examples are given of cases where human factors influence technical design. The interaction with social and political changes is also indicated. Social scientists have much to offer to control engineers. A brief survey will be given of progress in the scientific analyses of human capabilities, limitations, needs and motivations. Also experimental techniques specific to the social sciences will be touched upon. The design of man/machine systems is discussed, taking human factors into account right from the start. Modern technology can catalyse changes resulting from the application of job enrichment, group technology and worker participation to eliminate some human problems at work. Finally, reference is made to the recommendations by the IFAC Workshop on Productivity and Man.
Article
Rational design of a process control system using an on-line computer requires a definition of the total control task and an allocation of function between the human operator and the machine. A knowledge of the historical development of the role assigned to the human operator provides useful guidance in making the allocation decision. This development is described, with emphasis on the function performed by the operator in modern computer control systems, on the importance of different process characteristics, on the increased understanding of the operator's role obtained from attempts to automate it completely and on the need to choose appropriate systems when carrying out experimental studies of the operator.
Chapter
The generally safe and dependable commercial aviation industry has never had properly designed Caution and Warning Systems (CAWS) to alert the aircrew to operational or system malfunctions or emergency situations. When flight systems were simpler, relatively crude CAWS were manageable. Today, however, the complexity and size of modern avionics systems makes it crucial to have optimal systems to alert the crew to problems, and to assist them in handling them.
Chapter
The classical formula for training is simple enough. To train someone to do anything requires only: (1) opportunities to practise; (2) tests to check performance after practice; and, if practice and testing do not of themselves suffice, (3) hints, explanations or other information not intrinsic to performing the task. Industrial fault diagnosis training can present serious difficulties on all three counts.
Chapter
Within the context of this conference, we want to know the factors which affect human ability to detect and diagnose process failures, as a basis for console and job design. Understandably, human factors engineers want fully specified models for the human operator’s behaviour. It is also understandable that these engineers should make use of modelling procedures which are available from control engineering. These techniques are attractive for two reasons. They are sophisticated and well understood. They have also been very successful at giving first-order descriptions of human compensatory tracking performance in fast control tasks such as flying. In this context they are sufficiently useful for the criticism, that they are inadequate as a psychological theory of this behaviour, to be irrelevant for many purposes. Engineers have therefore been encouraged to extend the same concepts to other types of control task. In this paper we will be considering particularly the control of complex slowly changing industrial processes, such as steel making, petrochemicals and power generation.
Article
Systems whose failure can cause loss of life or large economic loss need to be tolerant to faults (i.e. faults in system hardware, software, and procedures). Examples of such systems include airplane autopilots in the automatic landing mode, electricity utility power generation plants, and telephone electronic switching systems (ESS). Such systems are characterized by high reliability; they fail infrequently and recover quickly when a fault does occur. The user usually cannot respond fast enough if and when a fault is detected. Even if he could respond, his proficiency would not be high because the fault occurs infrequently.
Article
This chapter discusses a comparative study of different man–machine systems for human control tasks. Potential man–machine problems are born in the design phase of the construction process. With the help of data obtained in an interview with a member of the technical management, a number of characteristics of the plant hardware, the control system, and the man–machine interface are formulated. Some formal characteristics of the organizational system are obtained by means of an interview with a member of the management. The factor achievement in the job satisfaction questionnaire is positively related to the dimensions activities (ACT), controllability of the process (CONT), and system ergonomics (ERG). The present analysis may lead to the conclusion that a comparative study of quite different man–machine systems, which implies an analysis on the level of the system and not on that of the individual operator, can provide meaningful results in regard to the human aspects of man–machine systems.
Article
The rapid technological advancements of the past decade, and the availability of higher levels of automation which resulted, have aroused interest in the role of man in complex systems. Should the human be an active element in the control loop, operating manual manipulators in response to signals presented by various instruments and displays? Or should the signals be coupled directly into an automatic controller, relegating the human to the monitoring role of a supervisor of the system's operation?
Article
This symposium with its title, Human Detection and Diagnosis of System Failures, clearly implies that, at least in the immediate future, complex systems may have to resort to the skills of the human operator when problems arise during operation. However, the human attributes particularly appropriate to faultfinding are not inherent in the organism; operators of complex systems must be trained if they are to be efficient diagnosticians. This paper describes the development of a training programme specifically designed to help trainee process operators learn to recognise process plant breakdowns from an array of control room instruments. Although developed originally to train fault-finding in the context of continuous process chemical plant, it is probable that the techniques we are going to describe may prove to be equally effective in other industries. For example, power plants, crude oil refineries and oil production platforms all involve continuous processes which are operated from a central control room.
Article
The paper describes (1) the development of a simulator and (2) the first results of a training technique for the identification of plant failures from control panel indications. Input or signal features of the task present more simulation fidelity problems than its response or output features. Current techniques for identifying effective signals, e.g. 'blanking-off' information, or protocol analysis, bias any description of problem solving since they require serial reporting, if not serial collection, of information by the operator. They also require inferences as to what is an effective item of information. It is therefore argued that simulation should preserve all those features which may in principle provide, or influence acquisition of, diagnostic information, specifically panel layout, instrument design and display size. Further fidelity problems are the stress from operating in a dangerous environment; stress from hazards or sanctions following mistaken diagnosis; and the stress of diagnosing in a short time interval. The simulator uses back-projection to life size of slides of control panel mock-ups by a random access projector. Under an adaptive cumulative part regime, trainees saw on average 89 failure arrays in 30 min, an obvious advantage over the operational situation. In a test 24 hr after training, consisting of the eight faults each presented four times in random order, 4 out of 17 trainees made only one error in 32 diagnoses; the other trainees performed perfectly. Subjects' reports indicate very different solution strategies, e.g., recognition of alarm patterns, or serial instrument checking determined by heuristics of plant functioning. Several features of performance are consistent with the view that trainees use a minimal number of dimensions for correct discrimination and that these change as the number of different fault arrays increases. It is argued that this training regime should reduce stress. In particular it is argued that, according to current theories of stress, the fewer dimensions needed for diagnosis, the more robust will be diagnostic performance in dangerous environments.
Article
Reviews research on adult age differences in human memory, conducted very largely within the framework of current theoretical views of memory and organized in terms of the topics and concepts suggested by these approaches. The literature on memory and aging is now so extensive that the review must be selective; the authors focus on topics of current debate and largely on research reported in the last 10 years. Topics covered include approaches to the study of memory (memory stores, processing models, memory systems); empirical evidence (sensory and perceptual memory, short-term and working memory, age differences in working memory); age differences in encoding (qualitative differences in encoding); age differences in retrieval; age differences in nonverbal memory; age differences in memory of the past and for the future; and aging and memory systems.
Article
Modes of human-computer interaction in the control of dynamic systems are discussed, and the problem of allocating tasks between human and computer considered. Models of human performance in a variety of tasks associated with the control of dynamic systems are reviewed. These models are evaluated in the context of a design example involving human-computer interaction in aircraft operations. Other examples include power plants, chemical plants, and ships.
Article
Doctoral thesis, University of Illinois at Urbana-Champaign. Includes bibliographical references (leaves 102-108).
Article
A full mission simulation of a civil air transport scenario that had two levels of workload was used to observe the actions of the crews and the basic aircraft parameters and to record heart rates. The results showed that the number of errors was very variable among crews but the mean increased in the higher workload case. The increase in errors was not related to rise in heart rate but was associated with vigilance times as well as the days since the last flight. The recorded data also made it possible to investigate decision time and decision order. These also varied among crews and seemed related to the ability of captains to manage the resources available to them on the flight deck.
Article
The paper analyzes the role of human factors in flight-deck automation, identifies problem areas, and suggests design guidelines. Flight-deck automation using microprocessor technology and display systems improves performance and safety while leading to a decrease in size, cost, and power consumption. On the other hand negative factors such as failure of automatic equipment, automation-induced error compounded by crew error, crew error in equipment set-up, failure to heed automatic alarms, and loss of proficiency must also be taken into account. Among the problem areas discussed are automation of control tasks, monitoring of complex systems, psychosocial aspects of automation, and alerting and warning systems. Guidelines are suggested for designing, utilising, and improving control and monitoring systems. Investigation into flight-deck automation systems is important as the knowledge gained can be applied to other systems such as air traffic control and nuclear power generation, but the many problems encountered with automated systems need to be analyzed and overcome in future research.
Article
In order to study the effects different logic systems might have on interrupted operation, an algebraic calculator and a reverse Polish notation calculator were compared when trained users were interrupted during problem entry. The RPN calculator showed markedly superior resistance to interruption effects compared to the AN calculator, although no significant differences were found when the users were not interrupted. Causes and possible remedies for interruption effects are speculated upon. It is proposed that, because interruption is such a common occurrence, it be incorporated into comparative evaluation tests of different logic systems and control/display systems, and that interruption resistance be adopted as a specific design criterion.
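One plausible reading of why RPN entry resists interruption can be seen from a small evaluator: all pending state is an explicit operand stack that a display can show in full, whereas algebraic entry keeps the pending operator and precedence context implicit. A minimal sketch (our illustration; the study itself involved physical calculators, not this code):

    def rpn_eval(tokens):
        """Evaluate a space-separated RPN expression, e.g. '3 4 + 2 *'."""
        stack = []
        ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
               '*': lambda a, b: a * b, '/': lambda a, b: a / b}
        for tok in tokens:
            if tok in ops:
                b, a = stack.pop(), stack.pop()
                stack.append(ops[tok](a, b))
            else:
                stack.append(float(tok))
            # After every keystroke the whole machine state is just `stack`,
            # which can be displayed in full -- resumable after an interruption.
        return stack[-1]

    print(rpn_eval("3 4 + 2 *".split()))  # (3 + 4) * 2 = 14.0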
Article
A four stage model is presented for the control mode man-computer interface dialogue. It consists of context development, semantic development, syntactic development, and command execution. Each stage is discussed in terms of the operator skill levels (naive, novice, competent, and expert) and pertinent human factors issues. These issues are human problem solving, human memory, and schemata. The execution stage is discussed in terms of the operator's typing skills. This model provides an understanding of the human process in command mode activity for computer systems and a foundation for relating system characteristics to operator characteristics.
Article
Much ergonomics research is published in non-archival form, e.g. government reports. Sometimes such reports are withheld from general circulation because they are judged to be militarily sensitive. Thus, potentially useful information becomes restricted to a limited number of scientists who are on an initial distribution list. Worse, since the work reported in such papers is not referenced, it goes unknown among a large population of workers who have entered the field since the first, limited publication and who have no way of knowing of its existence. Results of experiments carried out some years ago have been rewritten for publication in Applied Ergonomics. The reasons for this are that: (a) the original reports have been regarded as "unclassified" and (b) the substantive problem, the effects of dividing tasks between men and computers in an on-line information system, continues to be of interest to ergonomists and others.
Article
A computer algorithm employing fading-memory system identification and linear discriminant analysis is proposed for real-time detection of human shifts of attention in a control and monitoring situation. Experimental results are presented that validate the usefulness of the method. Application of the method to computer-aided decisionmaking in multitask situations is discussed.
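As we read it, this algorithm combines two standard ingredients, and the sketch below (our reconstruction, with hypothetical parameter names and a two-parameter operator model) shows how they could fit together: exponentially weighted ("fading-memory") recursive least squares tracks the operator's current input-output dynamics, and a linear discriminant on the drifting parameter estimates flags a shift of attention:

    import numpy as np

    class FadingMemoryRLS:
        """Exponentially weighted recursive least squares: recent samples
        dominate the fit, so the estimates track time-varying dynamics."""
        def __init__(self, n_params, lam=0.98):
            self.lam = lam                       # forgetting factor < 1
            self.theta = np.zeros(n_params)      # parameter estimates
            self.P = np.eye(n_params) * 1e3      # estimate covariance

        def update(self, x, y):
            """One step: x = regressor vector, y = observed operator output."""
            Px = self.P @ x
            k = Px / (self.lam + x @ Px)         # gain vector
            self.theta += k * (y - x @ self.theta)
            self.P = (self.P - np.outer(k, Px)) / self.lam
            return self.theta

    def lda_score(theta, w, b):
        """Linear discriminant on the estimates: positive -> attention shifted."""
        return float(w @ theta + b)

    # Usage: w and b would be fit offline on labeled attended/unattended data.
    rls = FadingMemoryRLS(n_params=2)
    w, b = np.array([1.0, -1.0]), 0.0            # hypothetical discriminant
    for x, y in [(np.array([0.5, 0.1]), 0.4), (np.array([0.2, 0.3]), 0.1)]:
        theta = rls.update(x, y)
        print(lda_score(theta, w, b))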
Mathematical equations or processing routines?
  • L. Bainbridge
Training for fault diagnosis in industrial process plant
  • K.D. Duncan
Trends in operator-process communication development
  • Jervis
Jervis, M. W. and R. H. Pope (1977). Trends in operator-process communication development. Central Electricity Generating Board, E/REP/054/77.
Commercial air crew detection of system failures: state of the art and future trends
  • D.A. Thompson
Thompson, D. A. (1981). Commercial air crew detection of system failures: state of the art and future trends. In J. Rasmussen and W. B. Rouse (Eds.), op. cit., pp. 37-48.
Flight-deck automation: promises and problems
  • E.L. Wiener
  • R.E. Curry
Wiener, E. L. and R. E. Curry (1980). Flight-deck automation: promises and problems. Ergonomics, 23, 995.
Verbal presentation. NATO Symposium on Human Detection and Diagnosis of System Failures
  • A R Ephrath
Ephrath, A. R. (1980). Verbal presentation. NATO Symposium on Human Detection and Diagnosis of System Failures, Roskilde, Denmark.
Problem solving behaviour of pilots in abnormal and emergency situations
  • Johannsen
Johannsen, G. and W. B. Rouse (1981). Problem solving behaviour of pilots in abnormal and emergency situations. Proc. 1st European Ann. Conf. on Human Decision Making and Manual Control, Delft University, pp. 142-150.
Researches on the measurement of human performance. In Selected Papers on Human Factors in the Design and Use of Control Systems
  • Mackworth