Article

Ironies of Automation

Authors:
Lisanne Bainbridge

Abstract

This paper discusses the ways in which automation of industrial processes may expand rather than eliminate problems with the human operator. Some comments will be made on methods of alleviating these problems within the 'classic' approach of leaving the operator with responsibility for abnormal conditions, and on the potential for continued use of the human operator for on-line decision-making within human-computer collaboration.

... Activities related to I4.0 and OAS in CPPS typically imply a significant transformation of the roles of people and technology, as well as the nature of work on shop floors (Moencks et al., 2022c). Additionally, with an increased degree of automation technology in industrial environments, it appears to be more challenging to operate and maintain production systems (Bainbridge, 1983; Kremer, 1993). Further, many manufacturing organizations' capability to remain competitive depends on their ability to increase workforce productivity and effectively train human operators (Penesis et al., 2017; Chryssolouris et al., 2013). ...
... [Tables listing the remaining publications after the abstract scan and after the quality and relevance appraisal; both lists include Bainbridge (1983) among dozens of other sources.] ...
Preprint
In industry, the Fourth Industrial Revolution is transforming the roles of people, technology and work on the shop floor. Despite ongoing strides towards automation, people are anticipated to remain integral contributors in future manufacturing. Where full automation is ineffective or infeasible, Operator Assistance Systems (OAS) can augment workers' cognitive or physical capabilities. We frame OAS as a subset of Human-Computer Interaction (HCI) systems designed for the purpose of workforce augmentation in production systems. However, while OAS are anticipated to address key needs in industry, a challenge for both OAS researchers and industrial practitioners is to identify the most promising applications of OAS and justify them from a value-added perspective. This paper addresses this challenge by presenting a systematic literature review of 2,928 papers, revealing (a) 11 application areas for OAS; and (b) 12 approaches for assessing the value-added of OAS. Moreover, we discuss implications for OAS, with a particular focus on integrating OAS in industry.
... Automation often works best in clearly defined tasks. However, during unanticipated or emergency situations [1], or when many automation systems are available to the pilots in parallel [2], automation can actually increase the requirements on the system operator or pilot. In unfortunate circumstances, automation can even contribute to accidents, as has happened in the fixed-wing domain [3]. ...
... Fig. 3 shows the rendering of the outside world as it is shown to the pilots in the simulator, depicting target waypoints one (red ground markings at the bottom of the figure) and two (red ground markings in the distance, only visible on the left-hand side of the figure) of the utilised example experiment course. The current leg, which comprises reaching the next target from the ownship position, is called "active leg". ...
... This is not a new result. The possible downsides of automation have been discussed extensively [1,26]. However, it is important to analyse and discuss these general findings while taking into consideration the actual system design and characteristics. ...
Article
This paper investigates the effects of different automation design philosophies for a helicopter navigation task. A baseline navigation display is compared with two more advanced systems: an advisory display, which provides a discrete trajectory suggestion; and a constraint-based display, which provides information about the set of possible trajectory solutions. The results of a human-in-the-loop experiment with eight pilot participants show a significant negative impact of the advisory display on pilot trajectory decision-making: of the 16 off-nominal situations encountered across the experiment, only 6 were solved optimally. The baseline and constraint-based displays both led to better decisions, with 14 out of 16 being optimal. However, pilots still preferred the advisory display, particularly in off-nominal situations. These results highlight that even when a support system is preferred by pilots, it can have strong inadvertent negative effects on their decision-making.
... However, given the dependability required of autonomous systems, these emergencies will be rare exceptions. It is thus very likely that the ROC operators responding to them will face many of the same "clumsy automation" problems that transitions between levels of automation have posed in other domains [7,8]. It would thus likely be valuable if human-centred design factors could be identified for maritime autonomy emergency response systems, i.e., aspects of, for instance, organisations or interactions that can be purposely structured to avoid automation-related problems. ...
... Symptomatic of this perception was often to define system failures as "human error" and blame the user for using tools, equipment or systems in the wrong way. Research provided the engineering community with automation-related design concepts and guidelines with warning examples of consequences when human-centred approaches had not been sufficiently applied, such as "ironies of automation" [7], "clumsy automation" [18], "automation induced surprises" [19] and "situation awareness" [20]. Nevertheless, these design concepts and guidelines have not been translated from the human factors community to the actual methods used by the engineering community. ...
Article
Full-text available
Commercial deployment of maritime autonomous surface ships (MASSs) is close to becoming a reality. Although MASSs are fully autonomous, the industry will still allow remote operations centre (ROC) operators to intervene if a MASS faces an emergency it cannot handle by itself. A human-centred design for the associated emergency response systems will require attention to the ROC operator workplace, but also, arguably, to the behaviour-shaping constraints on the engineers building these systems. There is thus a need for an engineer-centred design of engineering organisations, influenced by the current discourse on human factors. To contribute to the discourse, think-aloud protocol interviewing was conducted with well-informed maritime operators to elicit fundamental demands on cognition and collaboration by maritime autonomy emergency response systems. Based on the results, inferences were made regarding both design factors and methodological choices for future, early-phase engineering of emergency response systems. Firstly, engineering firms have to improve their informal gathering and sharing of information through gatekeepers and/or organisational liaisons. To avoid an overly cautious approach to accountability, this will have to include a closer integration of development and operations. Secondly, associated studies taking the typical approach of exposing relevant operators to new design concepts in scripted scenarios should include significant flexibility and less focus on realism.
... This vulnerability of automation has brought challenges to safety; for example, civil flight deck automation has caused deadly flight accidents (Mumaw, Sarter, et al., 2000). Bainbridge (1983) defined the classic phenomenon of "ironies of automation": the higher the degree of automation, the less attention the operator pays to the system, and in an emergency it becomes more difficult for the operator to take over through manual control. Endsley (2017) argues that in an autonomous system, as the "automation" level of individual functions improves and the autonomy of the overall system increases, the operator's attention to these functions and situation awareness will decrease in emergency situations, so the likelihood of the "out-of-the-loop" effect also increases. ...
... At present, we are in the transition from human-centered automation to human-controlled autonomy. The effort to address the classic "ironies of automation" issue has existed for over 30 years, but the issue is still not completely solved (Bainbridge, 1983; Strauch, 2017). Today, we encounter new ironies: autonomous systems exhibit unique characteristics compared with traditional automation. ...
Article
Full-text available
While AI has benefited humans, it may also harm humans if not appropriately developed. The priority of current HCI work should be the transition from conventional human interaction with non-AI computing systems to interaction with AI systems. We conducted a high-level literature review and a holistic analysis of current work in developing AI systems from an HCI perspective. Our review and analysis highlight the new changes introduced by AI technology and the new challenges that HCI professionals face when applying the human-centered AI (HCAI) approach in the development of AI systems. We also identified seven main issues in human interaction with AI systems, which HCI professionals did not encounter when developing non-AI computing systems. To further enable the implementation of the HCAI approach, we identified new HCI opportunities tied to specific HCAI-driven design goals to guide HCI professionals in addressing these new issues. Finally, our assessment of current HCI methods shows the limitations of these methods in supporting the development of HCAI systems. We propose alternative methods that can help overcome these limitations and effectively help HCI professionals apply the HCAI approach to the development of AI systems. We also offer strategic recommendations for HCI professionals to effectively influence the development of AI systems with the HCAI approach, eventually developing HCAI systems.
... Well-known »ironies of automation« [12,13] get even worse. Specifically, sufficient experience cannot be developed under normal conditions, while the knowledge workers' vanishing competence is still needed in case of system failure, which, due to the system's non-transparent behavior, is hard even to detect. ...
Conference Paper
Full-text available
Under labels like »AI« or »Machine Learning«, adaptive systems that use methods of function approximation to adapt their performance to given sets of data from the environment are increasingly being deployed in the domain of knowledge work. Their design and effective use raise new questions with respect to their specific qualities. Summarizing relevant experiences from more than four decades of human-centered design of software artifacts and computer-assisted work processes, this paper reflects on the lessons learned, on the new challenges these systems bring about, and on the new research questions they raise.
... Teleoperated robots can automate tasks at a distance and in environments that would be unsafe for humans. But, as has been demonstrated time and time again, automation does not replace human work; it changes human work [8,75]. And so, telerobotics comes with its own set of problems that shared control is used to overcome. ...
Conference Paper
Full-text available
Shared control is an emerging interaction paradigm in which a human and an AI partner collaboratively control a system. Shared control unifies human and artificial intelligence, making the human’s interactions with computers more accessible, safe, precise, effective, creative, and playful. This form of interaction has independently emerged in contexts as varied as mobility assistance, driving, surgery, and digital games. These domains each have their own problems, terminology, and design philosophies. Without a common language for describing interactions in shared control, it is difficult for designers working in one domain to share their knowledge with designers working in another. To address this problem, we present a dimension space for shared control, based on a survey of 55 shared control systems from six different problem domains. This design space analysis tool enables designers to classify existing systems, make comparisons between them, identify higher-level design patterns, and imagine solutions to novel problems.
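As an illustration of how such a dimension space can be used to classify and compare systems, here is a minimal Python sketch. The dimensions, values, and similarity measure below are invented for illustration; the paper's actual dimension space, derived from its survey of 55 systems, differs in content and detail.

```python
from dataclasses import dataclass

# Hypothetical dimensions for illustration only; the paper derives its
# dimension space from a survey of 55 systems and differs in detail.
@dataclass(frozen=True)
class SharedControlSystem:
    name: str
    domain: str            # e.g. "mobility", "surgery", "digital games"
    autonomy: float        # 0.0 = fully manual .. 1.0 = fully autonomous
    input_blending: str    # "switched", "weighted", or "negotiated"
    feedback: str          # "visual", "haptic", "auditory", ...

def similarity(a: SharedControlSystem, b: SharedControlSystem) -> float:
    """Crude score for comparing two systems within the dimension space."""
    score = 1.0 - abs(a.autonomy - b.autonomy)
    score += 1.0 if a.input_blending == b.input_blending else 0.0
    score += 1.0 if a.feedback == b.feedback else 0.0
    return score / 3.0

wheelchair = SharedControlSystem("smart wheelchair", "mobility", 0.6, "weighted", "haptic")
aim_assist = SharedControlSystem("aim assist", "digital games", 0.3, "weighted", "visual")
# Systems from different problem domains can still be near neighbours:
print(f"{similarity(wheelchair, aim_assist):.2f}")
```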
... However, CAD systems still need human drivers for supervision, adjustment, maintenance, expansion, and improvement in case of automation failures, as the role of the human is not fully eliminated in the CAD mode. In fact, the more advanced a control system is, the more vital may be the role of the human operator (Bainbridge 1983). While drivers in the CAD condition are not expected to be constantly and fully aware of their driving environment, they are required to take over manual control of the vehicle in case of automation failures or if the system reaches the end of its operational design domain. ...
Article
Full-text available
The objective of this study was to assess the effects of unreliable automation, non-driving related tasks (NDRTs), and takeover time budget (TOTB) on drivers' takeover performance and cognitive workload when faced with critical incidents. Automated vehicles are expected to improve traffic safety. However, there are still some concerns about the effects of automation failures on driver performance and workload. Twenty-eight drivers participated in a driving simulation study. The findings suggested that drivers require at least 8 s of TOTB to safely take over control of the vehicle. In addition, drivers exhibited safer takeover performance under the conditionally automated driving situation than when negotiating the critical incident in the manual driving condition. The results for drivers' cognitive workload were inconclusive, which might be due to individual and recall biases in subjective measures that could not capture subtle differences in workload during takeover requests.
... Even in cases where users never override the automation, higher basic trust can be found due to the mere possibility of overriding [18]. Further research has shown that the manual driving skills of users of automated systems are generally affected [1,21]. Because the user, as a controller in the driver-vehicle control loop, lacks situation awareness or calibration, overriding the automation mostly poses a cognitive challenge and thus a potential threat to road safety [5]. ...
Preprint
Full-text available
Highly automated vehicles represent one of the most crucial development efforts in the automotive industry. In addition to the use of research vehicles, production vehicles for the general public are realistic in the near future. However, to fully exploit the benefits of these systems, it is fundamental that users have an appropriate level of trust in automation. Recent studies indicate that more research is needed in this area. Furthermore, beyond the management of user trust, the system should also convey a perceptible added value to achieve not only trust, but also acceptance and thus use. The EMMI project pursues both: managing user trust while conveying added value to the user. To this end, an advanced socio-emotional user model for estimating user trust and various user-centered HMI systems with a unique UX are being developed. Together, these systems are employed to induce changes in users' trust in automated vehicles.
... Although automation technology in high-risk industries is reliable, system failures occur [10,11]. To prevent catastrophic events, operators in high-risk industries should be prepared for and be able to respond to and recover from non-routine situations, including system failures [12]. ...
Article
Full-text available
The infrequent use of skills relevant in non-routine situations in highly automated and high-risk industries is a major safety issue. The infrequent use of skills can lead to skill decay. Research on skill decay has a long history, but not much is known about the relevant factors and refresher interventions to attenuate skill decay in highly automated environments. In the present study, a scoping review was conducted to determine whether the well-known factors in skill decay research are also relevant for complex cognitive skill decay and to identify refresher interventions that are deemed effective for attenuating decay. A scoping review aims at identifying, summarizing, and mapping the body of literature on a given topic. Searches in electronic databases, including PsycArticles, PsycINFO, and Psyndex, via EBSCOhost and Web of Science and Google Scholar were conducted, and documents were analyzed regarding the research question, which resulted in n = 58 studies. The findings demonstrate the relevance of task characteristics and method-related (cognitive-based, behavioral-based training) and person-related factors (e.g., cognitive ability, experience, motivation) to mitigate decay. Additionally, the results demonstrate that minor refresher interventions are effective at attenuating complex cognitive skill decay. Implications for industry and training providers that aim to implement training and refresher interventions to attenuate skill decay in high-risk industries are provided. Researchers may use the information about the influences of person- and method-related factors, task characteristics, and refresher interventions presented in this scoping review as a starting point to conduct further empirical research by taking skill acquisition, retention, and transfer into account.
... Yet these processes radically alter the activity landscape and potentially the freedom of human action. The emerging ironies of autonomy, which echo the older 'ironies of automation' described by Bainbridge (1983), are that with the introduction of autonomy we frequently create rather than eliminate problems. The irony of autonomy is that such advancing developments improve task completion while decreasing the associated human skills as the task rapidly becomes vestigial (Hancock 2016). ...
... The human factors community has long had an interest in understanding the interactions between humans and automation, that is, the execution by a machine agent of a function previously performed by a human (Parasuraman & Riley, 1997; Rasmussen, 1983). Central topics of research include understanding the benefits and concerns of replacing humans with automation (e.g., Bainbridge, 1983; Strauch, 2018), the need for appropriate design of automation (Norman, 1990), the effect of automation failures on human take-over responses (Endsley & Kiris, 1995), factors pertaining to automation use, disuse, and misuse (Parasuraman & Riley, 1997), human performance in taking over from automation (Eriksson & Stanton, 2017; Hergeth et al., 2017; Weaver & DeLucia, 2020), and the consequences of levels of automation for Situation Awareness (SA), mental workload, and operator performance (Endsley & Kaber, 1999; Onnasch et al., 2014). Combined, these studies culminate in the notion of an automation conundrum (Endsley, 2017): the more reliable and robust automation becomes, the less likely it is that a human supervisor will notice critical information and be able to intervene effectively when required. ...
Article
Objective: In this review, we investigate the relationship between agent transparency, Situation Awareness, mental workload, and operator performance for safety-critical domains. Background: The advancement of highly sophisticated automation across safety-critical domains poses a challenge for effective human oversight. Automation transparency is a design principle that could support humans by making the automation's inner workings observable (i.e., "seeing-into"). However, experimental support for this has not been systematically documented to date. Method: Based on the PRISMA method, a broad and systematic search of the literature was performed, focusing on identifying empirical research investigating the effect of transparency on central Human Factors variables. Results: Our final sample consisted of 17 experimental studies that investigated transparency in a controlled setting. The studies typically employed three human-automation interaction types: responding to agent-generated proposals, supervisory control of agents, and monitoring only. There is an overall trend in the data pointing towards a beneficial effect of transparency. However, the data reveal variations in Situation Awareness, mental workload, and operator performance for specific tasks, agent types, and levels of integration of transparency information in primary task displays. Conclusion: Our data suggest a promising effect of automation transparency on Situation Awareness and operator performance, without the cost of added mental workload, for instances where humans respond to agent-generated proposals and where humans have a supervisory role. Application: Strategies to improve human performance when interacting with intelligent agents should focus on allowing humans to see into the agents' information processing stages, considering the integration of information in existing Human Machine Interface solutions.
... In some cases, this adds cost, complexity, and hazard without clear benefit (Dhanani et al., 2021; Kim & Anger, 2010; LaMattina et al., 2018). As well as the "Ironies" (Bainbridge, 1983) and "Surprises" (Woods et al., 1997) of automation associated with many devices, their introduction into the OR also presents challenges for physical, procedural, team, and organizational integration, while often being unrecognized within a culture that has been slow to adopt systems engineering principles (Russ et al., 2013; Waterson & Catchpole, 2016). ...
Article
Objective Using the example of robotic-assisted surgery (RAS), we explore the methodological and practical challenges of technology integration in surgery, provide examples of evidence-based improvements, and discuss the importance of systems engineering and clinical human factors research and practice. Background New operating room technologies offer potential benefits for patients and staff, yet also present challenges for physical, procedural, team, and organizational integration. Historically, RAS implementation has focused on establishing the technical skills of the surgeon on the console, and has not systematically addressed the new skills required for other team members, the use of the workspace, or the organizational changes. Results Human factors studies of robotic surgery have demonstrated not just the effects of these hidden complexities on people, teams, processes, and proximal outcomes, but also have been able to analyze and explain in detail why they happen and offer methods to address them. We review studies on workload, communication, workflow, workspace, and coordination in robotic surgery, and then discuss the potential for improvement that these studies suggest within the wider healthcare system. Conclusion There is a growing need to understand and develop approaches to safety and quality improvement through human-systems integration at the frontline of care. Precis: The introduction of robotic surgery has exposed under-acknowledged complexities of introducing complex technology into operating rooms. We explore the methodological and practical challenges, provide examples of evidence-based improvements, and discuss the implications for systems engineering and clinical human factors research and practice.
... There is suspicion or outright rejection of the technology [25]. ...
Article
Full-text available
Robotics is set to play a significant role in the maintenance of rail infrastructure. However, the introduction of robotics in this environment requires new ways of working for individuals, teams and organisations and needs to reflect societal attitudes if it is to achieve sustainable goals. The following paper presents a qualitative analysis of interviews with 25 experts from rail and robotics to outline the human and organisational issues of robotics in the rail infrastructure environment. Themes were structured around user, team, organisational and societal issues. While the results point to many of the expected issues of robotics (trust, acceptance, business change), a number of issues were identified that were specific to rail. Examples include the importance of considering the whole maintenance task lifecycle, conceptualising robotic teamworking within the structures of rail maintenance worksites, the complex upstream (robotics suppliers) and downstream (third-party maintenance contractors) supply chain implications of robotic deployment, and the public acceptance of robotics in an environment that often comes into direct contact with passengers and people around the railways. Recommendations are made in the paper for successful, human-centric rail robotics deployment.
... Automation requires organizations to solve issues related to the necessary infrastructure (space, electricity, or IT) and to take measures against noise or temperature burdens [26,99]. However, the most important risks in automation are potential downtimes, because automation can lead to a loss of personnel's manual skills, so that a switch to manual processes becomes more difficult over time [99,100]. ...
Article
Full-text available
Biomedical research is a prominent case of knowledge work, often driven by data and information. A major limiting factor in biomedical research is access to information when and where it is needed, namely on the job. Biomedical research is very data-driven, and even the amount of data that one researcher generates can be overwhelming. Consequently, researchers are prone to develop a psychological state called information overload, which hampers creative thinking. In order to facilitate optimal innovation strategies, research organizations are advised to implement assistance systems, which provide opportunities for digital data management in experimental laboratories directly at the work bench. Assistance systems have the potential to improve efficiency, quality, and reliability at the same time, while supporting researchers with the “dull side” of keeping records and entering data. This article provides a detailed technical overview of recent innovative solutions for the specific problems of experimental work in biomedical research listed below: 1) automation of standardized, repetitive methodological routines; 2) establishment of ubiquitous computing environments to facilitate access to and storage of digital information at various locations in wet labs; 3) replacement of paper-bound notebooks with electronic laboratory notebooks, which are enterprise software applications; 4) integration of office and lab work space into single lab benches with tabletop systems; 5) electronic guidance through complex pipetting experiments, which are automatically recorded; 6) helping researchers to remain focused on hands-on activities with augmented reality provided by smartglasses; and 7) voice assistance as a tool to keep hands free, in order to improve processes and increase efficiency. Since none of the reviewed innovations have become mainstream in research organizations yet, they were identified as disruptive technologies. This article will give a broad overview of those technologies and their characteristics and attempts to gauge their potential for future deployment in research laboratories.
... For example, the potentially adverse impact of highly automated systems on user situation awareness and workload, along with the potential for over-reliance and automation bias, became apparent decades ago in a series of transportation accidents and incidents [20, 21]. These 'ironies of automation' [22] can arise when technology is designed and implemented without due consideration of the impact on human roles or the interaction between people and the technology, which can result in ill-suited demands on the human, such as lengthy periods of passive monitoring, the need to respond to abnormal situations under time pressure, and difficulties in understanding what the technology is doing and why. Alarm fatigue, that is, the delayed response or reduced response frequency to alarms, is another phenomenon associated with automated systems that has been identified from major industrial accidents, such as the 1994 explosion and fires at the Texaco Milford Haven refinery. ...
Article
Full-text available
Full text open access: https://informatics.bmj.com/content/29/1/e100516.full
... Almost 40 years ago, Bainbridge (1983) reflected on the ironies of automation. Her brief but astute and strongly referenced paper revolves around the fact that machines work more precisely and more reliably than their operators, although "the more advanced a control system is, so the more crucial may be the contribution of the human operator" (ibid., p. 775) in case of anomalies. ...
Article
Full-text available
When Lisanne Bainbridge wrote about the counterintuitive consequences of increasing human–machine interaction, she concentrated on the resulting issues for system performance, stability, and safety. Now, decades later, however, the automated work environment is substantially more pervasive, sophisticated, and interactive. Current advances in machine learning technologies reshape the value, meaning, and future of the human workforce. While the ‘human factor’ still challenges automation system architects, new ironic settings have inconspicuously evolved that only become distinctly evident from a human-centered perspective. This brief essay discusses the role of the human workforce in human–machine interaction as machine learning continues to improve, and it points to the counterintuitive insight that although the demand for blue-collar workers may decrease, exactly this labor class increasingly enters more privileged working domains and establishes itself henceforth as ‘blue collar with tie.’
... e.g., in the design of working hours or the automation of process control systems and vehicles (cf. Bainbridge 1983). ...
... Humans working with automated systems is not a new phenomenon, but some issues related to human-automation cooperation remain to a large extent unsolved. Baxter et al. (2012) reviewed work carried out during the 30 years since Bainbridge (1983) formulated the now well-known ironies (e.g. that the human's task shifts from working with a process to monitoring an automation that now works with the process; that this is a shift of tasks rather than only a reduction in workload; that humans do not perform well in passive monitoring tasks). Baxter et al. showed that some of these ironies remained to be solved. ...
Article
Full-text available
Lack of support for handling a reduction of autonomy in a highly autonomous automation may lead to a stressful situation for a human who is forced to take over. We present a design approach, the Reduced Autonomy Workspace, to address this. The starting point is that the human and the automation work together in parallel control processes, but at different levels of autonomy in cognitive control, such as setting goals or implementing plans, which is different from levels of automation. When autonomy is reduced, the automation should consult the human by providing information that has been aligned to the level at which the human is working, and the timing of the provision should be adapted to suit the human’s work situation. This is made possible by allowing the automation to monitor the human in a separate process. The combination of these processes, information level alignment and timing of the presentation, are the key characteristics of the Reduced Autonomy Workspace. The Reduced Autonomy Workspace consists of four phases: identification of the need; evaluation of whether, and, if so, when and how to present information; perception of and response to the information by the human; and implementation of a solution by the automation. The timing of the information presentation should be adapted in real time to provide flexibility, while the level of the information provided should be tuned offline and kept constant to provide predictability. Use of the Reduced Autonomy Workspace can reduce the risk of surprising, stressful hand-over situations, and the need to monitor the automation to avoid them.
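The four phases read naturally as a control loop. Below is a minimal, hypothetical Python sketch of that loop; only the four-phase structure and the "level tuned offline, timing adapted online" split come from the abstract, while every name, threshold, and data structure is invented.

```python
import time

# Minimal sketch of the four-phase Reduced Autonomy Workspace loop.
# Every name, threshold, and data structure here is invented.

def identify_need(automation_state: dict) -> bool:
    # Phase 1: the automation detects that its autonomy is reduced.
    return automation_state["confidence"] < 0.5

def evaluate_presentation(human_state: dict, level: str):
    # Phase 2: decide whether, when, and how to present information.
    # The information level is kept constant (predictability); only the
    # timing adapts to the human's monitored work situation.
    if human_state["workload"] > 0.8:
        return None                       # defer: the human is busy
    return {"level": level, "when": time.time()}

def consult_human(message: dict) -> str:
    # Phase 3: the human perceives the information and responds
    # (stubbed; a real system would use an HMI, not a canned string).
    return f"human-provided plan at level '{message['level']}'"

def run_cycle(automation_state: dict, human_state: dict, level: str = "make_plans"):
    if not identify_need(automation_state):
        return
    message = evaluate_presentation(human_state, level)
    if message is None:
        return                            # retry on a later cycle
    # Phase 4: the automation implements the human's solution.
    automation_state["plan"] = consult_human(message)

state = {"confidence": 0.3}
run_cycle(state, {"workload": 0.4})
print(state.get("plan"))
```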
... This safety-critical situation was further supported by the notion of automation surprise (cf. Bainbridge, 1983), which the focus group explicitly considered to be a serious issue related to the introduction of ADAS. To facilitate safe usage of ADAS or ADS, the focus group urged that ADAS be intuitive, easy, and fun, and (perhaps most importantly) that drivers not be required to monitor an ADS as a key task during driving, nor that such monitoring become part of future driver training, as that was considered not viable, since it requires extensive training and certain personal characteristics not possessed by everybody (cf. ...
Article
Full-text available
The paper presents a framework to realise “meaningful human control” over Automated Driving Systems. The framework is based on an original synthesis of the results of the multidisciplinary research project “Meaningful Human Control over Automated Driving Systems”, led by a team of engineers, philosophers, and psychologists at Delft University of Technology from 2017 to 2021. Meaningful human control aims at protecting safety and reducing responsibility gaps. The framework is based on the core assumption that human persons and institutions, not hardware and software and their algorithms, should remain ultimately—though not necessarily directly—in control of, and thus morally responsible for, the potentially dangerous operation of driving in mixed traffic. We propose an Automated Driving System to be under meaningful human control if it behaves according to the relevant reasons of the relevant human actors (tracking), and if any potentially dangerous event can be related to a human actor (tracing). We operationalise the requirements for meaningful human control through multidisciplinary work in philosophy, behavioural psychology and traffic engineering. The tracking condition is operationalised via a proximal scale of reasons and the tracing condition via an evaluation cascade table. We review the implications and requirements for the behaviour and skills of human actors, in particular related to supervisory control and driver education. We show how the evaluation cascade table can be applied in concrete engineering use cases in combination with the definition of core components to expose deficiencies in traceability, thereby avoiding so-called responsibility gaps. Future research directions are proposed to expand the philosophical framework and use cases, supervisory control and driver education, real-world pilots, and institutional embedding.
Article
Full-text available
This study aims to systematically map and assess performance requirements for collision avoidance manoeuvring in two cases: a case where the navigator performs collision avoidance, and a case where collision avoidance is performed by a collision avoidance system with the navigator acting as its supervisor. An appraisal of collision avoidance manoeuvring was performed based on three data sources: the collision avoidance regulations, a ferry operator’s procedures, and interviews with navigators including in situ observations. A framework was established in which the gathered data was structured and analysed using a cognitive task analysis approach. Based on the results, performance requirements and information needs were established. Further work will focus on detailing the navigator’s information needs and the corresponding system’s transparency requirements to support effective human performance.
Article
Besides radically altering work, advances in automation and intelligent technologies have the potential to bring significant societal transformation. These transitional periods require an approach to analysis and design that goes beyond human-machine interaction in the workplace to consider the wider sociotechnical needs of envisioned work systems. The Sociotechnical Influences Space, an analytical tool motivated by Rasmussen's risk management model, promotes a holistic approach to the design of future systems, attending to societal needs and challenges, while still recognising the bottom-up push from emerging technologies. A study explores the concept and practical potential of the tool when applied to the analysis of a large-scale, 'real-world' problem, specifically the societal, governmental, regulatory, organisational, human, and technological factors of significance in mixed human-artificial agent workforces. Further research is needed to establish the feasibility of the tool in a range of application domains, the details of the method, and the value of the tool in design. Practitioner summary: Emerging automation and intelligent technologies are not only transforming workplaces, but may be harbingers of major societal change. A new analytical tool, the Sociotechnical Influences Space, is proposed to support organisations in taking a holistic approach to the incorporation of advanced technologies into workplaces and function allocation in mixed human-artificial agent teams.
Chapter
Theories, concepts, and methods for the humane design of work tasks and working conditions that have proven themselves in operational practice are presented and applied to the new challenges that arise for work design in the context of “New Work” and “Arbeit 4.0”. It becomes clear that these concepts and methods remain applicable, but that appropriate forms of participation, a strengthening of competencies for co-shaping and self-shaping one’s own work situation, and a preventive and prospective approach to work design are becoming ever more important.
Article
Full-text available
This article targets designers of tools for developing utility‐critical systems, including safety‐critical, usability‐critical, and productivity‐critical systems, as well as consumer products. Analyses of operational failure case studies indicate that many of them are due to insufficient support for coping with operational complexity. Operational complexity may be defined in terms of exceptional situations. Prior studies indicate the feasibility of reducing the operational complexity by merging theories from various domains with practices employed in various industries. The article presents a semi‐quantitative study of a universal HSI model, consisting of seven layers of generic mini models (GMM) used to cope with exceptions. Following the APA guidelines, the paper has four sections: introduction, method, results, and discussion.
Article
Supervision of automated systems is a ubiquitous aspect of most of our everyday activities and is even more necessary in high-risk industries (aeronautics, power plants, etc.). Performance monitoring related to our own error making has been widely studied. Here we propose to assess the neurofunctional correlates of system error detection. We used an aviation-based conflict avoidance simulator with a 40% error rate and recorded the electroencephalographic activity of participants while they were supervising it. Neural dynamics related to the supervision of the system's correct and erroneous responses were assessed in the time and time-frequency domains to address the dynamics of the error detection process in this environment. Two levels of perceptual difficulty were introduced to assess their effect on the evoked activity related to the detection of system errors. Using a robust cluster-based permutation test, we observed lower widespread evoked activity in the time domain for the detection of errors compared to correct responses, as well as higher theta-band activity in the time-frequency domain dissociating the detection of erroneous from that of correct system responses. We also showed a significant effect of difficulty on time-domain evoked activity, and of the phase of the experiment on spectral activity: a decrease in early theta and alpha at the end of the experiment, as well as interaction effects in the theta and alpha frequency bands. These results improve our understanding of the brain dynamics of performance monitoring activity in closer-to-real-life settings and are a promising avenue for the detection of error-related components in ecological and dynamic tasks.
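For readers unfamiliar with the statistics, a cluster-based permutation test of the kind mentioned can be run with MNE-Python in a few lines. The sketch below uses synthetic data with assumed shapes and effect sizes, not the study's recordings.

```python
import numpy as np
from mne.stats import permutation_cluster_test

rng = np.random.default_rng(0)

# Synthetic single-channel data, trials x time samples, for two
# conditions (system correct vs. system error); shapes and effect
# size are assumptions, not the study's recordings.
correct = rng.normal(0.0, 1.0, size=(40, 200))
error = rng.normal(0.3, 1.0, size=(40, 200))

# The cluster-based permutation test controls for multiple comparisons
# across time points without assuming where the effect occurs.
f_obs, clusters, cluster_pv, _ = permutation_cluster_test(
    [correct, error], n_permutations=1000, seed=0, out_type="mask"
)
print("clusters found:", len(clusters))
print("cluster p-values:", cluster_pv)
```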
Article
At the present early stage of application, automatic train driving systems still require constant supervision by the driver, and the driver's sustained attention level is very important. This study investigated the changes in drivers' concentration levels during the automatic driving of a high-speed EMU (electric multiple unit) train, as well as the influence of their attention on take-over performance when necessary. It was found that the degree of concentration decreased over time, with the inattention degree ranging from 20% to 57%. Attention was weakly correlated with the take-over time and strongly correlated with the stable running time. When drivers' inattention degree was controlled between 20% and 33%, their response speed to a take-over request might be improved, but inattention would affect their judgment of braking power after a take-over. When the inattention degree was over 33%, take-over performance became worse.
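The weak-versus-strong correlation contrast reported here is straightforward to compute; a toy Python illustration with invented numbers (not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented per-driver data: inattention degree (fraction of time
# off-task, spanning the reported 20%-57% range) against two
# take-over metrics. These are not the study's data.
inattention = rng.uniform(0.20, 0.57, 30)
takeover_time = 2.0 + 0.5 * inattention + rng.normal(0, 0.4, 30)     # weak link
stable_running = 10.0 + 25.0 * inattention + rng.normal(0, 1.0, 30)  # strong link

print("r(inattention, take-over time)     =",
      round(np.corrcoef(inattention, takeover_time)[0, 1], 2))
print("r(inattention, stable running time) =",
      round(np.corrcoef(inattention, stable_running)[0, 1], 2))
```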
Chapter
This chapter provides an overview of important topics in human resource management (HRM) that are affected by digitalization and automation. It outlines how work in HRM is changing in areas such as mental health at work, work design, leadership, and personnel development. The last section shifts focus and introduces a new way of working in HRM, known as HR analytics or people analytics. The individual sections also illuminate the fact that the various topics are not independent of each other and indeed intersect.
Chapter
Defence technologies, such as early-warning systems, are subject to exogenous and endogenous threats. The former may issue from jamming or, in a combat situation, anti-radiation missiles. The latter may issue from latent errors (Reason in Human Error. Cambridge University Press, Cambridge, 1990) introduced into the system at the initial design stage or during an upgrade, that is, through reactive patching (Weir in Debates in Risk Management. UCL Press, London, pp 114–126, 1996). It is easier to defend against exogenous than endogenous threats. Nevertheless, mindfulness when designing or upgrading a defence system reduces the risk of latent or embedded errors compromising reliability. This chapter will argue that systems that permit manual intervention, that is, manual override, are more reliable than systems that provide little or no opportunity for intervention. Referencing a Cold War near-miss, the chapter posits a negative relationship between coupling and reliability. That is, the more tightly coupled—that is, automated and linear—a system’s architecture, the less reliable it will be (other things remaining equal). It has become fashionable to characterise the human component as a liability—a latent error. The manner in which the Cold War crisis described below was resolved demonstrates the unfairness, indeed, recklessness of this characterisation. Keywords: Defence, Socio-technical systems, Coupling, Reliability, Human component, Asset
Article
The share of artificial intelligence (AI) jobs in total job postings has increased from 0.20% to nearly 1% between 2010 and 2019, but there is significant heterogeneity across cities in the United States (US). Using new data on AI job postings across 343 US cities, combined with data on subjective well-being and economic activity, we uncover the central role that service-based cities play to translate the benefits of AI job growth to subjective well-being. We find that cities with higher growth in AI job postings witnessed higher economic growth. The relationship between AI job growth and economic growth is driven by cities that had a higher concentration of modern (or professional) services. AI job growth also leads to an increase in the state of well-being. The transmission channel of AI job growth to increased subjective well-being is explained by the positive relationship between AI jobs and economic growth. These results are consistent with models of structural transformation where technological change leads to improvements in well-being through improvements in economic activity. Our results suggest that AI-driven economic growth, while still in the early days, could also raise overall well-being and social welfare, especially when the pre-existing industrial structure had a higher concentration of modern (or professional) services.
Article
Full-text available
The development of next-generation, intelligent manufacturing relies on the realization of human-cyber-physical systems. This key area of transdisciplinary research seeks to interrogate ways of supporting and integrating human cognition and expertise in complex cyber-physical systems, using modelling methods from artificial intelligence. This paper uses the exemplar of wear classification in thermal spraying to outline how relevant cognitive processes can be elicited and combined with technical variables in a singular, efficient and cognitively plausible modelling framework. To this end, eye tracking and high-resolution voltage measurements were performed in a pilot study, and a representative data set was generated for small data problems. Two multidimensional fuzzy pattern classification models were derived. Results show that both human and technical models are significant and complement one another. While the models still require optimization, they clearly show that the integrative approach is practicable and fruitful. Our findings call for further transdisciplinary research on the development of cognitive assistance as well as predictive maintenance and quality assurance.
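As a toy illustration of multidimensional fuzzy pattern classification in this spirit, the sketch below uses Gaussian membership functions as stand-ins for the paper's fuzzy pattern classes; the features, class parameters, and aggregation rule are all invented.

```python
import numpy as np

# Toy fuzzy pattern classifier: each wear class is described by
# per-feature Gaussian membership functions. The features mix a
# technical variable (mean coating voltage) with a cognitive one
# (fixation duration from eye tracking). All numbers are invented.
CLASSES = {
    "low_wear":  {"centre": np.array([24.0, 180.0]), "width": np.array([1.5, 40.0])},
    "high_wear": {"centre": np.array([27.5, 320.0]), "width": np.array([1.5, 60.0])},
}

def membership(x: np.ndarray, centre: np.ndarray, width: np.ndarray) -> float:
    # Product of per-dimension Gaussian memberships (one common aggregation).
    return float(np.prod(np.exp(-0.5 * ((x - centre) / width) ** 2)))

def classify(x: np.ndarray) -> dict:
    return {name: membership(x, p["centre"], p["width"]) for name, p in CLASSES.items()}

sample = np.array([26.8, 290.0])   # [voltage in V, fixation duration in ms]
print(classify(sample))            # graded membership in each wear class
```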
Article
Full-text available
Automated explosive detection systems for cabin baggage screening (EDSCB) highlight areas in X-ray images of passenger bags that could contain explosive material. Several countries have implemented EDSCB so that passengers can leave personal electronic devices in their cabin baggage. This increases checkpoint efficiency, but also task difficulty for screeners. We used this case to investigate whether the benefits of decision support systems depend on task difficulty. 100 professional screeners conducted a simulated baggage screening task. They had to detect prohibited articles built into personal electronic devices that were screened either separately (low task difficulty) or inside baggage (high task difficulty). Results showed that EDSCB increased the detection of bombs built into personal electronic devices when screened separately. When electronics were left inside the baggage, operators ignored many EDSCB alarms, and many bombs were missed. Moreover, screeners missed most unalarmed explosives because they over-relied on the EDSCB’s judgment. We recommend that when EDSCB indicates that the bag might contain an explosive, baggage should always be examined further in a secondary search using explosive trace detection, manual opening of bags and other means.
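Over-reliance of this kind is commonly quantified with signal detection measures; here is a hedged sketch with made-up counts (the study's actual data differ):

```python
from statistics import NormalDist

def dprime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Sensitivity d' with a log-linear correction against extreme rates."""
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# Invented counts for illustration: the same screeners judging bags the
# EDSCB alarmed on versus bags it left unalarmed.
print("alarmed bags:   d' =", round(dprime(45, 5, 20, 30), 2))
print("unalarmed bags: d' =", round(dprime(8, 42, 4, 46), 2))
# A much lower d' on unalarmed bags is the signature of over-reliance.
```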
Article
Developing vehicle automation that accommodates other road users and exhibits familiar behaviors may enhance traffic safety, efficiency, and fairness, leading to tolerance of the technology. However, the interdependence between vehicle automation and other road users makes these interactions more challenging to model than typical control and path planning tasks. Through the lens of joint activity theory, we model driver and pedestrian behavior to explore how they balance and negotiate competing risk and velocity goals through movement. Joint activity theory informs an interpretation of these movements as signals, which can be associated with perceptual processes. We use simulation-based inference to estimate parameters of coupled driver and pedestrian perceptual models using naturalistic driving data. Perceptual models provide links between the processes guiding evaluation of risk and velocity maintenance, and how they govern driver acceleration and pedestrian walking. We found that the coupled simulations describe how drivers adjust their yielding behavior in the face of pedestrian risk, and how risk affects pedestrians’ decisions to cross. Dynamic risk and velocity parameters predicted safety, efficiency, and fairness outcomes, suggesting that the parameters and their dynamic perceptual models describe important components of the interactions. Traditional approaches employ static, summary predictors, which may fail to capture their continuous evolution and negotiation over time. Dynamic models of the interaction between drivers and pedestrians can inform vehicle automation by identifying deviations from communication norms, extracting interaction features, and evaluating communication and coordination.
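Simulation-based inference here means fitting model parameters whose likelihood is intractable by comparing simulator output to data. A minimal rejection-ABC sketch with an invented one-parameter pedestrian model (not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_crossing_delays(risk_threshold: float, n: int = 200) -> np.ndarray:
    # Invented toy model: a pedestrian crosses early if perceived risk
    # (noisy, reduced when the driver yields) falls below a threshold.
    risk = np.maximum(0.0, 1.0 - 0.2 * rng.exponential(1.0, n)) + rng.normal(0, 0.05, n)
    return np.where(risk < risk_threshold, 0.5, 2.5) + rng.normal(0, 0.1, n)

observed = simulate_crossing_delays(0.6)   # stand-in for naturalistic data

# Rejection ABC: keep parameter draws whose simulated summary statistic
# (mean crossing delay) lands within a tolerance of the observed one.
draws = rng.uniform(0.2, 1.0, 5000)
accepted = [t for t in draws
            if abs(simulate_crossing_delays(t).mean() - observed.mean()) < 0.05]
print("accepted draws:", len(accepted))
print("posterior mean risk threshold ~", round(float(np.mean(accepted)), 2))
```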
Article
Full-text available
Objective The present study compared the performance, workload, and stress associated with driver vigilance in two types of vehicle: a traditional, manually operated vehicle, and a partially automated vehicle. Background Drivers of partially automated vehicles must monitor for hazards that constitute automation failures and the need for human intervention, but recent research indicates that a driver’s ability to do so declines as a function of time. That research lacked a comparison measure of driving without vehicle automation, so it is unknown to what degree these effects are specific symptoms of monitoring the roadway during an automated drive. Drivers in manual control of their vehicle must similarly monitor for hazards and may suffer similar vigilance decrements. Method Participants completed a simulated 40-minute drive while monitoring for hazards. Half of participants completed the drive with an automated driving system that maintained speed and lane position; the remaining half manually controlled the vehicle’s speed and lane position. Results Driver sensitivity to hazards decreased and tendency to make false alarms increased over time in the automated control condition, but not in the manual control condition. Drivers in both conditions detected fewer hazards as the drive progressed. Ratings of workload and task-induced stress were elevated similarly in both conditions. Conclusion Partially automated driving appears to uniquely impair driver vigilance by reducing the ability to discriminate between benign and dangerous events in the driving environment as the drive progresses. Application Applied interventions should target improvements in driver sensitivity to hazardous situations that signal potential automation failures.
Article
Human reliability analysis plays an important role in the safety assessment and management of rail operations. This paper discusses how the increasing availability of operational data can be used to develop an understanding of train driver reliability. The paper derives human reliability data for two driving tasks, stopping at red signals and controlling speed on approach to buffer stops. In the first of these cases, a tool has been developed that can estimate the number of times a signal is approached at red by trains on the Great Britain (GB) rail network. The tool has been developed using big data techniques and ideas, recording and analysing millions of pieces of data from live operational feeds to update and summarise statistics from thousands of signal locations in GB on a daily basis. The resulting driver reliability data are compared to similar analyses of other train driving tasks. This shows human reliability approaching the currently accepted limits of human performance. It also shows higher error rates amongst freight train drivers than passenger train drivers for these tasks. The paper highlights the importance of understanding the task specific performance limits if further improvements in human reliability are sought. It also provides a practical example of how big data could play an increasingly important role in system error management, whether from the perspective of understanding normal performance and the limits of performance for specific tasks or as the basis for dynamic safety indicators which, if not leading, could at least become closer to real time.
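The core calculation behind such reliability data is an error probability per task opportunity, with an interval that reflects how rare the events are. A sketch with invented counts (the GB tool's actual figures are not reproduced here):

```python
from math import sqrt

def error_rate_wilson(errors: int, opportunities: int, z: float = 1.96):
    """Human error probability per opportunity with a 95% Wilson interval."""
    p = errors / opportunities
    denom = 1 + z**2 / opportunities
    centre = (p + z**2 / (2 * opportunities)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / opportunities + z**2 / (4 * opportunities**2))
    return p, max(0.0, centre - half), centre + half

# Invented counts: SPAD-like events against estimated red-signal approaches.
rate, lo, hi = error_rate_wilson(errors=250, opportunities=2_000_000)
print(f"error probability ~ {rate:.2e} (95% CI {lo:.2e} to {hi:.2e})")
```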
Article
Purpose This paper aims to analyse three decision-making approaches that involve humans and artificial autonomous agents, namely, human “in the loop”, “on the loop” and “out of the loop” and identifies the decision characteristics that determine the choice of a decision-making approach. Design/methodology/approach This is a conceptual paper that analyses the relationships between the human and the artificial autonomous agents in the decision-making process from the perspectives of the agency theory, sustainability, legislation, economics and operations management. Findings The paper concludes that the human “out of the loop” approach is most suitable for quick, standardised, frequent decisions with low negative consequences of a wrong decision by the artificial intelligence taken within a well-defined context. Complex decisions with high outcome uncertainty that involve significant ethical issues require human participation in the form of a human “in the loop” or “on the loop” approach. Decisions that require high transparency need to be left to humans. Originality/value The paper evaluates the decision-making approaches from the perspectives of the agency theory, sustainability, legislation, economics and operations management and identifies the decision characteristics that determine the choice of a decision-making approach.
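The paper's conclusion reads naturally as a decision rule. The sketch below encodes it in Python with invented attribute names and an invented ordering of checks; the paper itself does not specify such an algorithm.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    frequent_and_standardised: bool
    low_consequence_of_error: bool
    well_defined_context: bool
    significant_ethical_issues: bool
    high_outcome_uncertainty: bool
    requires_high_transparency: bool

def choose_approach(d: Decision) -> str:
    # Encodes the paper's conclusion as a rule of thumb; the ordering
    # of the checks is an assumption, not something the paper specifies.
    if d.requires_high_transparency:
        return "human decides (no delegation)"
    if d.significant_ethical_issues or d.high_outcome_uncertainty:
        return "human in the loop / on the loop"
    if (d.frequent_and_standardised and d.low_consequence_of_error
            and d.well_defined_context):
        return "human out of the loop"
    return "human on the loop"

print(choose_approach(Decision(True, True, True, False, False, False)))
```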
Article
Humans routinely miss important information that is ‘right in front of our eyes’, from overlooking typos in a paper to failing to see a cyclist in an intersection. Recent studies on these ‘Looked But Failed To See’ (LBFTS) errors point to a common mechanism underlying these failures, whether the missed item was an unexpected gorilla, the clearly defined target of a visual search, or that simple typo. We argue that normal blindness is the by-product of the limited-capacity prediction engine that is our visual system. The processes that evolved to allow us to move through the world with ease are virtually guaranteed to cause us to miss some significant stimuli, especially in important tasks like driving and medical image perception.
Article
The industrial Air Separation Unit (ASU) is a complicated and tightly operated process. The use of dynamic process analytics is also a key element of the safe and economic operation of these processes, with increasing focus on predictive analytics to take preemptive actions. With the availability of real-time data from hundreds of sensors, the data analysis process should also consider the topology of the data, as seen in sensor networks. In this paper a novel tool is presented that considers the complex connectivity patterns in the sensor network and uses local adaptive disturbance estimations to predict global network-scale trends. The paper introduces the emerging field of Graph Signal Processing (GSP) and presents a rigorous derivation of the tool, starting from the extraction of the sensor network (in a graph-theoretical sense) from the data. This network, which is in the form of a matrix, is then used to derive a Kalman-filter type of state-space model driven by input disturbances. Multiple disturbance models (e.g., step, ramp, periodic) are included to allow the model to capture different kinds of disturbance propagation. Each graph node (representing a sensor) adapts dynamically and individually to the most recent detected disturbance. These estimated disturbances are propagated to the global network using the graph. Modifications to ensure stability are also discussed. The fidelity of the tool is tested on certain downtime events, and the paper concludes by discussing the advantages of the method and planned future improvements.
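A compressed sketch of the pipeline as described: a graph extracted from sensor correlations, then a disturbance propagated over it. The threshold and the simple diffusion update are invented stand-ins; no claim is made to match the paper's Kalman-filter derivation.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1) Extract a sensor network: adjacency from thresholded correlations.
data = rng.normal(size=(500, 6))                    # time x sensors (synthetic)
data[:, 1] += 0.8 * data[:, 0]                      # make two sensors related
corr = np.corrcoef(data.T)
A = (np.abs(corr) > 0.5).astype(float) - np.eye(6)  # invented threshold
L = np.diag(A.sum(axis=1)) - A                      # graph Laplacian

# 2) Propagate a locally detected step disturbance over the graph;
#    simple heat diffusion stands in for the paper's disturbance-driven
#    state-space model.
x = np.zeros(6)
x[0] = 1.0                                          # step disturbance at node 0
for _ in range(10):
    x = x - 0.1 * L @ x                             # x_{k+1} = (I - 0.1 L) x_k
print(np.round(x, 3))                               # disturbance spread network-wide
```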
Article
Introduction There is an increasing number of healthcare AI applications in development or already in use. However, the safety impact of using AI in healthcare is largely unknown. In this paper we explore how different stakeholders (patients, hospital staff, technology developers, regulators) think about the safety and safety assurance of healthcare AI. Methods 26 interviews were undertaken with patients, hospital staff, technology developers and regulators to explore their perceptions of the safety and safety assurance of AI in healthcare, using the example of an AI-based infusion pump in the intensive care unit. Data were analysed using thematic analysis. Results Participant perceptions related to: the potential impact of healthcare AI; requirements for human-AI interaction; safety assurance practices and regulatory frameworks for AI and the gaps that exist; and how incidents involving AI should be managed. Conclusion The description of a diversity of views can support responsible innovation and adoption of such technologies in healthcare. Safety and assurance of healthcare AI need to be based on a systems approach that expands the current technology-centric focus. Lessons can be learned from experiences with highly automated systems across safety-critical industries, but issues such as the impact of AI on the relationship between patients and their clinicians require greater consideration. Existing standards and best practices for the design and assurance of systems should be followed, but there is a need for greater awareness of these among technology developers. In addition, the wider ethical, legal, and societal implications of the use of AI in healthcare need to be addressed.
Article
As algorithms become an influential component of government decision-making around the world, policymakers have debated how governments can attain the benefits of algorithms while preventing the harms of algorithms. One mechanism that has become a centerpiece of global efforts to regulate government algorithms is to require human oversight of algorithmic decisions. Despite the widespread turn to human oversight, these policies rest on an uninterrogated assumption: that people are able to effectively oversee algorithmic decision-making. In this article, I survey 41 policies that prescribe human oversight of government algorithms and find that they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, as a result of the first flaw, human oversight policies legitimize government uses of faulty and controversial algorithms without addressing the fundamental issues with these tools. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies provide a false sense of security in adopting algorithms and enable vendors and agencies to shirk accountability for algorithmic harms. In light of these flaws, I propose a shift from human oversight to institutional oversight as the central mechanism for regulating government algorithms. This institutional approach operates in two stages. First, agencies must justify that it is appropriate to incorporate an algorithm into decision-making and that any proposed forms of human oversight are supported by empirical evidence. Second, these justifications must receive democratic review and approval before the agency can adopt the algorithm.
Chapter
With the use of digitally networked, AI-supported human-machine systems (MMS) at work, new health risks arise, but so do new avenues for preventive work design and new opportunities for occupational safety and work-related health promotion. At the same time, the growing use of "machine learning" and other forms of so-called "artificial intelligence" as an "enabler of autonomation" could further sharpen the long-described ironies of automation, with considerable risks to safety and health extending far beyond the workforce itself. The chapter calls for a paradigm shift in the MMS design process towards participatory approaches and "sociotechnical system design 2.0".
Chapter
Person-related work refers to activities in which the object of work is a human being. Counselling, teaching and training activities belong here, as do medical, therapeutic, nursing and artistic activities. This chapter presents the commonalities and differences within this field of activity as well as its characteristic features, the latter using professional nursing as an example. It then examines how the use of digital technologies can change these features, what consequences this may have for workers, and how digitalised person-related work can be designed humanely. The authors set out why a prerequisite for the use of technology in person-related work must be its positive effect on interaction, i.e. on the relationship work with clients, patients or customers.
Article
Full-text available
The present study examined how task priority influences operators’ scanning patterns and trust ratings toward imperfect automation. Previous research demonstrated that participants display lower trust and fixate less frequently on a visual display for the secondary task assisted with imperfect automation when the primary task demands more attention. One account for this phenomenon is that the increased primary task demand induced the participants to prioritize the primary task over the secondary task. The present study asked participants to perform a tracking task, system monitoring task, and resource management task simultaneously using the Multi-Attribute Task Battery (MATB) II. Automation assisted the system monitoring task with 70% reliability. Task load was manipulated via the difficulty of the tracking task. Participants were explicitly instructed either to prioritize the tracking task over all other tasks (tracking priority condition) or to allow reduced tracking performance (equal priority condition). The results demonstrate the effects of task load on attention distribution, task performance and trust ratings. Furthermore, participants under the equal priority condition reported lower performance-based trust when the tracking task required more frequent manual input, while no effect of task load was observed under the tracking priority condition. Task priority can thus modulate automation trust by eliminating the adverse effect of task load in a dynamic multitasking environment.
Article
Full-text available
Highly automated vehicles are making their way towards implementation in many modes of transportation, including shipping. From the safety perspective, it is critically important that such vehicles or the operators overseeing them maintain their sense of the environment, also referred to as situational awareness. The present study investigates the worldwide research effort focusing on situational awareness for autonomous transport and explores how the maritime domain could benefit from it. The results indicate that most of the research originates from the automotive sector, but the topic is developing fast in other transportation modes too. Some findings have been shared across the modes of transportation, but only to a limited extent. Although technology development is performed based on the achievements within basic research domains, there has been little feedback from applied sciences. Similarly, collaborative research is not strongly developed.
Article
This paper provides a systematic literature review (SLR) of technological advances in risk management, including methods combined with proactive, interactive, and predictive measures that are currently being used to mitigate risks in the aviation sector. Predictive and interactive methods can create error-tolerant systems, prevent accidents, and improve quality and safety systems by providing feedback to the system. The study began with a preliminary review of the human error and risk management fields. An initial search string was created using the keywords of these initial references. The study then developed an iterative protocol, searching and selecting articles while progressively refining the research scope. Once the scope was clearly defined, the articles were chosen using three selection criteria. The findings of the systematic literature review indicate that current risk tools and models are reactive, but there has been a significant recent effort to study proactive, interactive, and predictive analysis methods. There is an opportunity to use and develop advanced data analysis tools or artificial intelligence (AI) to mitigate risk in a more predictive way for the aviation sector.
Article
Smart home inhabitants can specify trigger-condition-action rules to control the home's behavior. As the number of rules and their complexity grow, however, so does the probability of issues such as inconsistencies and redundancies. These can lead to unintended behavior, including security vulnerabilities and wasted resources, which harms the inhabitants' trust in the system. Existing approaches to handling unintended behavior typically require inhabitants to define all-encompassing, permanent solutions by modifying the rules. Although this is fitting in certain situations, unforeseen situations can still occur. We argue that the user must always have the last word to avoid unwanted behaviors, without having to alter the overall behavior. With FortClash, we present an approach to predict many different types of unintended behavior, and contribute four novel mechanisms to mediate them that rely on making one-time exceptions. With FortClash, inhabitants gain a new tool to deal with unintended behavior in the short term that is compatible with existing long-term approaches such as editing rules.
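As a concrete illustration of one class of unintended behaviour such systems target, the sketch below flags pairs of trigger-condition-action rules that fire together but drive the same device to conflicting states, and points at a one-time exception as the mediation. The rule representation and detection logic are my own minimal assumptions, not FortClash's implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    trigger: str    # event that fires the rule
    condition: str  # guard that must hold
    action: str     # e.g. "light=on"

def find_inconsistencies(rules):
    """Flag rule pairs that fire on the same trigger/condition but set the
    same device to conflicting states (one kind of unintended behaviour)."""
    issues = []
    for i, a in enumerate(rules):
        for b in rules[i + 1:]:
            if a.trigger == b.trigger and a.condition == b.condition:
                dev_a, val_a = a.action.split("=")
                dev_b, val_b = b.action.split("=")
                if dev_a == dev_b and val_a != val_b:
                    issues.append((a, b))
    return issues

rules = [
    Rule("motion_detected", "mode=night", "light=on"),
    Rule("motion_detected", "mode=night", "light=off"),  # conflicts with above
]
for a, b in find_inconsistencies(rules):
    # A one-time exception suppresses one rule for this occurrence only,
    # leaving the long-term rule set untouched.
    print(f"conflict: {a.action!r} vs {b.action!r} -> ask inhabitant for a one-time exception")
```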
Article
Full-text available
The contribution outlines basic considerations for a concept of complementary work design. It advances the thesis that AI potentially enables new value-creation concepts that aim at rationalisation through complementarity rather than through the replacement (substitution) of labour. Greater adaptivity makes new, complementary forms of collaboration between AI and humans possible. Limits to automation are used to justify why complementarities are permanently necessary and functional. However, both organisational concepts and guiding models for complementary work design are still lacking. On the basis of conceptual considerations and empirical case studies, the contribution sketches a first framework for complementary work design at the levels of human-machine interaction, work organisation and management strategy. Practical relevance: Companies face the challenge of integrating AI (be it software assistants or collaborative robots) into existing work practice. If this is done solely with a view to rationalisation through the substitution of labour, the particular potentials of AI, which lie in its specific flexibility and adaptivity, are not fully exploited, for AI also enables new forms of interaction and organisation, and thus new value-creation concepts. With its framework for complementary work design, the contribution offers food for thought and a heuristic for questioning traditional paths and reflecting on new options, with regard to rationalisation strategies as well as to the humanisation of work.
Chapter
Occupational health management (BGM), like work itself and health services in general, is being digitalised. This affects the entire BGM process from analysis through to individual interventions, and encompasses occupational safety, prevention and health promotion, and occupational reintegration management. Digital occupational health management (dBGM) employs a variety of digital methods and instruments. The spectrum is broad, ranging from online survey tools, apps, wearables and sensors embedded in clothing to online training for behavioural and environmental prevention, online coaching and digital assistance systems. Digital offerings can be provided as stand-alone measures or bundled as an integrated offering on platforms; they can supplement or even replace analogue measures. Possible uses of digital tools range from recording health-related data, linking data from different data sets and providing (individualised) feedback to making concrete suggestions for behavioural and/or environmental change. The chapter presents quality standards for digital applications in the context of dBGM and draws attention to the healthy design of digital work.
Article
Full-text available
Research on employee turnover since L. W. Porter and R. M. Steers's analysis of the literature reveals that age, tenure, overall satisfaction, job content, intentions to remain on the job, and commitment are consistently and negatively related to turnover. Generally, however, less than 20% of the variance in turnover is explained. Lack of a clear conceptual model, failure to consider available job alternatives, insufficient multivariate research, and infrequent longitudinal studies are identified as factors precluding a better understanding of the psychology of the employee turnover process. A conceptual model is presented that suggests a need to distinguish between satisfaction (present oriented) and attraction/expected utility (future oriented) for both the present role and alternative roles, a need to consider nonwork values and nonwork consequences of turnover behavior as well as contractual constraints, and a potential mechanism for integrating aggregate-level research findings into an individual-level model of the turnover process.
Article
Full-text available
As human and computer come to have overlapping decision-making abilities, a dynamic or adaptive allocation of responsibilities may be the best mode of human-computer interaction. It is suggested that the computer serve as a backup decision-maker, accepting responsibility when human workload becomes excessive and relinquishing responsibility when workload becomes acceptable. A queueing-theory formulation of multitask decision-making is used and a threshold policy for turning the computer on/off is proposed. This policy minimizes event-waiting cost subject to human workload constraints. An experiment was conducted with a balanced design of several subject runs within a computer-aided multitask flight management situation with different task demand levels. It was found that computer aiding enhanced subsystem performance as well as subjective ratings. The queueing model appears to be an adequate representation of the multitask decision-making situation, and to be capable of predicting system performance in terms of average waiting time and server occupancy. Server occupancy was further found to correlate highly with the subjective effort ratings.
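The threshold idea can be illustrated with a toy discrete-time queue in which a computer aid switches on when the task backlog exceeds an upper threshold and off when it falls back to a lower one. All parameters (arrival probability, service rates, thresholds with hysteresis) are invented for illustration and are not the paper's queueing formulation.

```python
import random

def simulate(threshold_on=4, threshold_off=2, arrival_p=0.5, steps=10_000):
    """Toy single-server task queue with a computer aid that switches on when
    the backlog exceeds `threshold_on` and off when it falls to `threshold_off`
    (hysteresis keeps the aid from chattering). Returns mean queue length."""
    random.seed(1)
    queue, aiding, total = 0, False, 0
    for _ in range(steps):
        queue += random.random() < arrival_p           # task arrival
        served = 2 if aiding else 1                    # aid doubles service rate
        queue = max(0, queue - served * (random.random() < 0.6))
        if queue > threshold_on:
            aiding = True                              # workload excessive
        elif queue <= threshold_off:
            aiding = False                             # workload acceptable
        total += queue
    return total / steps

print(f"mean task backlog: {simulate():.2f}")
```

Lowering `threshold_on` trades more computer involvement for shorter event-waiting times, which is the trade-off the proposed policy optimises under workload constraints.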
Article
Since complete automation may be a Utopian idea, the control engineer has to cope with man/machine systems. Examples are given of cases where human factors influence technical design. The interaction with social and political changes is also indicated. Social scientists have much to offer to control engineers. A brief survey is given of progress in the scientific analysis of human capabilities, limitations, needs and motivations. Experimental techniques specific to the social sciences are also touched upon. The design of man/machine systems is discussed, taking human factors into account right from the start. Modern technology can catalyse changes resulting from the application of job enrichment, group technology and worker participation to eliminate some human problems at work. Finally, reference is made to the recommendations of the IFAC Workshop on Productivity and Man.
Article
Rational design of a process control system using an on-line computer requires a definition of the total control task and an allocation of function between the human operator and the machine. A knowledge of the historical development of the role assigned to the human operator provides useful guidance in making the allocation decision. This development is described, with emphasis on the function performed by the operator in modern computer control systems, on the importance of different process characteristics, on the increased understanding of the operator's role obtained from attempts to automate it completely and on the need to choose appropriate systems when carrying out experimental studies of the operator.
Chapter
The generally safe and dependable commercial aviation industry has never had properly designed Caution and Warning Systems (CAWS) to alert the aircrew to operational or system malfunctions or emergency situations. When flight systems were simpler, relatively crude CAWS were manageable. Today, however, the complexity and size of modern avionics systems makes it crucial to have optimal systems to alert the crew to problems, and to assist them in handling them.
Chapter
The classical formula for training is simple enough. To train someone to do anything requires only: (1) opportunities to practise; (2) tests to check performance after practice; and, if practice and testing do not of themselves suffice, (3) hints, explanations or other information not intrinsic to performing the task. Industrial fault diagnosis training can present serious difficulties on all three counts.
Chapter
Within the context of this conference, we want to know the factors which affect human ability to detect and diagnose process failures, as a basis for console and job design. Understandably, human factors engineers want fully specified models of the human operator’s behaviour. It is also understandable that these engineers should make use of modelling procedures which are available from control engineering. These techniques are attractive for two reasons. They are sophisticated and well understood. They have also been very successful at giving first-order descriptions of human compensatory tracking performance in fast control tasks such as flying. In this context they are sufficiently useful that the criticism that they are inadequate as a psychological theory of this behaviour is, for many purposes, irrelevant. Engineers have therefore been encouraged to extend the same concepts to other types of control task. In this paper we will be considering particularly the control of complex, slowly changing industrial processes, such as steel making, petrochemicals and power generation.
Article
Systems whose failure can cause loss of life or large economic loss need to be tolerant to faults (i.e. faults in system hardware, software, and procedures). Examples of such systems include airplane autopilots in the automatic landing mode, electricity utility power generation plants, and telephone electronic switching systems (ESS). Such systems are characterized by high reliability; they fail infrequently and recover quickly when a fault does occur. The user usually cannot respond fast enough if and when a fault is detected. Even if he could respond, his proficiency would not be high because the fault occurs infrequently.
Article
This chapter discusses a comparative study of different man–machine systems for human control tasks. Potential man–machine problems are born in the design phase of the construction process. With the help of data obtained in an interview with a member of the technical management, a number of characteristics of the plant hardware, the control system, and the man–machine interface are formulated. Some formal characteristics of the organizational system are obtained by means of an interview with a member of the management. The factor achievement in the job satisfaction questionnaire is positively related to the dimensions activities (ACT), controllability of the process (CONT), and system ergonomics (ERG). The present analysis may lead to the conclusion that a comparative study of quite different man–machine systems, which implies an analysis at the level of the system and not at that of the individual operator, can provide meaningful results with regard to the human aspects of man–machine systems.
Article
The rapid technological advancements of the past decade, and the availability of higher levels of automation which resulted, have aroused interest in the role of man in complex systems. Should the human be an active element in the control loop, operating manual manipulators in response to signals presented by various instruments and displays? Or should the signals be coupled directly into an automatic controller, relegating the human to the monitoring role of a supervisor of the system’s operation?
Article
This symposium with its title, Human Detection and Diagnosis of System Failures, clearly implies that, at least in the immediate future, complex systems may have to resort to the skills of the human operator when problems arise during operation. However, the human attributes particularly appropriate to fault-finding are not inherent in the organism; operators of complex systems must be trained if they are to be efficient diagnosticians. This paper describes the development of a training programme specifically designed to help trainee process operators learn to recognise process plant breakdowns from an array of control room instruments. Although developed originally to train fault-finding in the context of continuous process chemical plant, it is probable that the techniques we are going to describe may prove to be equally effective in other industries. For example, power plants, crude oil refineries and oil production platforms all involve continuous processes which are operated from a central control room.
Article
The paper describes (1) the development of a simulator and (2) the first results of a training technique for the identification of plant failures from control panel indications. Input or signal features of the task present more simulation fidelity problems than its response or output features. Current techniques for identifying effective signals, e.g. 'blanking-off' information or protocol analysis, bias any description of problem solving since they require serial reporting, if not serial collection, of information by the operator. They also require inferences as to what constitutes an effective item of information. It is therefore argued that simulation should preserve all those features which may in principle provide, or influence acquisition of, diagnostic information, specifically panel layout, instrument design and display size. Further fidelity problems are the stress from operating in a dangerous environment; stress from hazards or sanctions following mistaken diagnosis; and the stress of diagnosing in a short time interval. The simulator uses back-projection to life size of slides of control panel mock-ups by a random access projector. Under an adaptive cumulative part regime, trainees saw on average 89 failure arrays in 30 min, an obvious advantage over the operational situation. In a test 24 hr after training, consisting of the eight faults each presented four times in random order, 4 out of 17 trainees made only one error in 32 diagnoses; the other trainees performed perfectly. Subjects' reports indicate very different solution strategies, e.g., recognition of alarm patterns, or serial instrument checking determined by heuristics of plant functioning. Several features of performance are consistent with the view that trainees use a minimal number of dimensions for correct discrimination and that these change as the number of different fault arrays increases. It is argued that this training regime should reduce stress. In particular it is argued that, according to current theories of stress, the fewer dimensions needed for diagnosis, the more robust will be diagnostic performance in dangerous environments.
Article
Reviews research on adult age differences in human memory, conducted largely within the framework of current theoretical views of memory and organized in terms of the topics and concepts suggested by these approaches. Because the literature on memory and aging is now so extensive, the review is selective, focusing on topics of current debate and largely on research reported in the preceding ten years. Topics covered include approaches to the study of memory (memory stores, processing models, memory systems); empirical evidence on sensory and perceptual memory, short-term and working memory, and age differences in working memory; age differences in encoding, including qualitative differences in encoding; age differences in retrieval; age differences in nonverbal memory; age differences in memory of the past and for the future; and aging and memory systems.
Article
Modes of human-computer interaction in the control of dynamic systems are discussed, and the problem of allocating tasks between human and computer considered. Models of human performance in a variety of tasks associated with the control of dynamic systems are reviewed. These models are evaluated in the context of a design example involving human-computer interaction in aircraft operations. Other examples include power plants, chemical plants, and ships.
Article
Thesis, University of Illinois at Urbana-Champaign. Includes bibliographical references (leaves 102–108).
Article
A full-mission simulation of a civil air transport scenario with two levels of workload was used to observe the actions of the crews and the basic aircraft parameters, and to record heart rates. The results showed that the number of errors varied considerably among crews, but the mean increased in the higher-workload case. The increase in errors was not related to a rise in heart rate but was associated with vigilance times as well as the number of days since the last flight. The recorded data also made it possible to investigate decision time and decision order. These also varied among crews and seemed related to the ability of captains to manage the resources available to them on the flight deck.
Article
The paper analyzes the role of human factors in flight-deck automation, identifies problem areas, and suggests design guidelines. Flight-deck automation using microprocessor technology and display systems improves performance and safety while leading to a decrease in size, cost, and power consumption. On the other hand, negative factors such as failure of automatic equipment, automation-induced error compounded by crew error, crew error in equipment set-up, failure to heed automatic alarms, and loss of proficiency must also be taken into account. Among the problem areas discussed are automation of control tasks, monitoring of complex systems, psychosocial aspects of automation, and alerting and warning systems. Guidelines are suggested for designing, utilising, and improving control and monitoring systems. Investigation into flight-deck automation systems is important, as the knowledge gained can be applied to other systems such as air traffic control and nuclear power generation; however, the many problems encountered with automated systems need to be analyzed and overcome in future research.
Article
In order to study the effects different logic systems might have on interrupted operation, an algebraic notation (AN) calculator and a reverse Polish notation (RPN) calculator were compared when trained users were interrupted during problem entry. The RPN calculator showed markedly superior resistance to interruption effects compared to the AN calculator, although no significant differences were found when users were not interrupted. Causes of and possible remedies for interruption effects are speculated upon. It is proposed that, because interruption is such a common occurrence, it be incorporated into comparative evaluation tests of different logic systems and control/display systems, and that interruption resistance be adopted as a specific design criterion.
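One plausible reading of this result is that RPN externalises every intermediate result on a visible stack, so a user returning from an interruption can inspect the machine state instead of reconstructing a half-entered expression. The toy sketch below illustrates that property; it is my illustration, not the study's apparatus.

```python
class RPNCalculator:
    """Minimal RPN evaluator: every intermediate result lives on the stack,
    so the machine state after an interruption is fully inspectable."""
    def __init__(self):
        self.stack = []

    def push(self, token):
        if token in "+-*/":
            b, a = self.stack.pop(), self.stack.pop()
            self.stack.append({"+": a + b, "-": a - b,
                               "*": a * b, "/": a / b}[token])
        else:
            self.stack.append(float(token))

calc = RPNCalculator()
for tok in ["3", "4", "+"]:
    calc.push(tok)
# ... interruption here: the stack ([7.0]) still shows the partial result ...
for tok in ["2", "*"]:
    calc.push(tok)
print(calc.stack)  # [14.0]
```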
Article
A four-stage model is presented for the control-mode man-computer interface dialogue. It consists of context development, semantic development, syntactic development, and command execution. Each stage is discussed in terms of operator skill levels (naive, novice, competent, and expert) and pertinent human factors issues. These issues are human problem solving, human memory, and schemata. The execution stage is discussed in terms of the operator's typing skills. This model provides an understanding of the human process in command-mode activity for computer systems and a foundation for relating system characteristics to operator characteristics.
Article
Much ergonomics research is published in non-archival form, e.g., government reports. Sometimes such reports are withheld from general circulation because they are judged to be militarily sensitive. Thus, potentially useful information becomes restricted to the limited number of scientists who are on an initial distribution list. Worse, since the work reported in such papers is not referenced, it goes unknown among a large population of workers who have entered the field since the first, limited publication and who have no way of knowing of its existence. Results of experiments carried out some years ago have been rewritten for publication in Applied Ergonomics. The reasons for this are that: (a) the original reports are now regarded as "unclassified"; and (b) the substantive problem, the effects of dividing tasks between men and computers in an on-line information system, continues to be of interest to ergonomists and others.
Article
A computer algorithm employing fading-memory system identification and linear discriminant analysis is proposed for real-time detection of human shifts of attention in a control and monitoring situation. Experimental results are presented that validate the usefulness of the method. Application of the method to computer-aided decision-making in multitask situations is discussed.
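Read as an algorithm, this abstract combines two standard ingredients: an exponentially discounted ("fading-memory") estimator of recent signal behaviour, and a linear discriminant that classifies the resulting features into attention states. The sketch below pairs those two ingredients; the features, data and parameters are invented for illustration and do not reproduce the paper's identification model.

```python
import numpy as np

def fading_memory(x, lam=0.95):
    """Exponentially discounted (fading-memory) running estimate of a signal."""
    est, out = 0.0, []
    for v in x:
        est = lam * est + (1 - lam) * v
        out.append(est)
    return np.array(out)

def fit_lda(X0, X1):
    """Fisher linear discriminant: weight vector and midpoint threshold."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # pooled scatter
    w = np.linalg.solve(Sw, m1 - m0)
    return w, w @ (m0 + m1) / 2

rng = np.random.default_rng(2)
# Invented per-window features: smoothed tracking error, smoothed control activity
attending = rng.normal([0.2, 1.0], 0.1, size=(200, 2))
diverted = rng.normal([0.8, 0.3], 0.1, size=(200, 2))
w, thresh = fit_lda(attending, diverted)

# Online use: features come from fading-memory estimates of the raw signals.
error_now = fading_memory(rng.normal(0.8, 0.3, size=50))[-1]
sample = np.array([error_now, 0.35])
print("attention shifted" if w @ sample > thresh else "attending")
```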