Article

Humans and Automation: Use, Misuse, Disuse, Abuse

Authors: Raja Parasuraman and Victor Riley

Abstract

This paper addresses theoretical, empirical, and analytical studies pertaining to human use, misuse, disuse, and abuse of automation technology. Use refers to the voluntary activation or disengagement of automation by human operators. Trust, mental workload, and risk can influence automation use, but interactions between factors and large individual differences make prediction of automation use difficult. Misuse refers to overreliance on automation, which can result in failures of monitoring or decision biases. Factors affecting the monitoring of automation include workload, automation reliability and consistency, and the saliency of automation state indicators. Disuse, or the neglect or underutilization of automation, is commonly caused by alarms that activate falsely. This often occurs because the base rate of the condition to be detected is not considered in setting the trade-off between false alarms and omissions. Automation abuse, or the automation of functions by designers and implementation by managers without due regard for the consequences for human performance, tends to define the operator's roles as by-products of the automation. Automation abuse can also promote misuse and disuse of automation by human operators. Understanding the factors associated with each of these aspects of human use of automation can lead to improved system design, effective training methods, and judicious policies and procedures involving automation use.
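To see why ignoring the base rate produces alarm systems that "cry wolf", consider a short Bayes-rule sketch with purely illustrative numbers (the rates below are assumptions, not values from the paper): even a sensitive alarm with a modest false-alarm rate yields mostly false alarms when the monitored condition is rare.

```python
# Illustrative only: posterior probability that an alarm is genuine, via Bayes' rule.
def posterior_true_alarm(base_rate, hit_rate, false_alarm_rate):
    """P(event | alarm) for a detector with the given hit and false-alarm rates."""
    p_alarm = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
    return hit_rate * base_rate / p_alarm

# Assumed numbers: a rare condition (1 in 1,000), 99% hit rate, 5% false-alarm rate.
print(posterior_true_alarm(0.001, 0.99, 0.05))  # ~0.019: fewer than 2 in 100 alarms are genuine
```

With a rare condition, operators who experience mostly false alarms may come to ignore the system, which is the disuse pattern the abstract describes.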


... Unfortunately, misalignment between human trust and the trustworthiness of automation is common. Fig. 1 depicts the phenomenon, based on seminal work by Parasuraman and Riley [31]. The terms misuse and disuse explain failures from flawed partnerships between humans and automation. ...
... Calibration (blue arrows) is efforts taken to remedy misalignment between trust and trustworthiness. Resolution represents how "precisely a judgment of trust differentiates levels of automation" [31]. The gray area shows an example of poor resolution, as different levels of trustworthiness map to the same trust. ...
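A toy sketch of the calibration idea in this excerpt (the scale, threshold, and labels are our own illustration, not part of the cited framework): compare trust and trustworthiness on a common scale and flag the misuse and disuse regions.

```python
# Illustrative classification of trust miscalibration; thresholds are assumed.
def diagnose(trust, trustworthiness, tolerance=0.1):
    """Both inputs on a 0-1 scale; 'tolerance' bounds the calibrated band."""
    gap = trust - trustworthiness
    if gap > tolerance:
        return "misuse risk (trust exceeds trustworthiness)"
    if gap < -tolerance:
        return "disuse risk (trustworthiness exceeds trust)"
    return "calibrated"

print(diagnose(0.9, 0.6))  # misuse risk
print(diagnose(0.3, 0.8))  # disuse risk
```

Poor resolution, in this picture, would mean that very different trustworthiness values map onto nearly the same trust value, so the gap becomes uninformative.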
... Second, trusted AI refactoring involves developing the UI to foster developers' trust by customizing the delivery of LLM-based recommendations in the IDE. Previous research on human factors suggests that misalignment between trust and trustworthiness can severely impede industry adoption [31]; we aspire to calibrate these two sides of trust. ...
Preprint
In the software industry, the drive to add new features often overshadows the need to improve existing code. Large Language Models (LLMs) offer a new approach to improving codebases at an unprecedented scale through AI-assisted refactoring. However, LLMs come with inherent risks such as breaking changes and the introduction of security vulnerabilities. We advocate for encapsulating the interaction with the models in IDEs and validating refactoring attempts using trustworthy safeguards. However, equally important for the uptake of AI refactoring is research on trust development. In this position paper, we position our future work based on established models from research on human factors in automation. We outline action research within CodeScene on development of 1) novel LLM safeguards and 2) user interaction that conveys an appropriate level of trust. The industry collaboration enables large-scale repository analysis and A/B testing to continuously guide the design of our research interventions.
... Thus, there are two perspectives to consider when analyzing reliance on any type of automation: 1) the operator's assessment of the situation, and 2) the operator's evaluation of the automation capabilities. Misjudgment of either factor can lead to inappropriate reliance, resulting in either misuse (using automation in unintended ways) or disuse (rejecting its capabilities) [3,5]. ...
... These errors of misuse and disuse have significant consequences across various fields, including aviation and automotive industries [5]. Misuse of automation can lead to decision biases and monitoring failures, as evidenced by incidents like the crash of Eastern Flight 401 [5]. ...
... These errors of misuse and disuse have significant consequences across various fields, including aviation and automotive industries [5]. Misuse of automation can lead to decision biases and monitoring failures, as evidenced by incidents like the crash of Eastern Flight 401 [5]. In the realm of AVs, the impact of these errors extends beyond the driver to encompass other road users. ...
Article
Full-text available
Inappropriate automation usage is a common cause of incidents in semi-autonomous vehicles. Predicting and understanding the factors influencing this usage is crucial for safety. This study aims to evaluate machine learning models in predicting automation usage from behavioral data and to analyze how workload, environment, performance, and risk influence automation usage for different conditions. An existing dataset from a driving simulator study with 16 participants across four automation conditions (Speed High, Speed Low, Full High, and Full Low) was used. Five machine learning models were trained, using different splitting techniques, to predict automation usage. The inputs to these models were features related to workload, environment, performance, and risk, pre-processed and optimized to reduce computational time. The best-performing model was used to analyze the impact of each factor on automation usage. Random Forest models consistently demonstrated the highest prediction power, with accuracy exceeding 79% for all conditions, providing a robust foundation for enhancing vehicle safety and optimizing human-automation collaboration. Additionally, the factors influencing automation usage ranked Workload > Environment > Performance > Risk, contrasting with the literature on pre-drive intentions to use automation. This study offers insights into real-time prediction of automation usage in semi-autonomous vehicles and quantifies the importance of key factors across different automation conditions. The findings reveal variations in prediction accuracy and factor importance across conditions, providing valuable implications for adaptive automated driving system design. Additionally, the hierarchy of factors influencing automation usage reveals a contrast between real-time decisions and pre-drive intentions, emphasizing the need for adaptive systems in dynamic driving conditions.
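As a rough sketch of the modeling pipeline summarized in this abstract, the snippet below trains a Random Forest on synthetic stand-ins for the workload, environment, performance, and risk features and reports accuracy and feature importances; the data, feature construction, and hyperparameters are placeholders and are not taken from the study.

```python
# Hedged sketch: Random Forest prediction of automation use with made-up data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # columns stand in for workload, environment, performance, risk
# Synthetic label: 1 = automation engaged, driven mostly by the first two columns.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
for name, imp in zip(["workload", "environment", "performance", "risk"],
                     model.feature_importances_):
    print(name, round(imp, 3))
```

The study's ranking of factors (Workload > Environment > Performance > Risk) corresponds to comparing importances of the real features rather than the synthetic ones used here.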
... T can be defined as the attitude of an individual towards the belief that an automated system will assist in accomplishing their objectives, particularly in situations where uncertainty and vulnerability are present (Lee and See, 2004). Existing literature has consistently shown that T plays a crucial role in determining the level of acceptance towards automated systems (Riley, 1994; Muir and Moray, 1996; Parasuraman and Riley, 1997; Lee and See, 2004). Factors such as age and automation experience (Bekier et al., 2011) act as drivers towards automation acceptance in air traffic management. ...
... T had a significant impact on BI which is consistent with previous research findings (Riley, 1994; Muir and Moray, 1996; Parasuraman and Riley, 1997; Lee and See, 2004). T showed a moderate, positive association with BI which further replicates the important relationship of T and BI in Trust and TAM model introduced by Gefen et al. (2003). ...
Article
Full-text available
The adoption of Electronic Flight Strips (EFS) is a global trend aimed at streamlining the routine tasks for Air Traffic Controllers (ATCs), yet Sri Lanka continues to use traditional paper-based systems. This study investigates the potential acceptance of EFS automation by Sri Lankan ATCs, examining factors such as perceived usefulness, perceived ease of use, trust, and attitude toward automation within the Technology Acceptance Model framework. Data from 48 Sri Lankan ATCs revealed that all factors were positively correlated with automation acceptance, with perceived ease of use emerging as the most influential construct. A classification by work unit showed Approach Controllers perceived EFS as having lower usefulness compared to Tower and Area Controllers, likely due to their unique workflow. Additionally, ATCs with prior automation experience have demonstrated stronger positive attitudes, trust, and willingness to adopt EFS, emphasizing the role of experience in fostering automation acceptance. Furthermore, age and gender showed no significant impact on acceptance levels. These findings provide critical insights for EFS system designers and management to tailor training and implementation strategies, highlighting the importance of designing intuitive interfaces, building trust in safety, and leveraging experienced ATCs to champion adoption.
... This discrepancy can lead to over-trust or under-trust, each presenting unique challenges. Specifically, when trust exceeds trustworthiness, misuse may occur, and when trustworthiness surpasses trust, disuse is likely [91]. Figure 6.2 demonstrates how the relationship between user trust and AI trustworthiness affects user engagement with AI. ...
... Disuse refers to the insufficient application of AI capabilities, such as ignoring or turning off AI-driven functionalities that could enhance safety or efficiency. Abuse involves applying AI without careful consideration of its potential negative impacts on human systems [91]. These scenarios highlight the critical need for balancing trust with the proven reliability and ethical deployment of AI technologies to prevent potential negative consequences. ...
Chapter
Full-text available
The emergence of large language models (LLMs), exemplified by ChatGPT, has ushered in a significant transformation in our interactions with artificial intelligence (AI). However, the widespread integration of AI technologies has raised significant concerns about user privacy protection. To establish a robust relationship between users and AI while safeguarding user privacy, trust and trustworthiness are fundamental factors in building user confidence and ensuring the responsible and ethical use of AI. Within this context, the authors reviewed previous research to examine the current global landscape of public trust in ChatGPT and its evolution over time. Furthermore, this chapter investigates the trustworthiness of AI by examining the risks and threats posed by ChatGPT from technical, legal, and ethical perspectives. The authors also explore the factors that influence user trust in AI, and propose strategies aimed at enhancing the trustworthiness of AI systems in safeguarding user privacy. In doing so, it supports people in becoming more aware and sensible about what they should and should not trust online with regard to AI. This chapter offers insights to a broad audience, including industry professionals, policymakers, educators, and the general public, all collaborating to achieve a harmonious equilibrium between user trust and AI trustworthiness. The goal is to enable the public to harness the benefits of advanced technology while safeguarding their privacy in the ChatGPT era.
... Trust in automation is defined as the "attitude that a technology benefits the goal and intention of a human interaction partner in a situation characterized by risk, uncertainty, and vulnerability" [27, p. 54]. It is well established that trust considerably influences how we interact with automated systems [28][29][30] including AVs [31,32]. When pedestrians encounter AVs for the first time, they have not formed a reliable mental model and substantiated expectations regarding the vehicles' capabilities and behavior. ...
... Miscalibrated trust, conversely, can lead to inappropriate and dangerous interactions with the system [38]. A person with insufficient trust tends to fail to utilize the system's capabilities while someone with excessive trust (overtrust) tends to use a system beyond its intended scope, possibly leading to dangerous outcomes [30]. Applied to pedestrian-AV interactions, this suggests that pedestrians with too little trust in AVs may be overly hesitant to cross and those with overtrust may cross without ensuring safe passage [25,39]. ...
Article
Full-text available
In recent years, there has been a debate on whether automated vehicles (AVs) should be equipped with novel external human–machine interfaces (eHMIs). Many studies have demonstrated how eHMIs influence pedestrians’ attitudes (e.g., trust in AVs) and behavior when they activate (e.g., encourage crossing by lighting up). However, very little attention has been paid to their effects when they do not activate (e.g., discourage crossing by not lighting up). We conducted a video-based laboratory study with a mixed design to explore the potential of two different eHMI messages to facilitate pedestrian-AV interactions by means of activating or not activating. Our participants watched videos of an approaching AV equipped with either a state eHMI (“I am braking”) or intent eHMI (“I intend to yield to you”) from the perspective of a pedestrian about to cross the road. They indicated when they would initiate crossing and repeatedly rated their trust in the AV. Our results show that the activation of both the state and intent eHMI was effective in communicating the AV’s intent to yield and both eHMIs drew attention to a failure to yield when they did not activate. However, the two eHMIs differed in their potential to mislead pedestrians, as decelerations accompanied by the activation of the state eHMI were repeatedly misinterpreted as an intention to yield. Despite this, user experience ratings did not differ between the eHMIs. Following a failure to yield, trust declined sharply. In subsequent trials, crossing behavior recovered quickly, while trust took longer to recover.
... However, despite their potential, one of the most significant challenges remains establishing the factors that influence the interaction between clinicians and these AI-driven tools, such as trust, perceived usefulness, and ease of use. No matter how advanced or effective a CDSS may be, its success hinges on clinicians' willingness to use and integrate its recommendations into their practice [12]. Thus, trust is a fundamental factor in successfully adopting and utilizing AI in healthcare settings [6]. ...
... However, Mayer's model primarily focused on interpersonal trust. In the context of human-technology interaction, Parasuraman et al. [12] expanded this understanding by examining how users interact with automation technology. Their study explored aspects such as usage patterns, reliance, and potential for misuse or overreliance on automation. ...
Preprint
Full-text available
Advances in machine learning have created new opportunities to develop artificial intelligence (AI)-based clinical decision support systems using past clinical data and improve diagnosis decisions in life-threatening illnesses such as breast cancer. Providing explanations for AI recommendations is a possible way to address trust and usability issues in black-box AI systems. This paper presents the results of an experiment to assess the impact of varying levels of AI explanations on clinicians' trust and diagnosis accuracy in a breast cancer application and the impact of demographics on the findings. The study includes 28 clinicians with varying medical roles related to breast cancer diagnosis. The results show that increasing levels of explanations do not always improve trust or diagnosis performance. The results also show that while some of the self-reported measures such as AI familiarity depend on gender, age and experience, the behavioral assessments of trust and performance are independent of those variables.
... Therefore, it is crucial to understand the mechanisms behind the decision to accept or reject AI input [28], and how the AI behavior might influence these processes. This is especially problematic in high-stakes situations where decisions affect human lives such as self-driving cars and medical diagnosis [28,29,30]. ...
... To define morally challenging situations, 128 situations with different risks, benefits, and costs combinations were presented in a first pre-test phase to a group of 5 expert military officers, who were asked to identify the most morally challenging possible scenarios. The situations contained combinations reporting the following parameters: (1) the strategic importance of the target (i.e., the military advantage in case of attack or the loss in case of non-attack), with either an advantage at the tactical (winning a battle), operational (winning a campaign) or strategic (winning the war) level; (2) the potential destruction of civilian objects and infrastructure, expressed as risk (0%, 25%, 50%, 75%, 100%) and value (low, medium, high value); (3) the potential loss of civilian life or civilian injury, expressed as risk (0%, 25%, 50%, 75%, 100%) and number (1, 10, 20, 30, 50, 100+ persons); (4) the potential loss of allied forces (personnel and material) expressed as risk (0%, 25%, 50%, 75%, 100%); and (5) the criticality of the consequences in case of collateral damage from the attack or if the target is not hit (tactical, operational, strategic). The combinations of the various parameters were meant to create situations of uncertainty, imposing moral demands on the decision (having to choose between probable benefits and probable costs, with various material and human values at stake). ...
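To make the factorial structure of such a scenario set concrete, here is a minimal sketch that enumerates combinations of the parameter levels quoted above with itertools; the variable names are ours, and the 128 situations used in the pre-test would be a screened subset of this much larger space.

```python
# Illustrative enumeration of the scenario parameter space described in the excerpt.
from itertools import product

importance = ["tactical", "operational", "strategic"]     # strategic importance of the target
risk_levels = [0.0, 0.25, 0.5, 0.75, 1.0]                  # shared risk scale (0%..100%)
infra_value = ["low", "medium", "high"]                    # value of civilian infrastructure
civilian_count = [1, 10, 20, 30, 50, "100+"]               # potential civilian casualties
criticality = ["tactical", "operational", "strategic"]     # criticality of consequences

scenarios = list(product(importance,
                         risk_levels, infra_value,         # (2) infrastructure risk and value
                         risk_levels, civilian_count,      # (3) civilian risk and number
                         risk_levels,                      # (4) risk to allied forces
                         criticality))
print(len(scenarios))  # full factorial space; the study presented a selected subset of 128
```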
Preprint
Full-text available
There is a growing interest in understanding the effects of human-machine interaction on moral decision-making (Moral-DM) and sense of agency (SoA). Here, we investigated whether the “moral behavior” of an AI may affect both moral-DM and SoA in a military population, by using a task in which cadets played the role of drone operators on a battlefield. Participants had to decide whether or not to initiate an attack based on the presence of enemies and the risk of collateral damage. By combining three different types of trials (Moral vs. two No-Morals) in three blocks with three types of intelligent system support (No-AI support vs. Aggressive-AI vs. Conservative-AI), we showed that participants' decisions in the morally challenging situations were influenced by the inputs provided by the autonomous system. Furthermore, by measuring implicit and explicit agency, we found a significant increase in the SoA at the implicit level in the morally challenging situations, and a decrease in the explicit responsibility during the interaction with both AIs. These results suggest that the AI behavior influences human moral decision-making and alters the sense of agency and responsibility in ethical scenarios. These findings have implications for the design of AI-assisted decision-making processes in moral contexts.
... It is defined as the belief that an agent will assist in achieving an operator's goals in uncertain and vulnerable situations [36]. Early research on human trust in automation showed that excessive trust leads to misuse-over-reliance on automation resulting in poor monitoring and decision biases-while insufficient trust leads to disuse, where automation is underutilized [37]. Trust is dynamic, influenced by the gap between expectation and observation [38]. ...
... Setting the appropriate level of automation involves balancing performance gains from automation against the potential risks associated with automation failures [43]. This process includes modeling various factors, such as the risk of human out-of-the-loop (OOTL) issues, task requirements, team cognition, human trust, and situation awareness [37]. Identifying the appropriate level of automation can be approached as a reasoning or optimization problem [47,48]. ...
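As a toy illustration of treating level-of-automation selection as an optimization problem, the sketch below scores hypothetical automation levels by expected performance gain minus expected failure cost; the levels, reliabilities, and costs are invented for illustration and do not come from the cited work.

```python
# Illustrative expected-value comparison of candidate levels of automation (LOA).
def expected_value(gain, reliability, failure_cost):
    return reliability * gain - (1 - reliability) * failure_cost

# Hypothetical LOAs: (performance gain, reliability, cost if the automation fails)
levels = {
    "manual":        (0.0, 1.00, 0.0),
    "decision aid":  (0.4, 0.95, 0.5),
    "supervisory":   (0.8, 0.90, 2.0),
    "full autonomy": (1.0, 0.85, 5.0),
}

best = max(levels, key=lambda k: expected_value(*levels[k]))
print(best)  # with these made-up numbers: "supervisory"
```

A fuller treatment would fold in the out-of-the-loop, trust, and situation-awareness factors mentioned in the excerpt as additional terms or constraints.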
Article
Full-text available
This positioning paper explores integrating smart in-process inspection and human–automation symbiosis within human–cyber–physical manufacturing systems. As manufacturing environments evolve with increased automation and digitalization, the synergy between human operators and intelligent systems becomes vital for optimizing production performance. Human–automation symbiosis, a vision widely endorsed as the future of human–automation research, emphasizes closer partnership and mutually beneficial collaboration between human and automation agents. In addition, to maintain high product quality and enable the in-time feedback of process issues for advanced manufacturing, in-process inspection is an efficient strategy that manufacturers adopt. In this regard, this paper outlines a research framework combining smart in-process inspection and human–automation symbiosis, enabling real-time defect identification and process optimization with cognitive intelligence. Smart in-process inspection studies the effective automation of real-time inspection and defect mitigation using data-driven technologies and intelligent agents to foster adaptability in complex production environments. Concurrently, human–automation symbiosis focuses on achieving a symbiotic human–automation relationship through cognitive task allocation and behavioral nudges to enhance human–automation collaboration. It promotes a human-centered manufacturing paradigm by integrating the studies in advanced manufacturing systems, cognitive engineering, and human–automation interaction. This paper examines critical technical challenges, including defect inspection and mitigation, human cognition modeling for adaptive task allocation, and manufacturing nudging design and personalization. A research roadmap detailing the technical solutions to these challenges is proposed.
... Disuse refers to failures resulting from an operator rejecting the capabilities of the automation and disabling, ignoring, or spending excessive time crosschecking the actions and decisions of the technology. Alternatively, misuse refers to failures resulting from an operator inadvertently violating critical assumptions and not monitoring the automation enough or depending on the automation when it should not be used (see Parasuraman & Riley, 1997). Thus, to provide clarity, a definition of trust is required, as it is often identified as the operant variable in disparate human-automation interaction and humanautonomy teaming paradigms. ...
... Second, the study lacked behavioral outcome measures. Generally, the purpose for investigating trust (and other predictor variables) is to determine automation use and how that use affects joint human-automation performance (Parasuraman & Riley, 1997). In laboratory settings, it is often easier to experimentally impose event rates that require human intervention, to study the effects of trust on behavior (e.g., Chancey et al., 2015, 2017). ...
Article
Full-text available
Trust development will play a critical role in remote vehicle operations transitioning from automated (e.g., requiring human oversight) to autonomous systems. Factors that affect trust development were collected during a high-fidelity remote uncrewed aerial system (UAS) simulation. Six UAS operators participated in this study, which consisted of 17 trials across two days per participant. Trust in two highly automated systems was measured pre- and post-study. Perceived risk and familiarity with the systems were measured before the study. Main effects showed performance-based trust and purpose-based trust increased between the pre- and post-study measurements. System familiarity predicted process-based trust. An interaction indicated that operators who rated the systems as riskier showed an increase in a single-item trust scale between the pre- and post-study measurement, whereas participants that rated the systems as less risky maintained a higher trust rating. Individual differences showed operators adapted to why the automation was being used, and trust improved between measurements. Qualitative analysis of open-ended responses revealed themes related to behavioral responses of the aircraft and transparency issues with the automated systems. Results can be used to support training interventions and design recommendations for appropriate trust in increasingly autonomous remote operations, as well as guide future research.
... Evidence from social psychology literature suggests that automation might induce distinct types of bias, arising from human processing of automated outputs. Notably, research in psychology on automated support systems (which precede AI) has shown that individuals are susceptible to "automation bias" or default deference to automated systems (Parasuraman & Riley, 1997; Skitka, Mosier & Burdick, 1999; Skitka, Mosier, Burdick & Rosenblatt, 2000; Mosier et al., 2001; Cummings, 2006; for a systematic review, see: Lyell & Coiera, 2017). The potential sources of automation bias are said to stem, on the one hand, from the belief in the perceived inherent authority or superiority of automated systems and, on the other, from "cognitive laziness", a reluctance to engage in cognitively complex mental processes and thorough information search and processing. ...
Article
Full-text available
Our contribution aims to propose a novel research agenda for behavioural public administration (BPA) regarding one of the most important developments in the public sector nowadays: the incorporation of artificial intelligence into public sector decision-making. We argue that this raises the prospect of a distinct set of biases and challenges for decision-makers and citizens, that arise in the human-algorithm interaction, and that thus far remain under-investigated in a bureaucratic context. While BPA scholars have focused on human biases and data scientists on ‘machine bias’, algorithmic decision-making arises at the intersection between the two. In light of the growing reliance on algorithmic systems in the public sector, fundamentally shaping the way governments make and implement policy decisions, and given the high-stakes nature of their application in these settings, it becomes pressing to remedy this oversight. We argue that behavioural public administration is well-positioned to contribute to critical aspects of this debate. Accordingly, we identify concrete avenues for future research, and develop theoretical propositions.
... H3b. Competence positively influences intrinsic value. Automation describes the execution by a machine agent of tasks previously carried out by humans (Parasuraman & Riley, 1997). Automation helps humans enhance affordability and simplicity (Luor et al., 2015). ...
Article
Full-text available
In the digital transformation scenario, banks must strengthen business sustainability through digital technologies, such as artificial intelligence-based chatbots. Crucial evidence illustrates that using chatbots allows banks to enhance business performance and bank–customer relationship. This study aims to unveil how chatbots bring customer experiences and whether customers adopt behavioral outcomes. Structural equation modeling is calculated to examine the hypotheses of a postulated research model with a valid sample of 336 respondents. The results reveal that extrinsic value is investigated because of the central indispensability of chatbot attributes, including understandability, automation, and competence. Meanwhile, understandability, personalization, interaction, and intimacy are of paramount importance in increasing the intrinsic value toward chatbot usage. Intrusiveness tends to inhibit customers’ intrinsic value. Furthermore, extrinsic and intrinsic values significantly predict customer satisfaction and continuance intention toward banking chatbots. Additionally, satisfaction is a motivation for customer intention to sustain the use of banking chatbots. In light of the obtained findings, it is imperative for banks to deem chatbot characteristics as the determinative motivations for customer experience because chatbots provide customers with valuable information and have the capability to address financial issues.
... Trust calibration is the dynamic change of trust to an appropriate level that matches system capability. A poor understanding of automation may lead to human misuse and disuse (Parasuraman & Riley, 1997). Misuse refers to overreliance on automation, which can result in failures of monitoring or decision biases, while disuse is the neglect or underutilisation of automation. ...
Article
Full-text available
As vehicles transition between driving automation levels, drivers need to be continually aware of the automation mode and the resulting driver responsibilities. This study investigates the impact of visual user interfaces (UIs) on drivers' mode awareness in SAE Level 2 automated vehicles. It focuses on their understanding of speed and distance control, steering control, and the hands-on steering wheel requirement presented through UIs. Forty-five UIs were generated, presenting the activation of Lane Keeping Assist (LKA) and Adaptive Cruise Control (ACC) and the hands-on steering wheel requirement. Through an online questionnaire with 1080 respondents with experience of SAE Level 2, the study evaluated how these visual UIs influenced users' understanding of control responsibilities, information usability, and trust in automated vehicles. The results show a limited role of UI in shaping users' understanding of control. ACC UIs and LKA UIs had no significant effects, and apparently, the understanding of speed and distance control and steering control was independent of the ACC UI and LKA UI. A large variance in responses regarding the understanding of steering control and speed and distance control indicates confusion caused by mode ambiguity, suggesting that drivers do not well understand how the speed and distance control and steering control task is shared between the driver and the automation. However, the hands-on steering wheel UIs significantly improved the understanding of the hands-on steering wheel requirement. The hands-on steering wheel UI combining the hands on the wheel icon and the text "Keep hands on steering wheel" yielded 94.4% correct understanding and outperformed the UI with hands but without text (87.8% correct) or no UI (82.5% correct). In addition, the variation of visual UI did not affect trust. This study contributes to the understanding and design of visual UIs for effective communication of driver responsibilities in automated vehicles.
... Specifically, perceived reliability is commonly assumed to be a crucial predictor for dependence behavior [12,13]. If perceived reliability remains low despite the increasing actual reliability of available systems, this might lead to system disuse [45]. System disuse with highly reliable systems in turn leads to suboptimal joint performance of human-AI dyads [46,47]. ...
... The rapid rate at which Artificial Intelligence (AI) is developing and the accelerating rate at which it is becoming integrated into human life necessitate a thorough understanding of the dynamics of human trust in AI (Glikson and Woolley, 2020; Teaming, 2022). Addressing questions about the factors, or antecedents, influencing trust in specific AI systems and the thresholds for excessive or insufficient trust is crucial for using AI responsibly and preventing potential misuse (Parasuraman and Riley, 1997; Lockey et al., 2021). ...
Preprint
Full-text available
Information extraction from the scientific literature is one of the main techniques to transform unstructured knowledge hidden in the text into structured data which can then be used for decision-making in down-stream tasks. One such area is Trust in AI, where factors contributing to human trust in artificial intelligence applications are studied. The relationships of these factors with human trust in such applications are complex. We hence explore this space from the lens of information extraction where, with the input of domain experts, we carefully design annotation guidelines, create the first annotated English dataset in this domain, investigate an LLM-guided annotation, and benchmark it with state-of-the-art methods using large language models in named entity and relation extraction. Our results indicate that this problem requires supervised learning which may not be currently feasible with prompt-based LLMs.
... The internet has not necessarily changed the way we physically process information, but rather the results that arise from the new conditions, including our attitude towards information (Ward, 2013). It has already been shown that people tend to over-rely on automation and therefore believe in erroneous advice from automation (Parasuraman et al., 1993; Parasuraman & Riley, 1997). It seems that people tend to naively hold beliefs about the external resource's utility (Weis & Wiese, 2019; Joachims et al., 2005) and quickly become addicted to getting information from external sources such as search engines because it's easy (Wang et al., 2017). ...
Article
Full-text available
The real-time availability of information and the intelligence of information systems have changed the way we deal with information. Current research is primarily concerned with the interplay between internal and external memory, i.e., how much and which forms of cognitively demanding processes we handle internally and when we use external storage media (analog on paper, digital on the computer). This interplay consequently influences how and what we memorize and learn. This study was motivated by the finding that people perform significantly worse in a quiz setting when they obtain content from external sources instead of reading the same content directly in the question system. In our experiments, we wanted to investigate whether interruption by the user interface (ethical appeal or forced time delay) can improve performance in the quiz when users obtain external content. We evaluated the results of 262 valid participants, each completing one of three topics at random and then randomly assigned to one of three possible conditions (ethical appeal, forced time delay, no interruption). The calculated one-way ANOVA shows a statistically significant F-value (F = 3.25, p < 0.05), and the separate t-tests show that an ethical appeal (t = 2.29, p < 0.05) and an enforced delay (t = 2.08, p < 0.05) lead to a significantly higher mean of quiz scores across all three topics. The mean values of the quiz scores of the ethical appeal and the forced time delay are not significantly different. The present results should contribute to further studies of proactive learning interfaces.
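For readers who want to reproduce this style of analysis, a minimal sketch follows using synthetic quiz scores (the group sizes and means are placeholders, not the study's data): a one-way ANOVA across the three conditions, followed by pairwise t-tests against the no-interruption control, via SciPy.

```python
# Hedged sketch of the reported analysis with simulated scores.
import numpy as np
from scipy.stats import f_oneway, ttest_ind

rng = np.random.default_rng(1)
no_interruption = rng.normal(5.0, 1.5, 90)   # placeholder quiz scores per condition
ethical_appeal  = rng.normal(5.6, 1.5, 86)
forced_delay    = rng.normal(5.5, 1.5, 86)

F, p = f_oneway(no_interruption, ethical_appeal, forced_delay)
print(f"ANOVA: F = {F:.2f}, p = {p:.3f}")
print("appeal vs. control:", ttest_ind(ethical_appeal, no_interruption))
print("delay vs. control: ", ttest_ind(forced_delay, no_interruption))
```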
... This phenomenon, sometimes referred to as the 'automation paradox,' suggests that as trust increases, users may also experience heightened concerns about losing control, leading to a decrease in the likelihood of continued use (Parasuraman and Riley 1997). In the context of ChatGPT, this might explain why trust negatively moderates its adoption, as users who place too much trust in the system may become wary of the lack of human involvement in decision-making processes. ...
Article
Full-text available
ChatGPT transforms the shopping experience by providing responses in human‐like language about products, services, and brands to customers. This study investigated the influential drivers of intention to use ChatGPT to obtain shopping information. We extended the “extended unified theory of acceptance and use of technology” UTAUT2 by incorporating the direct and moderating effects of trust and technology anxiety. To test the model on data from 412 respondents, a hybrid Partial Least Squares—Artificial Neural Network (PLS‐ANN) approach was employed. This approach combines the strengths of PLS for modeling complex variable relationships and ANN for capturing nonlinear dependencies and interactions. PLS analysis identified performance expectancy, effort expectancy, facilitating conditions, hedonic motivation, and trust as significant drivers of ChatGPT usage. The associations between the intention to use ChatGPT and its predictors are negatively moderated by trust and technology anxiety. ANN analysis revealed that trust has the highest effect on the choice to use ChatGPT, followed by facilitating conditions, performance expectancy, hedonic motivation, and effort expectancy. By extending the UTAUT2 framework and applying the PLS‐ANN method, this study advances the theoretical understanding of technology adoption and provides practical insights for marketers and developers of AI‐driven text generators. It emphasizes the importance of building trust and alleviating technology anxiety to promote wider adoption of ChatGPT. The broader significance of this research lies in its contribution to shaping the future of retail and e‐commerce strategies by encouraging a more informed and user‐centric development of AI technologies in the shopping domain.
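As a rough illustration of the neural-network stage of such a hybrid analysis, the sketch below fits a small network to synthetic survey-style data and ranks predictor importance by permutation; the variable names mirror the UTAUT2 constructs mentioned in the abstract, but the data, model settings, and importance method are illustrative assumptions rather than the study's actual PLS-ANN procedure.

```python
# Hedged sketch: ANN-based ranking of predictors on made-up survey data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
names = ["trust", "facilitating_conditions", "performance_expectancy",
         "hedonic_motivation", "effort_expectancy"]
X = rng.normal(size=(412, len(names)))
# Synthetic intention-to-use outcome, weighted toward the first predictors.
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.3, size=412)

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
imp = permutation_importance(net, X, y, n_repeats=20, random_state=0)
for name, value in sorted(zip(names, imp.importances_mean), key=lambda t: -t[1]):
    print(name, round(value, 3))
```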
... It thus comes as no surprise that TRBs are the more objective method of observation due to the fact that people are not always consistent in their ratings, and may sincerely feel different levels of trust while performing similar TRBs [36]. Parasuraman and Riley [106] were interested in understanding the use of automation by humans, and defined terms to describe that use. Here it is proposed that, by extension, those terms also apply to the behaviors of humans towards more advanced AIAs. ...
Preprint
People who design, use, and are affected by autonomous artificially intelligent agents want to be able to trust such agents -- that is, to know that these agents will perform correctly, to understand the reasoning behind their actions, and to know how to use them appropriately. Many techniques have been devised to assess and influence human trust in artificially intelligent agents. However, these approaches are typically ad hoc, and have not been formally related to each other or to formal trust models. This paper presents a survey of algorithmic assurances, i.e. programmed components of agent operation that are expressly designed to calibrate user trust in artificially intelligent agents. Algorithmic assurances are first formally defined and classified from the perspective of formally modeled human-artificially intelligent agent trust relationships. Building on these definitions, a synthesis of research across communities such as machine learning, human-computer interaction, robotics, e-commerce, and others reveals that assurance algorithms naturally fall along a spectrum in terms of their impact on an agent's core functionality, with seven notable classes ranging from integral assurances (which impact an agent's core functionality) to supplemental assurances (which have no direct effect on agent performance). Common approaches within each of these classes are identified and discussed; benefits and drawbacks of different approaches are also investigated.
... Regardless of the accuracy of these tools, overreliance on automation can lead to errors in decision-making. Automation bias denotes the phenomenon where decisions are influenced by an overreliance or excessive dependence on AI-based systems, even when these systems may be flawed or incorrect (Bond et al., 2019; Lyell & Coiera, 2017; Parasuraman & Riley, 1997). For example, a student unquestioningly accepts a high grade from an AI-based grading system without considering the validity of the feedback or their own understanding of the material. ...
Article
Full-text available
The integration of artificial intelligence (AI) in educational measurement has transformed assessment methods, allowing for automated scoring, swift content analysis, and personalized feedback through machine learning and natural language processing. These advancements provide valuable insights into student performance while also enhancing the overall assessment experience. However, the implementation of AI in education also raises significant ethical concerns regarding validity, reliability, transparency, fairness, and equity. Issues such as algorithmic bias and the opacity of AI decision-making processes risk perpetuating inequalities and affecting assessment outcomes. In response, various stakeholders, including educators, policymakers, and testing organizations, have developed guidelines to ensure the ethical use of AI in education. The National Council of Measurement in Education's Special Interest Group on AI in Measurement and Education (AIME) is dedicated to establishing ethical standards and advancing research in this area. In this paper, a diverse group of AIME members examines the ethical implications of AI-powered tools in educational measurement, explores significant challenges such as automation bias and environmental impact, and proposes solutions to ensure AI's responsible and effective use in education.
... This becomes problematic when drivers of automated vehicles do not understand that system functions do not work under all conditions due to their lack of knowledge [11]. This lack of understanding of DAS can not only affect road safety [24,25] and limit the expected benefits of automated driving [26], but also lead to a loss of confidence in the system, causing users to choose not to purchase or use potentially helpful and safety-enhancing systems [11]. ...
Chapter
Studies show that shared automated vehicles (SAV) can increase public acceptance of automation and enable affordable on-demand mobility. However, the varying levels of knowledge about driver assistance systems (DAS) among users of automated vehicles can lead to their underuse or misuse. With the concept of vehicle sharing, familiarity with DAS varies depending on the individual user and the vehicle used. To address this issue, drivers of SAV need to be adequately informed about DAS and their intended use. This information could be provided before the trip through tutorials or during the trip through human-machine interfaces (HMI). Understanding user characteristics and expectations is essential to make these information concepts appealing. Accordingly, we conducted an online survey to gain insights into drivers' utilization patterns and preferences concerning shared vehicles and the use of automated driving features. The results indicate a desire for more information about the availability and intended use of DAS among users but a rejection of mandatory pre-driving introductions. The study highlights the importance of configurable and adaptive HMI concepts to provide information while driving, respecting the drivers' needs and preferences.
... Research has shown that excessive trust in automated systems may lead to automation bias, where users are more likely to overlook their own assessments or fail to challenge AI-generated decisions [115,116]. Striking a balance between AI-driven automation and human oversight is essential to ensure responsible and safe infrastructure management [117]. ...
Article
Full-text available
This study explores the growing influence of artificial intelligence (AI) on structural health monitoring (SHM), a critical aspect of infrastructure maintenance and safety. This study begins with a bibliometric analysis to identify current research trends, key contributing countries, and emerging topics in AI-integrated SHM. We examine seven core areas where AI significantly advances SHM capabilities: (1) data acquisition and sensor networks, highlighting improvements in sensor technology and data collection; (2) data processing and signal analysis, where AI techniques enhance feature extraction and noise reduction; (3) anomaly detection and damage identification using machine learning (ML) and deep learning (DL) for precise diagnostics; (4) predictive maintenance, using AI to optimize maintenance scheduling and prevent failures; (5) reliability and risk assessment, integrating diverse datasets for real-time risk analysis; (6) visual inspection and remote monitoring, showcasing the role of AI-powered drones and imaging systems; and (7) resilient and adaptive infrastructure, where AI enables systems to respond dynamically to changing conditions. This review also addresses the ethical considerations and societal impacts of AI in SHM, such as data privacy, equity, and transparency. We conclude by discussing future research directions and challenges, emphasizing the potential of AI to enhance the efficiency, safety, and sustainability of infrastructure systems.
... When an accident that is not directly attributable to the AV itself occurs, a level of blame will be assigned to it, and trust will be diminished [17,30], affecting the potential acceptance, adoption and continued usage of the technology. Such ironies of automation are not new: they were predicted more than 40 years ago [31], with the effects extended by others [32] and more recently by some AV sceptics [33]. Trust is a key enabler of the adoption and continued usage of many technologies, a point that has been stressed within Human Factors and related fields for decades, often in response to advances in automation [34], including AVs [35]. ...
Article
Full-text available
Despite the increasing sophistication of autonomous vehicles (AVs) and promises of increased safety, accidents will occur. These will corrode public trust and negatively impact user acceptance, adoption and continued use. It is imperative to explore methods that can potentially reduce this impact. The aim of the current paper is to investigate the efficacy of informational assistants (IAs) varying by anthropomorphism (humanoid robot vs. no robot) and dialogue style (conversational vs. informational) on trust in and blame on a highly autonomous vehicle in the event of an accident. The accident scenario involved a pedestrian violating the Highway Code by stepping out in front of a parked bus and the AV not being able to stop in time during an overtake manoeuvre. The humanoid (Nao) robot IA did not improve trust (across three measures) or reduce blame on the AV in Experiment 1, although communicated intentions and actions were perceived by some as being assertive and risky. Reducing assertiveness in Experiment 2 resulted in higher trust (on one measure) in the robot condition, especially with the conversational dialogue style. However, there were again no effects on blame. In Experiment 3, participants had multiple experiences of the AV negotiating parked buses without negative outcomes. Trust significantly increased across each event, although it plummeted following the accident with no differences due to anthropomorphism or dialogue style. The perceived capabilities of the AV and IA before the critical accident event may have had a counterintuitive effect. Overall, evidence was found for a few benefits and many pitfalls of anthropomorphising an AV with a humanoid robot IA in the event of an accident situation.
... Misplaced trust in an untrustworthy AI can lead to misuse of the technology with potentially very negative consequences (Lee, 2008; Parasuraman & Riley, 1997). International media recently reported on how the chatbot Replika (a personalised AI designed to become a "friend") encouraged a user to carry out an attack on the Queen of England (Singleton et al., 2023). ...
Article
Full-text available
The concept of trust in artificial intelligence (AI) has been gaining increasing relevance for understanding and shaping human interaction with AI systems. Despite a growing literature, there are disputes as to whether the processes of trust in AI are similar to that of interpersonal trust (i.e., in fellow humans). The aim of the present article is twofold. First, we provide a systematic test of an integrative model of trust inspired by interpersonal trust research encompassing trust, its antecedents (trustworthiness and trust propensity), and its consequences (intentions to use the AI and willingness to disclose personal information). Second, we investigate the role of AI personalization on trust and trustworthiness, considering both their mean levels and their dynamic relationships. In two pilot studies (N = 313) and one main study (N = 1,001) focusing on AI chatbots, we find that the integrative model of trust is suitable for the study of trust in virtual AI. Perceived trustworthiness of the AI, and more specifically its ability and integrity dimensions, is a significant antecedent of trust and so are anthropomorphism and propensity to trust smart technology. Trust, in turn, leads to greater intentions to use and willingness to disclose information to the AI. The personalized AI chatbot was perceived as more able and benevolent than the impersonal chatbot. It was also more anthropomorphized and led to greater usage intentions, but not to greater trust. Anthropomorphism, not trust, explained the greater intentions to use personalized AI. We discuss implications for research on trust in humans and in automation.
... Behaviour of trust. Users trust and depend upon a resource when they delegate to and rely on it [Qian and Wexler, 2024] [Wickens et al., 2015] and distrust when they reject it [Qian and Wexler, 2024] [Parasuraman and Riley, 1997]. In our experiments, we interpreted trust as demonstrated when participants, after being required to use a particular language model in the first part of the task (standard ChatGPT for the control group and a custom GPT model for the treatment group), chose to continue using it in the subsequent sections and followed its recommendations, even when its assistance became optional. ...
Preprint
Full-text available
Sycophancy refers to the tendency of a large language model to align its outputs with the user's perceived preferences, beliefs, or opinions, in order to look favorable, regardless of whether those statements are factually correct. This behavior can lead to undesirable consequences, such as reinforcing discriminatory biases or amplifying misinformation. Given that sycophancy is often linked to human feedback training mechanisms, this study explores whether sycophantic tendencies negatively impact user trust in large language models or, conversely, whether users consider such behavior as favorable. To investigate this, we instructed one group of participants to answer ground-truth questions with the assistance of a GPT specifically designed to provide sycophantic responses, while another group used the standard version of ChatGPT. Initially, participants were required to use the language model, after which they were given the option to continue using it if they found it trustworthy and useful. Trust was measured through both demonstrated actions and self-reported perceptions. The findings consistently show that participants exposed to sycophantic behavior reported and exhibited lower levels of trust compared to those who interacted with the standard version of the model, despite the opportunity to verify the accuracy of the model's output.
... Potential ironies of such a setup have long been discussed (Bainbridge, 1983; Baxter, Rooksby, Wang, & Khajeh-Hosseini, 2012; N. Leveson, 2020; Parasuraman & Riley, 1997). And the other way around: automation safeguards the system in case ROs cannot take over, for instance when commlink is not operational. ...
Chapter
Contrary to early beliefs, maritime autonomous surface ships (MASS) will not displace humans from their multifaceted involvement in maritime transportation. Rather, human roles will change with the progressing implementation of MASS but will remain crucial for ensuring the safety and operability of shipping. These potential changes, along with their impact and far-reaching implications on seafarers’ training and the job market, are herein discussed. With MASS being, to date, far from their wide-scale implementation, there are still more open questions than definite and verifiable answers. The discussion raised in this chapter may be found interesting by all parties engaged in introducing MASS to the marine industry, the Maritime Education and Training (MET) representatives, and, finally, the most interested actors of the maritime market—current seafarers.
... Autonomous driving has the potential to revolutionize transportation and urban mobility and has been a primary focus of research and development for the past two decades (Cui et al., 2024b). Traditional development of autonomous driving has primarily focused on achieving perfect black-box safe navigation, and the role of humans in the system is often minimized, due to doubts about the feasibility and benefits of integrating human and autonomous driving systems collaboratively (Parasuraman and Riley, 1997; Andrew, 2003). However, human-like decision-making is still an important factor in designing autonomous driving systems. ...
... Studying trust within the context of Human-Robot Interaction is crucial for achieving optimal outcomes in collaborative environments. Properly calibrating trust is essential to prevent phenomena such as disuse, where a human avoids using a system even when it would be beneficial, and overreliance, where too much authority is given to a system, leading to the acceptance of inappropriate recommendations or decisions [23]. Given the complexity of trust, many works in the literature choose to study it within specific application domains, decomposing it into identifiable factors rather than treating it as a single, monolithic concept. ...
... Human users must also not become over-reliant on automation such that if the system makes a mistake, the human can still override the AI using situational knowledge and value judgments. Furthermore, automated systems may prompt over-reliance in emergency scenarios even after the systems have demonstrated a lack of reliability [41,48,57], or an under-reliance on a typically robust system after a single failure point [35]. Both can be configured in challenging but non-disorienting modes, with standard sensory information; or difficult and disorienting modes, with degraded information. ...
Article
Full-text available
Task allocation aims to assign manufacturing tasks to human operators and automation agents to optimize human-automation collaboration. Unlike conventional approaches that focus solely on production performance, this study identifies cognitive intelligence as a crucial aspect of human-automation task allocation. This involves modeling human cognition, which is characterized by cognitive states and human trust, to evaluate team cognitive performance. Cognitive states represent the operator’s cognitive load and attention, while human trust indicates the operator’s confidence in different levels of automation. To balance the optimization of production performance and team cognitive performance, the task allocation problem is formulated as a non-cooperative game—the Stackelberg game, where the leader optimizes production performance, and the follower optimizes team cognitive performance. The non-cooperative game implies how symbiosis is achieved in a human-automation team through optimizing different team performance measures. This formulation is essentially a bi-level optimization problem. To solve it, a Multi-Environment Genetic Algorithm is proposed. Finally, a case study of medical tube assembly task allocation is presented to validate the feasibility and effectiveness of the algorithm.
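The bi-level Stackelberg formulation lends itself to a compact illustration. The following toy sketch shows the leader-follower structure only (the task count, objective functions, and all numbers are our own placeholders, not the paper's models or its Multi-Environment Genetic Algorithm): the leader enumerates allocations, anticipates the follower's best-response attention strategy, and keeps the allocation with the highest production score.

```python
# Toy bi-level (Stackelberg) task allocation by exhaustive enumeration; all values assumed.
from itertools import product

TASKS = 3                      # three tasks to allocate
ATTENTION = ["high", "low"]    # the follower's (cognitive model's) decision variable

def production(alloc, attention):
    # Leader objective: automated tasks yield 0.8 each; human tasks yield 1.0 under
    # high attention and 0.6 under low attention (placeholder numbers).
    human_quality = 1.0 if attention == "high" else 0.6
    return sum(0.8 if a == 1 else human_quality for a in alloc)

def cognition(alloc, attention):
    # Follower objective: sustained high attention only pays off at low human task load.
    human_load = TASKS - sum(alloc)
    return 0.4 - 0.35 * human_load if attention == "high" else -0.1 * human_load

best = None
for alloc in product([0, 1], repeat=TASKS):                       # 1 = automate the task
    follower = max(ATTENTION, key=lambda a: cognition(alloc, a))  # follower's best response
    value = production(alloc, follower)                           # leader anticipates it
    if best is None or value > best[0]:
        best = (value, alloc, follower)

print(best)  # with these made-up numbers: (2.6, (0, 1, 1), 'high') -- a mixed allocation wins
```

Even in this toy version, the leader prefers to keep one task with an attentive human rather than automating everything, which is the kind of symbiosis the abstract argues for.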
Article
The U.S. Army Aeromedical Research Laboratory (USAARL) Multi-Attribute Task Battery (MATB) represents a significant advancement in research platforms for human performance assessment and automation studies. The USAARL MATB builds upon the legacy of the traditional MATB, which has been refined over 30 years of use to include four primary aviation-like tasks. However, the USAARL MATB takes this foundation and enhances it to meet the demands of contemporary research, particularly in the areas of performance modeling, cognitive workload assessment, adaptive automation, and trust in automation. The USAARL MATB retains the four classic subtask types from its predecessors while introducing innovations such as subtask variations, dynamic demand transitions, and performance-driven adaptive automation handoffs. This paper introduces the USAARL MATB to the research community, highlighting its development history, key features, and potential applications.
Chapter
Following recent technological developments, organizations and businesses seek to improve their effectiveness by increasing the use of artificial agents in the workplace. Previous research suggests that humans react to the adoption of artificial agents in three ways: 1) some humans appreciate algorithmic advice (algorithm appreciation); 2) some humans oppose algorithmic advice (algorithm aversion); and 3) some humans fully relinquish control to artificial agents (automation bias). Using tools and methods from the field of systems thinking, we analyze the existing literature on human-machine interactions in organizational settings and develop a conceptual model that provides an underlying structural explanation for the emergence of algorithm appreciation, algorithm aversion, and automation bias in various contexts. In doing so, we create a powerful visual tool that can be used to ground discussions about the responsible adoption of artificial agents in the workplace and the long-term impact they cause for organizations and humans within them. We use the model to hypothesize possible behavioral outcomes produced by the proposed structure.
Chapter
This chapter focuses on a transfer from research on trust processes in the interaction with automated vehicles to the interaction with current AI systems like Large Language Models (LLMs). After discussing parallels between the domains and transferable research insights, the psychological processes in trust calibration are discussed along the central propositions of the Three Stages of Trust framework (Kraus in Psychological processes in the formation and calibration of trust in automation, 2020). The framework emphasises the dynamic role of trust and the iterative learning about a system’s trustworthiness with presented information prior and during system interaction. Furthermore, it underlines the role of situational context and individual user differences. Designing AI systems that support calibrated trust involves transparent communication of capabilities and limitations, integrating human-centered design principles, and offering adaptive information to aid decision-making. These strategies aim to foster a balanced and informed use of AI, eventually fostering an efficient and safe interaction.
Article
Prior research has shown that automation errors—false alarms and misses—differentially impact operator behavior, yet it is not clear why this difference exists. This study examined how the type of automation error, the automation’s reliability, and the number of experiences a person has with an automated system impacts their perception of the system. Participants responded to the correctness of an automation recommendation about the match between pairs of Mega Block shapes while automation reliability, number of trials, and error type varied. At the end of each block, participants provided their estimates of the automation’s reliability, ratings of their confidence in their reliability estimate, and their trust in the system. Our findings indicated that the type of automation error does not impact operator perceptions of the system. Instead, factors such as working memory limitations and pre-existing biases impact these perceptions.
Chapter
A servicer spacecraft equipped with robotic manipulators is a key technology for space debris removal and on-orbit servicing missions. The servicer is a composite system consisting of robotic manipulators, on-board cameras, proprioceptive and exteroceptive sensors, and the spacecraft actuation system. This hybrid nature of sensing and actuation on the spacecraft and manipulator subsystems poses several challenges to achieving the robotic mission's task. This chapter outlines the challenges in such robotic missions and provides hardware design and control strategies, with a key focus on on-ground validation. Firstly, the mission aspects of the robotic capture of a satellite are introduced while considering the e.Deorbit mission study. The on-ground verification and validation of the control algorithms under micro-gravity conditions is a necessity before space mission commissioning. To this end, a state-of-the-art on-ground robotic facility, namely OOS-SIM, is described and used later for validation. From a control perspective, strategies for the orbital robot including vision-based semi-autonomy, shared control and teleoperation are reviewed. In particular, these controllers are designed to accomplish the mission task while considering the dynamic interaction between the spacecraft and the manipulator, the heterogeneity of the actuators available for control, and the lack of direct measurements of the complete state. In this context, the combined impedance control of the robotic manipulator and the satellite is specifically designed to be compliant against interaction with space targets. Finally, an end-to-end validation of a space mission for on-orbit servicing is presented involving both on-board and on-ground segments, with realistic space communication characteristics.
Article
Complex Cyber-Physical-Human Systems (CPHS) integrate the human operator as an essential element to assist with different aspects of information monitoring and decision making across the system to achieve the desired goal. A crucial aspect of enhancing CPHS efficiency lies in understanding the interaction dynamics of human cognitive factors and Cyber-Physical Systems (CPS). This entails designing feedback mechanisms, reasoning processes, and compliance protocols with consideration of their psychological impacts on human operators, fostering shared awareness between humans and AI agents and calibrating feedback levels to ensure operators are informed without being overwhelmed. This study focuses on a specific CPHS scenario involving a human operator interacting with a swarm of robots for search, rescue, and monitoring tasks. It explores the impact of the swarm non-compliance rate and feedback levels from the robotic swarm on the human operators' cognitive processes, and the extent to which individual differences in information processing influence these interaction dynamics. A human subject study was conducted with 20 participants experienced in strategic gaming; it involved nine scenarios with randomized robot compliance and AI feedback levels. Cognitive factors, categorized into brain-based features (mental engagement, workload, distraction) and eye-based features (pupil size, fixation, saccade, blink rate), were analyzed. Statistical analyses revealed that brain-based features, particularly mental workload, were predictive of the effect of compliance level, while feedback level and expertise affected both eye features and brain cognitive factors.
Article
Social identity theory is widely accepted to explain intergroup relations for any group. Decisions are influenced by people's social identity, which moderates the agent's sense of agency (one's feeling of controlling one's own actions); therefore, both should be considered while investigating human-generative AI interactions and possible challenges that arise from them. This review starts by discussing human-AI interactions in terms of Social Identity Theory; then focuses on the sense of agency that plays out in human-AI interactions moderated by social identity; and finally discusses consequences that arise from these relationships. Accountability is one of the concerns related to human-AI interaction. The diversity of the users and the data is another concern. We conclude the review by suggesting a future direction for empirical research on social aspects of the sense of agency in human-AI interactions and provide possible solutions to ethical and social concerns regarding the use of generative AI systems.
Article
With the increasing performance of text-to-speech systems and their generated voices indistinguishable from natural human speech, the use of these systems for robots raises ethical and safety concerns. A robot with a natural voice could increase trust, which might result in over-reliance despite evidence for robot unreliability. To estimate the influence of a robot's voice on trust and compliance, we design a study that consists of two experiments. In a pre-study (N1 = 60), the most suitable natural and mechanical voices are estimated and selected for the main study. Afterward, in the main study (N2 = 68), the influence of a robot's voice on trust and compliance is evaluated in a cooperative game of Battleship with a robot as an assistant. During the experiment, the acceptance of the robot's advice and response time are measured, which indicate trust and compliance respectively. The results show that participants expect robots to sound human-like and that a robot with a natural voice is perceived as safer. Additionally, a natural voice can affect compliance. Despite repeated incorrect advice, the participants are more likely to rely on the robot with the natural voice. The results do not show a direct effect on trust. Natural voices provide increased intelligibility, and while they can increase compliance with the robot, the results indicate that natural voices might not lead to over-reliance. The results highlight the importance of incorporating voices into the design of social robots to improve communication, avoid adverse effects, and increase acceptance and adoption in society.
Article
Objective: The study investigates users' tendency to access decision support (DS) systems as a function of the correlation between the DS information and the information users already have, the ongoing interaction with such systems, and the effect of correlated information on subjective trust. Background: Previous research has shown inconclusive findings regarding whether people prefer information that correlates with information they already have. Some studies conclude that individuals recognize the value of noncorrelated information, given its unique content, while others suggest that users favor correlated information as it aligns with existing evidence. The impact of the level of correlation on performance, subjective trust, and the decision to use DS remains unclear. Method: In an experiment (N = 481), participants made classification decisions based on available information. They could also purchase additional DS with different degrees of correlation with the available information. Results: Participants tended to purchase information more often when the DS was not correlated with the available information. Correlated information reduced performance, and the effect of correlation on subjective trust and performance depended on DS sensitivity. Conclusion: Additional information may not improve performance when it is correlated with available information (i.e., it is redundant). Hence, the benefits of additional information and DS depend on the information the system and the operator use. Application: It is essential to analyze the correlations between information sources and design the available information to allow optimal task performance and possibly minimize redundancy (e.g., by locating sensors in different positions to capture independent data).
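The conclusion that correlated decision-support information is largely redundant follows from a standard statistical identity: averaging two equally noisy, unbiased cues with correlation rho yields variance sigma^2 (1 + rho) / 2, so the benefit of the second source vanishes as rho approaches 1. A minimal numeric illustration (not the study's model):

```python
# Illustration (not the study's model): why a correlated second information
# source adds little. For two unbiased cues with equal noise sd sigma and
# correlation rho, their average has variance sigma^2 * (1 + rho) / 2.
sigma = 1.0
for rho in [0.0, 0.5, 0.9, 1.0]:
    combined_sd = (sigma**2 * (1 + rho) / 2) ** 0.5
    print(f"rho={rho:.1f}: single-cue sd={sigma:.2f}, two-cue average sd={combined_sd:.2f}")
```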
Article
Algorithms are capable of advising human decision-makers in an increasing number of management accounting tasks such as business forecasts. Due to the expected potential of these (intelligent) algorithms, there are growing research efforts to explore ways to boost algorithmic advice usage in forecasting tasks. However, algorithmic advice can also be erroneous. Yet, the risk of using relatively bad advice is largely ignored in this research stream. Therefore, we conduct two online experiments to examine this risk of using relatively bad advice in a forecasting task. In Experiment 1, we examine the influence of performance feedback (revealing previous relative advice quality) and source of advice on advice usage in business forecasts. The results indicate that the provision of performance feedback increases subsequent advice usage but also the usage of subsequent relatively bad advice. In Experiment 2, we investigate whether advice representation, that is, displaying forecast intervals instead of a point estimate, helps to calibrate advice usage towards relative advice quality. The results suggest that advice representation might be a potential countermeasure to the usage of relatively bad advice. However, the effect of this antidote weakens when forecast intervals become less informative.
Conference Paper
Trust plays a major role when introducing interactive robots into people's personal spaces and depends, in large part, on how people perceive the robot. This paper presents the initial results of an investigation into the perception of robot agency as a potential factor influencing trust. We manipulated a robot's agency to see how trust would change as a result. Our preliminary results indicate that age is a confounding factor, while we did not find differences when priming robot autonomy.
Article
Flood alerts are a means of risk communication that alerts the public to potential floods. The purpose of this research was to investigate factors that affected drivers' understanding and actions given flood information presented through a mobile navigation application. Two experiments were conducted to examine the effects of time pressure and type of flood information on drivers' planned actions when faced with potential flooding. Participants were asked about their planned actions given one type of flood information in a driving scenario either with or without time pressure. Our results indicated significant differences in participants' behaviors across the different flood information types, but not between the two time-pressure conditions. These results suggest that displaying the flood information is helpful in promoting drivers' safe decisions to avoid the potentially flooded roadway, and that detailed information may help users better perceive the depth and risk of the potential flood.
Article
Full-text available
Many diagnostic tasks require that a threshold be set to convert evidence that is a matter of degree into a positive or negative decision. Although techniques of decision analysis used in psychology help one select the particular threshold that is appropriate to a given situation and purpose, just the concept of adjusting the threshold to the situation is not appreciated in many important practical arenas. Testing for the human immunodeficiency virus (HIV) and for dangerous flaws in aircraft structures are used here as illustrations. This article briefly reviews the relevant techniques and develops those two examples with data. It suggests that use of the decision techniques could substantially benefit individuals and society and asks how that use might be facilitated.
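The article's central point, that the decision threshold should be adjusted to the situation, can be made concrete with the standard signal detection theory result for the expected-value-optimal likelihood-ratio criterion, beta* = [P(noise)/P(signal)] × [(V_correct-rejection + C_false-alarm)/(V_hit + C_miss)]. The sketch below uses hypothetical payoffs and simply shows how sharply beta* rises as the condition becomes rare:

```python
# Minimal signal-detection sketch (illustrative numbers, not the article's data):
# the optimal likelihood-ratio criterion beta grows as the condition gets rarer,
# so a threshold tuned for a 50/50 lab task is far too lenient for a rare disease or flaw.
def optimal_beta(p_signal, value_hit=1.0, cost_miss=1.0, value_cr=1.0, cost_fa=1.0):
    """Standard SDT result: beta* = (P(noise)/P(signal)) * ((value_cr + cost_fa) / (value_hit + cost_miss))."""
    return ((1 - p_signal) / p_signal) * ((value_cr + cost_fa) / (value_hit + cost_miss))

for p in [0.5, 0.1, 0.01, 0.001]:   # prevalence of the condition to be detected
    print(f"P(signal)={p:<6} -> optimal beta = {optimal_beta(p):,.1f}")
```

With equal payoffs, a 50/50 situation warrants beta* = 1, while a one-in-a-thousand condition warrants beta* near 1,000; unequal costs and benefits shift the criterion further in either direction.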
Article
Full-text available
As automated controllers supplant human intervention in controlling complex systems, the operators' role often changes from that of an active controller to that of a supervisory controller. Acting as supervisors, operators can choose between automatic and manual control. Improperly allocating function between automatic and manual control can have negative consequences for the performance of a system. Previous research suggests that the decision to perform the job manually or automatically depends, in part, upon the trust the operators invest in the automatic controllers. This paper reports an experiment to characterize the changes in operators' trust during an interaction with a semi-automatic pasteurization plant, and investigates the relationship between changes in operators' control strategies and trust. A regression model identifies the causes of changes in trust, and a 'trust transfer function' is developed using time series analysis to describe the dynamics of trust. Based on a detailed analysis of operators' strategies in response to system faults we suggest a model for the choice between manual and automatic control, based on trust in automatic controllers and self-confidence in the ability to control the system manually.
Article
Full-text available
Recent technological advances have made viable the implementation of intelligent automation in advanced tactical aircraft. The use of this technology has given rise to new human factors issues and concerns. Errors in highly automated aircraft have been linked to the adverse effects of automation on the pilot's system awareness, monitoring workload, and ability to revert to manual control. However, adaptive automation, or automation that is implemented dynamically in response to changing task demands on the pilot, has been proposed to be superior to systems with fixed, or static, automation. This report examines several issues concerning the theory and design of adaptive automation in aviation systems, particularly as applied to advanced tactical aircraft. An analysis of the relative costs and benefits of conventional (static) aviation automation provides the starting point for the development of a theory of adaptive automation. This analysis includes a review of the empirical studies investigating effects of automation on pilot performance. The main concepts of adaptive automation are then introduced, and four major methods for implementing adaptive automation in the advanced cockpit are described and discussed. Keywords: aircraft automation, pilot situational awareness, aviation human factors, pilot workload.
Article
Full-text available
An automated detector designed to warn a system operator of a dangerous condition often has a low positive predictive value (PPV); that is, a small proportion of its warnings truly indicate the condition to be avoided. This is the case even for very sensitive detectors operating at very strict thresholds for issuing a warning because the prior probability of a dangerous condition is usually very low. As a consequence, operators often respond to a warning slowly or not at all. Reported here is a preliminary laboratory experiment designed in the context of signal detection theory that was conducted to examine the effects of variation in PPV on the latency of participants' response to a warning. Bonuses and penalties placed premiums on accurate performance in a background tracking task and on rapid response to the warnings. Observed latencies were short for high values of PPV, bimodal for middle-to-low values, and predominantly long for low values. The participants' response strategies for different PPVs were essentially optimal for the cost-benefit structure of the experiment. Some implications for system design are discussed.
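The low-PPV phenomenon follows directly from Bayes' rule: even a highly sensitive detector with a modest false-alarm rate yields mostly false warnings when the prior probability of danger is low. A minimal sketch with hypothetical detector parameters:

```python
# Bayes'-rule sketch of why a sensitive warning system can still have a low
# positive predictive value when the dangerous condition is rare.
# Sensitivity, false-alarm rate, and priors are hypothetical illustrations.
def ppv(prior, sensitivity, false_alarm_rate):
    hits = prior * sensitivity                      # P(danger and warning)
    false_alarms = (1 - prior) * false_alarm_rate   # P(no danger and warning)
    return hits / (hits + false_alarms)

for prior in [0.1, 0.01, 0.001]:
    print(f"P(danger)={prior:<6} -> PPV = {ppv(prior, sensitivity=0.99, false_alarm_rate=0.05):.3f}")
```

At a prior of 0.001 the illustrative detector's PPV falls below 0.02, i.e., fewer than one warning in fifty is genuine, which is the condition under which operators learn to respond slowly or not at all.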
Article
Full-text available
It is often argued that problem-solving behavior in a complex environment is determined as much by the features of the environment as by the goals of the problem solver. This article explores a technique to determine the extent to which measured features of a complex environment influence problem-solving behavior observed within that environment. In this study, the technique is used to determine how the complex flight deck and air traffic control environment influences the strategies used by airline pilots when controlling the flight path of a modern jetliner. Data collected aboard 16 commercial flights are used to measure selected features of the task environment. A record of the pilots' problem-solving behavior is analyzed to determine to what extent behavior is adapted to the environmental features that were measured. The results suggest that the measured features of the environment account for as much as half of the variability in the pilots' problem-solving behavior and provide estimates on the probable effects of each environmental feature.
Article
In this paper, we synthesize data from the laboratory, from pilot surveys and from accident and incident analysis, to identify five problem areas in flight deck automation, related to overtrust (complacency), mistrust, workload, situation awareness and perceived control. Causes of these problems are identified, and four categories of solutions are proposed in seeking the goal of pilot-centered automation: these solutions relate to simplification, display design, training, and corporate policy.
Article
A probabilistic methodology for evaluating hazard alerting systems is described that can be used in vehicle, transportation system, and process control applications. A means of showing the tradeoff between false alarms and missed detections is presented using signal detection theory concepts. The methodology accounts for uncertainties in measurement sources, alerting thresholds and displays, the human operator, and the situation dynamics. An example demonstration of the methodology is provided using the Traffic Alert and Collision Avoidance System (TCAS), an alerting system designed to prevent mid-air collisions between aircraft.
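The false-alarm versus missed-detection tradeoff at the heart of such an evaluation can be sketched by sweeping an alerting threshold across assumed Gaussian distributions of the alert metric under hazard and no-hazard conditions; the parameters below are hypothetical illustrations, not TCAS values:

```python
# Sketch of the false-alarm vs. missed-detection tradeoff obtained by sweeping
# an alerting threshold over assumed Gaussian hazard/no-hazard distributions.
# Means, sigmas, and thresholds are hypothetical, not TCAS parameters.
from statistics import NormalDist

no_hazard = NormalDist(mu=0.0, sigma=1.0)   # alert metric when no conflict exists
hazard    = NormalDist(mu=2.5, sigma=1.0)   # alert metric when a conflict exists

for threshold in [0.5, 1.0, 1.5, 2.0, 2.5]:
    p_false_alarm = 1 - no_hazard.cdf(threshold)   # alert issued, but no hazard
    p_missed      = hazard.cdf(threshold)          # no alert, but hazard present
    print(f"threshold={threshold:.1f}: P(FA)={p_false_alarm:.3f}, P(MD)={p_missed:.3f}")
```

Raising the threshold trades false alarms for missed detections; a full evaluation would weight each point on this curve by the hazard's prior probability and the costs of each outcome.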
Article
The evolution of information technology is characterized by three trends causing rapid change in the interaction of devices and systems with people. Devices can now perform tasks originally done by human beings. Moreover, the increasing versatility of machines is expanding the ways in which people perform their jobs. Both human factors researchers and engineers are concerned with the question of how to design tools, appliances, vehicles, communication devices and other artifacts so they are well matched to the capabilities and limitations of human beings and contribute positively to the safety, comfort and productivity of the user. Finally, reflection on how information technology may continue to evolve and be used raises many human factors issues and questions that need to be addressed.
Article
Two proposed safety parameter display systems, of the type to be required in nuclear power plant control rooms, were evaluated using a training simulator and experienced crews undergoing refresher training. A decision analysis approach was used. The discussion addresses the effectiveness of the training situation as an evaluation tool and methodological issues.
Article
Automated procedural and decision aids may in some cases have the paradoxical effect of increasing errors rather than eliminating them. Results of recent research investigating the use of automated systems have indicated the presence of automation bias, a term describing errors made when decision makers rely on automated cues as a heuristic replacement for vigilant information seeking and processing (Mosier & Skitka, in press). Automation commission errors, i.e., errors made when decision makers take inappropriate action because they over-attend to automated information or directives, and automation omission errors, i.e., errors made when decision makers do not take appropriate action because they are not informed of an imminent problem or situation by automated aids, can result from this tendency. A wide body of social psychological research has found that many cognitive biases and resultant errors can be ameliorated by imposing pre-decisional accountability, which sensitizes decision makers to the need to construct compelling justifications for their choices and how they make them. To what extent these effects generalize to performance situations has yet to be empirically established. The two studies presented represent concurrent efforts, with student and "glass cockpit" pilot samples, to determine the effects of accountability pressures on automation bias and on verification of the accurate functioning of automated aids. Students (Experiment 1) and commercial pilots (Experiment 2) performed simulated flight tasks using automated aids. In both studies, participants who perceived themselves "accountable" for their strategies of interaction with the automation were significantly more likely to verify its correct functioning, and committed significantly fewer automation-related errors than those who did not report this perception.
Article
This paper discusses the ways in which automation of industrial processes may expand rather than eliminate problems with the human operator. Some comments will be made on methods of alleviating these problems within the 'classic' approach of leaving the operator with responsibility for abnormal conditions, and on the potential for continued use of the human operator for on-line decision-making within human-computer collaboration.
Article
Subjects included 24 non-pilots who performed simulated flight-related tasks of tracking, fuel-management, and system monitoring. Tracking and fuel management were performed manually, whereas system monitoring was automated. Subjects were required to detect system malfunctions not detected by the automation (automation failures). The reliability of the automation remained constant or varied over time. Subjects detected significantly fewer automation failures in the constant-reliability automation condition than in the variable-reliability condition. Inefficiency in monitoring for automation failure was examined in relation to three individual-difference measures: the Complacency Potential Rating Scale, the Eysenck Personality Inventory (introversion-extraversion), and a modified version of Thayer's Activation-Deactivation Adjective Check List (energetic arousal). These measures were not significantly intercorrelated, suggesting their relative independence. For subjects with high-complacency-potential scores, there was a correlation of -.42 between complacency potential and detection rate of automation failures. Introversion-extraversion was unrelated to monitoring performance. Finally, high energetic-arousal subjects had initially higher detection rates in the constant-reliability condition than did low-arousal subjects. The results suggest a modest relationship between individual differences in complacency potential and energetic-arousal and automation-related monitoring inefficiency.
Article
This paper sets out the basic philosophy of a developing program of computer-aiding concepts for the controller's decision making. A brief review is given of early work on the computer-assisted approach sequencing (CAAS) concept for a major airport, and the main topic is the interactive conflict resolution (ICR) concept for assisting the en route controller in conflict detection and resolution. ICR is a predictive aid used interactively by the controller; the concept is described in detail. A real-time simulation experiment is reported, in which each of three pairs of controllers acted as an executive/support team in handling traffic samples in a busy sector. Objective records and subjective data suggest the effectiveness and acceptability of ICR. Further research on the controller's activities within, and attitudes toward, computer-based tasks is outlined.
Article
Automation-induced "complacency" has been implicated in accidents involving commercial aircraft and other high-technology systems. A 20-item scale was developed for measuring attitudes toward commonly encountered automated devices that reflect a potential for complacency. Factor analysis of responses (N = 139) to scale items revealed four factors: Confidence-Related, Reliance-Related, Trust-Related, and Safety-Related Complacency. It is proposed that these form components of automation-induced complacency rather than general attitudes toward automation. The internal consistency (r = 37) and test reliability (r = .90) of the factors and the scale as a whole were high. Complacency potential is discussed with respect to interrelations between automation and operator trust in and reliance on automation.
Article
The increasing role of automation in human-machine systems requires modelling approaches which are flexible enough to systematically express a large range of automation levels and assist the exploration of a large range of automation issues. A General Model of Mixed-Initiative Human-Machine Systems is described, along with a corresponding automation taxonomy, which: provides a framework for representing human-machine systems over a wide range of complexity; forms the basis of a dynamic, pseudo-mathematical simulation of complex interrelationships between situational and cognitive factors operating in dynamic function allocation decisions; and can guide methodical investigations into the implications of decisions regarding system automation levels.
Article
Recently, a new class of artifacts has appeared in our environment: complex, high-technology work domains. An important characteristic of such systems is that their goal-relevant properties cannot be directly observed by the unaided eye. As a result, interface design is a ubiquitous problem in the design of these work environments. Nevertheless, the problem is one that has yet to be addressed in an adequate manner. An analogy to human perceptual mechanisms suggests that a smart instrument approach to interface design is needed to supplant the rote instrument (single-sensor-single-indicator) approach that has dominated to this point. Ecological interface design (EID) is a theoretical framework in the smart instrument vein that postulates a set of general, prescriptive principles for design. The goal of EID is twofold: first, to reveal the affordances of the work domain through the interface in such a way as to take advantage of the powerful capabilities of perception and action; and second, to provide the appropriate computer support for the comparatively more laborious process of problem solving. An example of the application of the EID framework is presented in the context of a thermal-hydraulic system. The various steps in the design process are illustrated, showing how the abstract principles of EID can be applied in a prescriptive manner to develop a concrete design product. An important outcome of this discussion is the novel application of Rasmussen's (1985b) means-end hierarchy to structure the affordances of an ecosystem.
Article
Adaptive aiding is a human-machine system design concept that involves using aiding/automation only at those points in time when human performance in a system needs support to meet operational requirements; in the absence of such needs, human performance remains unaided/manual, and thereby humans remain very much "in the loop." This paper describes the evolution and results of an ongoing program of experimental and theoretical research in adaptive aiding. The development and proof of concept are first discussed, followed by consideration of human performance models, on-line assessment methods, and the psychology of human-aid interaction. The implications of these ideas and results are discussed relative to design of intelligent support systems in general and expert systems in particular. A framework for design is presented that includes a structured set of design questions that may be addressed in terms of principles of adaptation and principles of interaction.
Article
This book unifies relevant aspects of engineering, operations analysis, human factors, and psychology and discusses the basis of integrated systems design.
Book
This book proposes a theory of human cognitive evolution, drawing from paleontology, linguistics, anthropology, cognitive science, and especially neuropsychology. The properties of humankind's brain, culture, and cognition have coevolved in a tight iterative loop; the main event in human evolution has occurred at the cognitive level, however, mediating change at the anatomical and cultural levels. During the past two million years humans have passed through three major cognitive transitions, each of which has left the human mind with a new way of representing reality and a new form of culture. Modern humans consequently have three systems of memory representation that were not available to our closest primate relatives: mimetic skill, language, and external symbols. These three systems are supported by new types of “hard” storage devices, two of which (mimetic and linguistic) are biological, one technological. Full symbolic literacy consists of a complex of skills for interacting with the external memory system. The independence of these three uniquely human ways of representing knowledge is suggested in the way the mind breaks down after brain injury and confirmed by various other lines of evidence. Each of the three systems is based on an inventive capacity, and the products of those capacities – such as languages, symbols, gestures, social rituals, and images – continue to be invented and vetted in the social arena. Cognitive evolution is not yet complete: the externalization of memory has altered the actual memory architecture within which humans think. This is changing the role of biological memory and the way in which the human brain deploys its resources; it is also changing the form of modern culture.
Article
The evolution of information technology has been characterized by three trends: increasing ethereality, increasing connectivity, and increasing versatility. As a consequence of these trends, the nature of many of the devices and systems with which people interact, in the workplace and elsewhere, has been changing rapidly and is likely to continue to do so in the future. In particular, devices are acquiring more and more cognitive-type capabilities and are being used to perform an increasingly large fraction of the tasks once done by human beings. On the other hand, the increasing versatility of machines can expand greatly the universe of things that people can do with their help. Human factors researchers and engineers are concerned with the question of how to design tools, appliances, vehicles, communication devices and other artifacts so they are well matched to the capabilities and limitations of human beings and contribute positively to the safety, comfort, and productivity of their users. This concern will be at least as important in the future as it has been in the past, but the context in which it is exercised will be different in many respects. Reflection on how information technology may continue to evolve and be used raises many human factors issues and questions that need to be addressed.
Article
It has been widely argued that the organizational and strategic issues have to be considered for the successful implementation of Advanced Manufacturing Technology (AMT). However, there are still some questions left unanswered. What specific dimensions of the organization and strategy should be considered? When should they be considered? How could they be interrelated during the process of implementation? This article tries to look at these questions through (1) the development of a conceptual model linking the organization, technology, and strategy through management, (2) the division of the implementation process into four stages, and (3) the proposal of a general framework of the implementation guide.
Article
Evidence suggests that the effective implementation of advanced computer-integrated technologies in manufacturing depends upon a considerable degree of organizational adaptation. One interpretation of such radical technological change suggests that it is part of a pervasive paradigm shift which is reframing the rules governing best practice in manufacturing and that organization design needs to involve a complete reappraisal rather than a minor adjustment. This article reviews evidence for the emergence of such a paradigm shift and presents some case study data on the nature of organizational changes which characterize it. It concludes with some suggested guidelines for organization designers working toward suitable models for factory organization and management into the next century.
Article
Sensemaking in crisis conditions is made more difficult because action that is instrumental to understanding the crisis often intensifies the crisis. This dilemma is interpreted from the perspective that people enact the environments which constrain them. It is argued that commitment, capacity, and expectations affect sensemaking during crisis and the severity of the crisis itself. It is proposed that the core concepts of enactment may comprise an ideology that reduces the likelihood of crisis.
Article
Increased maritime traffic, new types of vessels, and construction of oil and gas producing structures have made navigating in close waters more hazardous. In addition, attempts to increase shipboard productivity have resulted in fewer personnel on board the vessel. This paper reports on the development and evaluation of a prototype expert system to support the cognitive processes involved in piloting: maneuvering and collision avoidance, and the practice of good seamanship. A model was constructed and implemented in a frame- and rule-based representation. The system was assessed using gaming with novice pilots in a merchant marine training facility. The results showed significant improvement in the bridge watch team performance, but no significant improvement in vessel performance in terms of trackkeeping. The paper concludes with a discussion of the motor, perceptual, and cognitive skills needed for piloting and how they could be supported by expert system technology as part of an integrated bridge system, an operational center for navigational and supervisory tasks aboard a ship.
Article
There is a danger inherent in labeling systems “expert.” Such identification implies some levels of “intelligence” or “understanding” within the confines of the system. It is important to know the limitations of any system, including realistic expectations of the real or implied power of an expert system. The “blindness” or boundaries inherent in expert system development extends to users who may misplace trust in false technology. This study investigates the use of an incorrect advice-giving expert system. Expert and novice engineers used the faulty system to solve a well test interpretation task. Measures of decision confidence, system success, state-anxiety and task difficulty were taken. Subjects expressed confidence in their “wrong” answer to the problem, displaying true dependence on a false technology. These results suggest implications for developers and/or users in areas of certification, evaluation, risk assessment, validation, and verification of systems conveying a level of “expertise.”
Article
Intelligent Vehicle-Highway Systems (IVHS) have been proposed in the wake of rapid worldwide growth in traffic volume and density. These systems involve the application of advanced sensor, communications, computational, and control technologies to the design of highways and vehicles to improve traffic flow and safety. Similar technologies have been applied in other transportation systems such as aviation and air-traffic control, and it is suggested that the human factors insights derived from these systems can be usefully applied, proactively rather than retroactively, in IVHS design. Several safety and human factors issues relevant to the design of IVHS technologies, both near-term and long-term, are discussed, including: (a) the optimization of driver mental workload in highly-automated “hybrid” systems; (b) the design of in-vehicle navigation aids and the resolution of display conflicts; (c) individual and group differences in driver behavior and their implications for training and licensure; (d) the evolution and integration of IVHS technologies; and (e) traffic management and the regulation of driver trust in IVHS. Successful resolution of these issues and their incorporation in IVHS design will provide for fully functional systems that will serve the twin needs of reducing traffic congestion and improving highway safety.
Article
The present study examined the effects of task complexity and time on task on the monitoring of a single automation failure during performance of a complex flight simulation task involving tracking, fuel management, and engine-status monitoring. Two groups of participants performed either all three flight simulation tasks simultaneously (multicomplex task) or the monitoring task alone (single-complex task); a third group performed a simple visual vigilance task (simple task). For the multicomplex task, monitoring for a single failure of automation control was poorer than when participants monitored engine malfunctions under manual control. Furthermore, more participants detected the automation failure in the first 10 min of a 30-min session than in the last 10 min of the session, for both the simple and the multicomplex task. Participants in the single-complex condition detected the automation failure equally well in both periods. The results support previous findings of inefficiency in monitoring automation and show that automation-related monitoring inefficiency occurs even when there is a single automation failure. Implications for theories of vigilance and automation design are discussed.
Article
The question of whether the automated system should serve as the human pilot's assistant or vice versa is examined. The authors consider whether the quantity and nature of pilot error have been altered by technology and increased automation. Attention is also given to the question of whether system designers should automate by replacing or by enhancing pilot performance. Finally, the problems that stand in the way of achieving the ambitious goals of high performance and high system reliability are considered.
Article
This article described three heuristics that are employed in making judgements under uncertainty: (i) representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; (ii) availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve judgements and decisions in situations of uncertainty.
Article
Thesis (Ph.D.), Ohio State University, 1994. Includes bibliographical references (leaves 118-122). Advisor: David D. Woods, Dept. of Industrial and Systems Engineering.
Article
A system for the automated management and control of arrival traffic, referred to as the Center-TRACON Automation System (CTAS), has been designed by the ATC research group at NASA Ames Research Center. In a cooperative program, NASA and the FAA have efforts underway to install and evaluate the system at the Denver and Dallas/Ft. Worth airports. CTAS consists of three types of integrated tools that provide computer-generated intelligence for both Center and TRACON controllers to guide them in managing and controlling arrival traffic efficiently. One tool, the Traffic Management Advisor (TMA), establishes optimized landing sequences and landing times for aircraft arriving in the Center airspace several hundred miles from the airport; in the TRACON, TMA also resequences missed-approach aircraft and unanticipated arrivals. Another tool, the Descent Advisor (DA), generates clearances for the Center controllers handling arrivals, aimed at meeting the crossing times provided by TMA. In the TRACON, the Final Approach Spacing Tool (FAST) provides heading and speed clearances that produce an accurately spaced flow of aircraft on the final approach course. A database consisting of aircraft performance models, airline-preferred operational procedures, and real-time wind measurements contributes to the effective operation of CTAS. Extensive simulator evaluations of CTAS have demonstrated controller acceptance, delay reductions, and fuel savings.
Article
A three-year study of airline crews at two U.S. airlines who were flying an advanced technology aircraft, the Boeing 757, is discussed. The opinions and experiences of these pilots as they view the advanced, automated features of this aircraft, and contrast them with previous models they have flown, are discussed. Emphasis is placed on (1) training for advanced automation; (2) cockpit errors and error reduction; (3) management of cockpit workload; and (4) general attitudes toward cockpit automation. The limitations of the air traffic control (ATC) system on the ability to utilize the advanced features of the new aircraft are discussed. In general, the pilots are enthusiastic about flying an advanced technology aircraft, but they express mixed feelings about the impact of automation on workload, crew errors, and ability to manage the flight.
Article
The aims and methods of aircraft cockpit automation are reviewed from a human-factors perspective. Consideration is given to the mixed pilot reception of increased automation, government concern with the safety and reliability of highly automated aircraft, the formal definition of automation, and the ground-proximity warning system and accidents involving controlled flight into terrain. The factors motivating automation include technology availability; safety; economy, reliability, and maintenance; workload reduction and two-pilot certification; more accurate maneuvering and navigation; display flexibility; economy of cockpit space; and military requirements.
Article
The paper analyzes the role of human factors in flight-deck automation, identifies problem areas, and suggests design guidelines. Flight-deck automation using microprocessor technology and display systems improves performance and safety while leading to a decrease in size, cost, and power consumption. On the other hand, negative factors such as failure of automatic equipment, automation-induced error compounded by crew error, crew error in equipment set-up, failure to heed automatic alarms, and loss of proficiency must also be taken into account. Among the problem areas discussed are automation of control tasks, monitoring of complex systems, psychosocial aspects of automation, and alerting and warning systems. Guidelines are suggested for designing, utilising, and improving control and monitoring systems. Investigation into flight-deck automation systems is important as the knowledge gained can be applied to other systems such as air traffic control and nuclear power generation, but the many problems encountered with automated systems need to be analyzed and overcome in future research.
Article
In a likelihood alarm display (LAD) information about event likelihood is computed by an automated monitoring system and encoded into an alerting signal for the human operator. Operator performance within a dual-task paradigm was evaluated with two LADs: a color-coded visual alarm and a linguistically coded synthetic speech alarm. The operator's primary task was one of tracking; the secondary task was to monitor a four-element numerical display and determine whether the data arose from a 'signal' or 'no-signal' condition. A simulated 'intelligent' monitoring system alerted the operator to the likelihood of a signal. The results indicated that (1) automated monitoring systems can improve performance on primary and secondary tasks; (2) LADs can improve the allocation of attention among tasks and provide information integrated into operator decisions; and (3) LADs do not necessarily add to the operator's attentional load.
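The core mechanism of a likelihood alarm display, mapping the monitoring system's computed event likelihood onto a graded alert rather than a binary alarm, can be sketched as follows; the cut points, colors, and phrases are hypothetical, not the coding used in the study:

```python
# Sketch of a likelihood alarm display (LAD): the monitoring system's computed
# signal likelihood is mapped to a graded alert instead of a binary alarm.
# Cut points, colors, and phrases are hypothetical, not the study's coding.
def likelihood_alarm(p_signal):
    if p_signal >= 0.8:
        return ("red", "Signal highly likely - respond now")
    if p_signal >= 0.5:
        return ("amber", "Signal likely - check the monitor")
    if p_signal >= 0.2:
        return ("yellow", "Signal possible - check when able")
    return ("green", "No signal expected")

for p in [0.05, 0.3, 0.6, 0.9]:
    color, speech = likelihood_alarm(p)
    print(f"P(signal)={p:.2f} -> visual={color:6s} speech='{speech}'")
```

Graded coding of this kind lets the operator allocate attention in proportion to the evidence instead of treating every alert as equally urgent.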
Article
Consideration is given to some of the negative aspects of the trend toward increased automation of aircraft flight decks. The history of automated devices for navigation, communications and detection on board aircraft is reviewed. Instances of automatic system failure are identified which have led to accidents, and the events surrounding the downing of Korean Air Lines Flight 007 are reexamined within the context of a computer-based system failure. Finally, new software and interactive systems to reduce navigational error due to inadequate computer-assisted flight instruction (CAI) are described, with emphasis given to speech processing and intelligent CAI systems.
Article
Dissociation between performance and subjective workload measures was investigated in the theoretical framework of the multiple resources model. Subjective measures do not preserve the vector characteristics in the multidimensional space described by the model. A theory of dissociation was proposed to locate the sources that may produce dissociation between the two workload measures. According to the theory, performance is affected by every aspect of processing whereas subjective workload is sensitive to the amount of aggregate resource investment and is dominated by the demands on the perceptual/central resources. The proposed theory was tested in three experiments. Results showed that performance improved but subjective workload was elevated with an increasing amount of resource investment. Furthermore, subjective workload was not as sensitive as was performance to differences in the amount of resource competition between two tasks. The demand on perceptual/central resources was found to be the most salient component of subjective workload. Dissociation occurred when the demand on this component was increased by the number of concurrent tasks or by the number of display elements. However, demands on response resources were weighted in subjective introspection as much as demands on perceptual/central resources. The implications of these results for workload practitioners are described.