Fig 4 - uploaded by Eleni Petraki
Levels of automation adopted from Sheridan and Verplank [39]

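The figure itself is not reproduced here. As a stand-in, the sketch below encodes the ten-level scale as it is commonly paraphrased in the literature; the level descriptions are assumptions and may not match the exact wording of the original figure.

from enum import IntEnum

class LevelOfAutomation(IntEnum):
    """Ten-level automation scale commonly attributed to Sheridan and Verplank.

    Descriptions (in comments) are paraphrased from the literature and may not
    match the wording used in the figure.
    """
    HUMAN_DOES_EVERYTHING = 1    # the computer offers no assistance
    OFFERS_ALTERNATIVES = 2      # the computer offers a complete set of action alternatives
    NARROWS_ALTERNATIVES = 3     # the computer narrows the selection down to a few
    SUGGESTS_ONE = 4             # the computer suggests one alternative
    EXECUTES_IF_APPROVED = 5     # executes the suggestion if the human approves
    VETO_WINDOW = 6              # allows the human limited time to veto before acting
    EXECUTES_THEN_INFORMS = 7    # acts automatically, then necessarily informs the human
    INFORMS_IF_ASKED = 8         # informs the human only if asked
    INFORMS_IF_IT_DECIDES = 9    # informs the human only if it decides to
    FULL_AUTONOMY = 10           # decides everything and acts, ignoring the human

# Example: levels at which the system acts without waiting for explicit approval.
acts_without_approval = [level for level in LevelOfAutomation
                         if level >= LevelOfAutomation.VETO_WINDOW]
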
Source publication
Article
Full-text available
This paper considers two emerging interdisciplinary, but related topics that are likely to create tipping points in advancing the engineering and science areas. Trusted Autonomy (TA) is a field of research that focuses on understanding and designing the interaction space between two entities each of which exhibits a level of autonomy. These entitie...

Similar publications

Preprint
Full-text available
This paper presents a hierarchical motion planner for planning the manipulation motion to repose long and heavy objects considering external support surfaces. The planner includes a task level layer and a motion level layer. We formulate the manipulation planning problem at the task level by considering grasp poses as nodes and object poses for edg...

Citations

... Trust also impacts the willingness of people to accept information and help from SARs (Freedy et al., 2007; Hancock et al., 2011) and their desire to use them for certain purposes (Naneva et al., 2020; Salem et al., 2015). As noted by Abbass et al. (2016), if people trust the technology and evaluate it positively, they are likely to delegate tasks to it that will help them make decisions in complex situations. Moreover, if the trust relationship is positively reinforced, task performance improves, and subsequently evaluations become more positive. ...
Preprint
Full-text available
Socially Assistive Robots (SARs) are expected to support autonomy, aging in place, and wellbeing in later life. For successful assimilation, it is necessary to understand factors affecting older adults' Quality Evaluations (QEs) of SARs, including the pragmatic and hedonic evaluations and overall attractiveness. Previous studies showed that trust in robots significantly enhances QE, while technophobia considerably decreases it. The current study aimed to examine the relative impact of these two factors on older persons' QE of SARs. The study was based on an online survey of 384 individuals aged 65 and above. Respondents were presented with a video of a robotic system for physical and cognitive training and filled out a questionnaire relating to that system. The results indicated a positive association between trust and QE and a negative association between technophobia and QE. A simultaneous exploration demonstrated that the relative impact of technophobia is significantly more substantial than that of trust. In addition, the pragmatic qualities of the robot were found to be more crucial to its QE than the social aspects of use. The findings suggest that implementing robotics technology in later life strongly depends on reducing older adults' technophobia regarding the convenience of using SARs and highlight the importance of simultaneous explorations of facilitators and inhibitors.
... Looking at recent attempts to operationalise Human-AI interaction (HAII), researchers are pointing toward the idea that AI systems should provide a platform over which humans and AI may safely act together (co-act) and compensate for each other's abilities and limitations in a symbiotic relationship (Abbass 2019; Abbass et al. 2016; Bousdekis et al. 2020; Fletcher et al. 2020; Hamann et al. 2016; Jarrahi 2018; Peeters et al. 2021). To co-act, AI systems should be designed as tools with programmed intentionality that may evolve autonomously by affecting, as well as being affected by, the exchanges with humans in positive and negative ways. ...
... Along this line, Fletcher et al. (2020) proposed an exploratory work to understand how to benchmark and demonstrate the value of Human-AI teams. Independently of which perspective gathers more consensus, the common trend is to conceive AI systems as co-active agents (Johnson et al. 2011) that may work in a symbiotic relationship with humans by creating additional value sustainable at the individual and collective level (Abbass et al. 2016; Bousdekis et al. 2020; Fletcher et al. 2020; Hamann et al. 2016; Jarrahi 2018; Peeters et al. 2021). Putting the exchange between AI and humans, rather than the user alone, at the centre of the design is a sort of 'Copernican revolution' for product design (Shneiderman 2020). ...
Article
Full-text available
The European Union (EU) Commission’s whitepaper on Artificial Intelligence (AI) proposes shaping the emerging AI market so that it better reflects common European values. It is a master plan that builds upon the EU AI High-Level Expert Group guidelines. This article reviews the master plan, from a culture cycle perspective, to reflect on its potential clashes with current societal, technical, and methodological constraints. We identify two main obstacles in the implementation of this plan: (i) the lack of a coherent EU vision to drive future decision-making processes at state and local levels and (ii) the lack of methods to support a sustainable diffusion of AI in our society. The lack of a coherent vision stems from not considering societal differences across the EU member states. We suggest that these differences may lead to a fractured market and an AI crisis in which different members of the EU will adopt nation-centric strategies to exploit AI, thus preventing the development of a frictionless market as envisaged by the EU. Moreover, the Commission aims at changing the AI development culture by proposing a human-centred and safety-first perspective that is not supported by methodological advancements, thus risking unforeseen social and societal impacts of AI. We discuss potential societal, technical, and methodological gaps that should be filled to avoid the risks of developing AI systems at the expense of society. Our analysis results in the recommendation that EU regulators and policymakers consider how to complement the EC programme with rules and compensatory mechanisms to avoid market fragmentation due to local and global ambitions. Moreover, regulators should go beyond the human-centred approach by establishing a research agenda that seeks answers to the open technical and methodological questions regarding the development and assessment of human-AI co-action, aiming for a sustainable diffusion of AI in society.
... Trust is an abstract concept and is complex and multidimensional (Abbass et al. 2016; Lee and See 2004; Siau and Wang 2018). Trust can be attributed to a wide variety of entities, including humans, machines (hardware and software), organizations, institutions (e.g., trust in a legal system), and countries. ...
... A variety of models exist describing the development of trust in automation (e.g., Abbass et al. 2016; Hancock et al. 2011; Hoff and Bashir 2015; Lee and See 2004; Muir 1994; Schaefer et al. 2016; Sheridan 2019a). As a cognitive process, trust has a long-term tendency that is relatively static unless it is broken (Jarvenpaa et al. 1998; Mayer et al. 1995). ...
... The study found that there was a lower level of trust in the autonomous agents in the low-performing human teams than in both the medium- and high-performing teams, while there was a loss of trust in autonomous systems among human teams at all three performance levels over time. In fact, some have argued that true trust is only attained when the agent itself has the free will to enter the trust relationship (Abbass et al. 2016). This touches on the essential topic of human-machine interaction: roles, responsibilities, and authorities (Hou et al. 2014). ...
Article
Full-text available
A trust model IMPACTS (intention, measurability, performance, adaptivity, communication, transparency, and security) has been conceptualized to build human trust in autonomous systems. A system must exhibit the seven critical characteristics to gain and maintain its human partner’s trust towards an effective and collaborative team in achieving common goals. The IMPACTS model guided a design of an intelligent adaptive decision aid for dynamic target engagement processes in a human-autonomy interaction context. Positive feedback from subject matter experts who participated in a large-scale exercise controlling multiple unmanned assets indicated the decision aid’s effectiveness. It also demonstrated the IMPACTS model’s utility as a design principle for enabling trust between a human-autonomy team.
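As a purely illustrative sketch (not code from the cited article), the seven IMPACTS characteristics can be captured as a simple checklist; the boolean pass/fail scoring and the missing() helper are assumptions added here for illustration.

from dataclasses import dataclass, fields

@dataclass
class ImpactsAssessment:
    """Checklist over the seven IMPACTS trust characteristics.

    The attribute names follow the IMPACTS acronym; the pass/fail scoring is an
    illustrative assumption, not part of the published model.
    """
    intention: bool = False
    measurability: bool = False
    performance: bool = False
    adaptivity: bool = False
    communication: bool = False
    transparency: bool = False
    security: bool = False

    def missing(self) -> list[str]:
        """Return the characteristics the system has not yet demonstrated."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Usage: a system that has demonstrated everything except transparency and security.
report = ImpactsAssessment(True, True, True, True, True, False, False)
print(report.missing())  # ['transparency', 'security']
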
... Last, but not least, human trust in automation has received the attention of many research works as a key factor impacting the human tendency to delegate tasks to their machine counterparts (J. D. Lee & See, 2004; J. Lee & Moray, 1992; H. A. Abbass, Petraki, Merrick, Harvey, & Barlow, 2016). As trust is at the center of this thesis, it is discussed in detail in the next section. ...
Thesis
Full-text available
Human-swarm interaction (HSI) is a research area that studies how human and swarm capabilities can be combined to successfully perform tasks that exceed the performance limits of single robot systems. The main objective of this thesis is to improve the success of HSI by improving the effectiveness of three interdependent elements: the swarm decision-making algorithm in performing its tasks, human interventions in swarm operation, and the interface between the human and the swarm. Swarm decision-making warrants investigation as the state-of-the-art algorithms perform very poorly under some conditions. Analysing the root causes of such failures reveals that a key performance inhibitor is the unreliable estimation of swarm members’ confidence in their judgements. Two different approaches are proposed to circumvent the identified issues. Performance evaluation under different conditions demonstrates the merits of the proposed approaches and shows that profound improvements to the effectiveness and efficiency of swarm decision-making are possible through the reliable estimation of confidence. Improving swarm effectiveness begets significant benefits to mission performance, but it can negatively affect the effectiveness of human interventions. Previous research has shown that when interacting with a highly reliable machine, humans tend to over-rely on the machine and exhibit notable complacency that limits their ability to detect and fix machine errors. Although over-trust in automation is widely blamed for such complacency, this attribution is yet to be empirically confirmed. This gap is addressed through an empirical investigation of trust in HSI. The results confirm the significant role of trust as a predictor of human reliance on swarm, which suggests that designing trust-aware HSI systems may reduce the negative impacts of human reliance. Utilising a highly reliable swarm while maintaining human vigilance is an objective that might not be possible without an effective human-swarm interface. As automation transparency has proven useful for boosting human understanding of machine operations, it could facilitate human awareness of machine limitations and possible failures. Thus, the thesis empirically examines the efficacy of swarm transparency as a potential intervention for minimising human complacency. The results assert the benefits of transparency in ensuring continued human contributions to the mission even when a highly reliable swarm is used.
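The thesis's own algorithms are not given here. As a generic illustration of why reliable confidence estimates matter in collective decision-making, the sketch below aggregates binary judgements weighted by each member's self-reported confidence (a standard weighted-vote scheme, not the method proposed in the thesis).

from dataclasses import dataclass

@dataclass
class Judgement:
    vote: bool         # the member's binary judgement, e.g. "option A is better"
    confidence: float  # self-estimated probability of being correct, in [0.5, 1.0]

def weighted_decision(judgements: list[Judgement]) -> bool:
    """Confidence-weighted majority vote (generic sketch, not the thesis's algorithm).

    If members' confidence estimates are miscalibrated, the weights are skewed,
    which is the failure mode the thesis attributes to state-of-the-art swarm
    decision-making.
    """
    score = sum(j.confidence if j.vote else -j.confidence for j in judgements)
    return score > 0.0

# Example: two well-calibrated confident members outweigh three unconfident ones.
swarm = [Judgement(True, 0.95), Judgement(True, 0.90),
         Judgement(False, 0.55), Judgement(False, 0.60), Judgement(False, 0.55)]
print(weighted_decision(swarm))  # True
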
... One of the main requirements that enables task delegation in such team settings is trust [11]. Trust was shown to be an influential variable with a causal effect on human reliance on swarm [12]. ...
... Another challenge for transparency is to consider how to support everyday human-machine interactions. In [11], the authors proposed using cyberspace to support such interactions, leading to the possibility of swarms existing beyond the physical. The adaptability, robustness, and scalability of swarm systems are also inspiring research into abstract modelling of cyber-physical systems to support the understanding of complex problems [31], [32]. ...
Article
Transparency is a widely used but poorly defined term within the explainable artificial intelligence literature. This is due, in part, to the lack of an agreed definition and the overlap between the connected — sometimes used synonymously — concepts of interpretability and explainability. We assert that transparency is the overarching concept, with the tenets of interpretability, explainability, and predictability subordinate. We draw on a portfolio of definitions for each of these distinct concepts to propose a Human-Swarm-Teaming Transparency and Trust Architecture (HST3-Architecture). The architecture reinforces transparency as a key contributor towards situation awareness, and consequently as an enabler for effective trustworthy Human-Swarm Teaming.
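As an informal illustration of the concept ordering asserted in this abstract (transparency as the overarching concept, with interpretability, explainability, and predictability as subordinate tenets), the mapping below is our own sketch; the glosses are paraphrases, not definitions taken from the HST3-Architecture.

# Concept ordering asserted in the abstract: transparency is the umbrella concept;
# the three tenets below sit underneath it. The glosses are informal paraphrases,
# not definitions from the HST3-Architecture.
TRANSPARENCY_TENETS = {
    "interpretability": "how readily a human can make sense of the system's state and workings",
    "explainability": "the system's ability to give reasons for its behaviour",
    "predictability": "how well a human can anticipate what the system will do next",
}

def is_tenet_of_transparency(concept: str) -> bool:
    """Return True if the concept is treated as subordinate to transparency."""
    return concept.strip().lower() in TRANSPARENCY_TENETS

print(is_tenet_of_transparency("Explainability"))  # True
print(is_tenet_of_transparency("transparency"))    # False: it is the overarching concept
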
... Kaber (2018) proposes a conceptual framework of autonomous agents and argues that they must be independent, viable and self-governing. Abbass et al. (2016) provide a very similar definition of autonomy in which viability is replaced with reliance on the agents' own laws. ...
Article
Full-text available
A range of terms and concepts referring to autonomous vehicle technologies are used in both the scientific and grey literature. Different, often overlapping, concepts and adjectives are used to describe automated vehicles. This abundance of terminology can create conditions for confusion and factual misinterpretation among audiences and between authors. This paper argues that the lack of clarity between automated and autonomous cars contributes to increasing expectations of current technology and to inappropriate predictions by public and governments alike. The “autonomous” car, or vehicle, is a misnomer that could mislead potential users, and its use may well result in a backlash of rejection, slowing development. To provide an overview of driving automation vocabulary, a search of publications referencing “autonomous”, “automated”, “driverless” and “self-driving” cars or vehicles in the ScienceDirect library was conducted. Results showed that all of these terms were widely used in the scientific literature investigated, despite obvious differences in meaning between the concepts. The impact of the incorrect use of these terms on individuals’ acceptance is discussed and clear definitions are provided.
... The aforementioned rankings and "happiness" indices, such as the World Happiness Report, conceptualize trust as social trust, or the notion that people in general can be trusted [63]. Trust has therefore become a variable of great interest in the extension of cultural values to AI [64,65]. According to Mayer et al. [53], trust potentially exposes one to a vulnerability of some sort [65]. How can national strategic policies for AI enable and support trust between human and machine or AI (social trust), or citizen and government/nation-state (institutional trust)? ...
Article
Full-text available
Using textual analysis methodology with Hofstede's cultural dimensions as a basis for cross-national comparison, the manuscript explores the influence of the cultural values of trust, transparency, and openness in Nordic national artificial intelligence (AI) policy documents. Where many AI processes are technologies hidden from the view of the citizen, how can public institutions support and ensure these high levels of trust, transparency, and openness in Nordic culture and extend these concepts of "digital trust" to AI? One solution is by authoring national policy that upholds cultural values and personal rights, ultimately reinforcing these values in their societies. The paper highlights differences in how Nordic nations position themselves using cultural values as organizing principles, with the author showing that these values (i.e., trust through clear information and information security, transparency through AI literacy education and clear algorithmic decision making, and openness by creating data lakes and data trusts) support the development of AI technology in society. The analysis shows that the three cultural values are upheld and influence Nordic national AI strategies, while themes of privacy, ethics, and autonomy are present, and democracy, a societal building block in the Nordics, is especially prominent in the policies. For policy development, policy leaders must understand that without citizen involvement in AI implementation, or without citizen AI education, we risk alienating those whom these services are meant to serve and whose access they are meant to improve.
... Thus, autonomy seems to require something above and beyond even fairly sophisticated automation. However, this issue is still open for debate (Abbass, Petraki, Merrick, Harvey, & Barlow, 2016). According to Abbass et al. (2016), autonomy comprises aspects of self-governance, suggesting that agents rely on their own laws and work independently. To achieve true autonomy in AMAs, ethical cognitive architectures must endow AMAs with a set of norms and laws that govern their decision-making, but also with a set of cognitive functions for learning and reasoning that allows AMAs to acquire new knowledge, behaviors, skills, values, or preferences, or to modify the ones they already have, in order to improve their behavior. ...
Article
New technologies based on artificial agents promise to change the next generation of autonomous systems and therefore our interaction with them. Systems based on artificial agents, such as self-driving cars and social robots, are examples of this technology that seeks to improve the quality of people’s lives. Cognitive architectures aim to create some of the most challenging artificial agents, commonly known as bio-inspired cognitive agents. This type of artificial agent seeks to embody human-like intelligence in order to operate and solve problems in the real world as humans do. Moreover, some cognitive architectures, such as Soar, LIDA, ACT-R, and iCub, try to be fundamental architectures for the Artificial General Intelligence model of human cognition. Therefore, researchers in the machine ethics field face ethical questions related to what mechanisms an artificial agent must have for making moral decisions in order to ensure that its actions are always ethically right. This paper aims to identify some challenges that researchers need to solve in order to create ethical cognitive architectures. These cognitive architectures are characterized by the capacity to endow artificial agents with appropriate mechanisms to exhibit explicit ethical behavior. Additionally, we offer some reasons to develop ethical cognitive architectures. We hope that this study can be useful to guide future research on ethical cognitive architectures.
... Trust can define the way people interact with technology; this has been defined as trust in automation [42]. Research has shown that automation is severely restricted and becomes impractical if untrusted by humans, in particular in applications where high levels of cooperation are needed [1]. Trust overlaps across multiple research areas and domains. ...
Chapter
Full-text available
Due to many factors that range from ethical considerations and accountability to technological imperfection in autonomous systems, humans will continue to be an integral part of any meaningful autonomous system. While shepherding offers a technological concept that allows a human to operate a significantly larger number of autonomous systems than a human can handle in today’s environment, it is important to realise that a significant proportion of accidents today is due to human error. The scalability promise that shepherding offers comes with possible challenges, including those arising from the cognitive load imposed on human operators and the need to smoothly integrate the human, as a biological autonomous system, with the wider multi-agent autonomous system of future operating environments. In this chapter, we bring together the dimensions of this complex problem. We present carefully selected factors to cover human performance, especially for cognitively demanding tasks and situation awareness, and how these factors contribute to trust in the system. We then present the Human Factors Operating Picture (H-FOP), which offers a real-time situation awareness picture of human performance in this complex environment. We conclude with the concept of operation for integrating H-FOP with the human-swarm teaming problem, with a focus on reliance on shepherding as the swarm guidance and control method.
... The same survey illustrated the low adoption of AI-based services in professional sectors, with only 9% trusting financial AI services and only 4% trusting AI-based services with hiring employees. Abbass et al. (2016) propose a generalised model of trust for humans and machines, where trust is a social operator that balances the complexity inherent in social systems and the environment. According to Abbass (2019), it might be possible to standardise all AI-based services by adding an interface that assesses human trustworthiness and presents people with the information at a sufficient pace and in a form suitable for them to understand and act on. ...
Article
Full-text available
Introduction We are increasingly exposed to applications that embed some sort of artificial intelligence (AI) algorithm, and there is a general belief that people trust any AI-based product or service without question. This study investigated the effect of personality characteristics (Big Five Inventory (BFI) traits and locus of control (LOC)) on trust behaviour, and the extent to which people trust the advice from an AI-based algorithm more than from humans, in a decision-making card game. Method One hundred and seventy-one adult volunteers decided whether the final covered card, in a five-card sequence over ten trials, had a higher/lower number than the second-to-last card. They either received no suggestion (control), recommendations from what they were told were previous participants (humans), or recommendations from an AI-based algorithm (AI). Trust behaviour was measured as response time and concordance (the number of participants' responses that were the same as the suggestion), and trust beliefs were measured as self-reported trust ratings. Results It was found that LOC influences trust concordance and trust ratings, which are correlated. In particular, LOC negatively predicted trust concordance beyond the BFI dimensions. As LOC levels increased, people were less likely to follow suggestions from either humans or AI. Neuroticism negatively predicted trust ratings. Openness predicted reaction time, but only for suggestions from previous participants. However, people chose the AI suggestions more than those from humans, and self-reported that they believed such recommendations more. Conclusions The results indicate that LOC accounts for significant variance in trust concordance and trust ratings, predicting beyond the BFI traits, and affects the way people choose whom they trust, whether humans or AI. These findings also support AI-based algorithm appreciation.
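For readers unfamiliar with the phrase "predicting beyond the BFI dimensions": incremental prediction of this kind is usually established by comparing nested regression models. The sketch below shows that comparison on synthetic placeholder data; the column names, the use of statsmodels, and the data itself are assumptions for illustration, not the study's materials or analysis code.

# Hypothetical sketch of a nested-model comparison: does adding locus of control
# (LOC) explain variance in trust concordance over and above the five BFI traits?
# Data are synthetic placeholders; column names are assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 171  # sample size reported in the abstract
bfi = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]
df = pd.DataFrame(rng.normal(size=(n, 6)), columns=bfi + ["loc"])
df["trust_concordance"] = rng.normal(size=n)  # placeholder outcome

base = sm.OLS(df["trust_concordance"], sm.add_constant(df[bfi])).fit()
full = sm.OLS(df["trust_concordance"], sm.add_constant(df[bfi + ["loc"]])).fit()

print(f"Incremental R^2 from adding LOC: {full.rsquared - base.rsquared:.3f}")
print(full.compare_f_test(base))  # (F statistic, p-value, df difference)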