Figure - available from: International Journal of Social Robotics
The Modular Advanced Armed Robotic System (MAARS®) is an unmanned ground vehicle (UGV) developed by QinetiQ and designed for reconnaissance, surveillance, and target acquisition (RSTA) missions to increase the security of personnel manning forward locations.
Source publication
Hybrid military teams, formed by human warfighters and autonomous artificial agents, represent the technological future of Defence operations. Both the potential and the inherent limitations of current technology are well-known, but the cognitive–behavioral and motivational aspects of human–robot interaction on the battlefield have yet to be system...
Similar publications
This paper deals with the identification of machines in a smart city environment. The concept of machine biometrics is proposed in this work for the first time, as a way to authenticate machine identities interacting with humans in everyday life. This definition is imposed in modern years where autonomous vehicles, social robots, etc. are considere...
Citations
... One might consider human-agent relationships simple, since the human is typically in charge, but they often are not. Studies of soldier-robot relationships in the social robotics discipline have described extensive anthropomorphism and emotional attachment to robots [13]. When a robot company produced an online video of their robotic dog product and showed an engineer kicking it to demonstrate the robot's ability to remain balanced, some people were concerned about this "violent" act [74]. ...
... accessible and applicable anthropocentric knowledge) when their survival depends on the cohesion and solidarity of their team members. In situations where users perceive machines as a threat, or need to feel less isolated and alone (i.e. the desire for social contact and affiliation), they are more likely to anthropomorphise (Cappuccio, Galliott, and Sandoval 2021b). Because of the fuzzy nature of ML algorithmic logic, coupled with the high incentives for understanding and effectively interfacing with AI agents, the tendency to anthropomorphise the workings of many non-human AI agents will likely be especially acute. ...
... chatbots, digital avatars, deep-fake technology, and AI-augmented adversarial attacks and electromagnetic warfare) in ways that can make anthropomorphism more acute (Knight 2022). In tactical HMIs, the need for rapid decision-making in dynamic and contingent situations will complicate the challenge of accurately interpreting human bodily actions and subtle cues when AI agents (and machines and artificial tools generally) are used as a medium (Cappuccio, Galliott, and Sandoval 2021b). That is, interpreting the mental state of a combatant in close physical contact is generally easier than when they are using tools (drones, digital assistants, and other vehicles) that hide bodily expressions (Yong 2022). ...
Why are we likely to see anthropomorphisms in military artificial intelligence (AI) human-machine interactions (HMIs)? And what are the potential consequences of this phenomenon? Since its inception, AI has been conceptualised in anthropomorphic terms, employing biomimicry to digitally map the human brain and drawing analogies to human reasoning. Hybrid teams of human soldiers and autonomous agents controlled by AI are expected to play an increasingly significant role in future military operations. The article argues that anthropomorphism will play a critical role in future human-machine interactions in tactical operations. It identifies some potential epistemological, normative, and ethical consequences of humanising algorithms for the conduct of war, and considers the possible inversion of the AI-anthropomorphism phenomenon: the dehumanisation of war.
... We could use the AI-as-a-teammate framework (see Groom & Nass, 2007) and adopt a more subjective stance for framing our thinking. Cappuccio, Galliott, & Sandoval (2021) claim that this type of anthropomorphizing can have positive implications but requires a great deal of study and research. Instead of starting with nebulous, subjective factors, we prefer the more objective LOA approach. ...
Artificial intelligence (AI) can offset human intelligence, relieving its users of cognitive burden; however, the trade-off in this relationship between the computer and the user is complicated. The challenge in defining the correlation between an increase in the level of autonomy (LOA) of a computer and a corresponding decrease in the cognitive workload of the user makes it difficult to identify the return on investment for an implemented technology. There is little research to support the assumptions that (1) user workload decreases with greater LOA, (2) greater LOA leads to greater collaborative performance, and (3) factors like trust or automation bias do not vary with LOA. This chapter will discuss the implications of prior research into the relationship between LOA and cognitive workload, including the challenges of accurately and consistently measuring cognitive load using subjective, physiological, and performance-based methods. The chapter will also identify potential experiments and what they might tell us about the relationship between LOA and cognitive workload.
Keywords: Artificial intelligence (AI); Automation; Cognitive burden; Cognitive workload; Level of autonomy (LOA); Trust
... As these applications become normalized, the prospects for the deskilling of some forms of police work increase, as police perceive themselves as having less discretion when dealing with volatile situations. Confusions about the basic roles of the autonomous robots in relation to human roles are expanding as their capabilities increase (Cappuccio et al., 2021). Community participants will indeed receive some information about the police robots through the press, social media, and word-of-mouth, as well as whatever public information releases are provided by the relevant authorities. ...
Robots, artificial intelligence, and autonomous vehicles are associated with substantial narrative and image-related legacies that often place them in a negative light. This chapter outlines the basics of the "dramaturgical" and technosocial approaches that are used throughout this book to gain insights into how these emerging technologies are affecting deep-seated social and psychological processes. The robot as an "other" in the workplace and community, an object of attention and discussion, has been a frequently utilized theme of science fiction as well as a topic for research analysis, with many people "acting out" their anxieties and grievances. Human-AI contests and displays of robotic feats are often used to intimidate people and reinforce that individuals are not in control of their own destinies, which presents unsettling prospects for the future.
This volume provides a unique perspective on an emerging area of scholarship and legislative concern: the law, policy, and regulation of human-robot interaction (HRI). The increasing intelligence and human-likeness of social robots points to a challenging future for determining appropriate laws, policies, and regulations related to the design and use of AI robots. Japan, China, South Korea, and the US, along with the European Union, Australia and other countries are beginning to determine how to regulate AI-enabled robots, which concerns not only the law, but also issues of public policy and dilemmas of applied ethics affected by our personal interactions with social robots. The volume's interdisciplinary approach dissects both the specificities of multiple jurisdictions and the moral and legal challenges posed by human-like robots. As robots become more like us, so too will HRI raise issues triggered by human interactions with other people.
The Operator 5.0 concept calls for the self-resilience of operators in Industry 5.0, including its cognitive aspect. Despite attempts to develop supporting technologies, the results achieved remain loosely connected and lack a comprehensive approach. Seeking fresh direction, this study draws inspiration from a chaotic environment where cognitive resilience is a firm standard: military operations. A systematic literature review in Scopus investigated how technology-enabled cognitive resilience is achieved in this context. The extracted details show vast technological support, from field operations to the control space, against the individual or combined effect of stressors arising from the work environment, context, content, or users themselves. These technologies exert direct and indirect influence on the physical, mental, and cognitive aspects, creating a cognitive-resilience effect. The concept of human-machine symbiosis is proposed, with a framework spanning technology development to resilience training, to inspire developers to define a broader scope and engineers to facilitate comprehensive adoption of Operator 5.0 solutions.