Book

Culture and Human-Robot Interaction in Militarized Spaces: A War Story

Authors: Julie Carpenter

Abstract

Explosive Ordnance Disposal (EOD) personnel are some of the most highly trained people in the military, with a job description that spans everything from defusing unexploded ordnance to protecting VIPs and state dignitaries. EOD personnel were also among the first military groups to work with robots every day. These robots have become an increasingly important tool in EOD work, enabling people to work at safer distances in many dangerous situations. Based on exploratory research investigating interactions between EOD personnel and the robots they use, this study richly describes the nuances of these reciprocal human-robot influences, especially those related to the emotions operators associate with the robots. In particular, this book examines the activities, processes and contexts that influence or constrain everyday EOD human-robot interactions, the human factors that are shaping the (robotic) technology, and how people and culture are being changed by using it. The findings from this research have implications for future personnel training and for the refinement of robot design considerations in many fields that rely on critical small group communication and decision-making skills.
... This paper, however, specifically focuses on humanoid robots because of the higher likelihood for people to anthropomorphise these robots, form social and emotional bonds with them and, therefore, possibly allow them to crowd out human relations. Although people do anthropomorphise and form bonds with robots that are not humanoids [6,39,42], the more humanlike something looks and behaves, the more likely we are to anthropomorphise and relate to it in a humanlike way [11,16]. This paper proceeds as follows: Before tackling the main aim of the paper, we first need a clear explanation of what humanoid robots are, as well as the nature of our relations with them. ...
Article
Full-text available
This paper considers ethical concerns with regard to replacing human relations with humanoid robots. Many have written about the impact that certain types of relations with robots may have on us, and why we should be concerned about robots replacing human relations. There has, however, been no consideration of this issue from an African philosophical perspective. Ubuntu philosophy provides a novel perspective on how relations with robots may impact our own moral character and moral development. This paper first discusses what humanoid robots are, why and how humans tend to anthropomorphise them, and what the literature says about robots crowding out human relations. It then explains the ideal of becoming "fully human", which pertains to being particularly moral in character. In ubuntu philosophy, we are not only biologically human, but must strive to become better, more moral versions of ourselves, to become fully human. We can become fully human by having other-regarding traits or characteristics within the context of interdependent, or humane, relationships (such as by exhibiting human equality, reciprocity, or solidarity). This concept of becoming fully human is important in ubuntu philosophy. Having explained that idea, the paper then puts forward its main argument: that treating humanoid robots as if they were human is morally concerning if they crowd out human relations, because such relations prevent us from becoming fully human. This is because we cannot experience human equality, solidarity, and reciprocity with robots, qualities which can be seen to characterise interdependent, or humane, relations with human beings.
... Such characteristics are not exclusive requirements for an emotional bond to be formed. Indeed, it has been shown that people become attached to objects or other kinds of technologies even when these lack any social dimension (Sung et al., 2007; Scheutz, 2012; Carpenter, 2016). However, the anthropomorphic and deceptive features of social robots further and deeply encourage the elicitation of affection in the human. ...
Conference Paper
Can trust be meaningfully attributed to technology? If so, under which conditions? By first presenting a conceptual analysis of trust, which differentiates between reliability and affective trust, we explore these intentionally broad questions through the analysis of the specific case of trusting social robots equipped with artificial emotional intelligence (AEI). Given their emotional capacities, which arguably strengthen the potential for deception, AEI social robots are considered the most likely candidates for experiencing trust-like attitudes towards technology. Determining whether, and what kind of, trust applies to relationships between humans and such robots will, we argue, be useful for determining what sort of trust can meaningfully be applied to human-technology interactions more broadly. This novel approach to the issue of trust in technology is underexplored in human-technology interaction, and the results presented will enable designers, citizens, and politicians to make better informed decisions regarding AEI social robots' development. Keywords: Trust, social robots, artificial emotional intelligence
... On the one hand, there are problems associated with humans building emotional bonds with their robots. Past research has shown that humans are less willing to put robots in harm's way, even when the robot's purpose is to be employed in a dangerous situation to avoid human harm (Carpenter, 2016). There is also evidence that humans can develop strong bonds with robots which can create subgroups in human-robot teams (You & Robert, 2023). ...
Preprint
Full-text available
Organizations are facing the new challenge of integrating humans and robots into one cohesive workforce. Relational demography theory (RDT) explains the impact of dissimilarities on when and why humans trust and prefer to work with others. This paper proposes that RDT would be a useful lens to help organizations understand how to integrate humans and robots into a cohesive workforce. We proposed a research model based on RDT and examined dissimilarities in gender and co-worker type (human vs. robot) along with dissimilarities in work style and personality. To empirically examine the research model, two experiments were conducted with 347 and 422 warehouse workers. Results show that the negative impacts of gender, work style, and personality dissimilarities on swift trust depended on the co-worker type. Gender dissimilarity had a stronger negative impact on swift trust in a robot co-worker, while work style and personality had a weaker negative impact on swift trust in a robot co-worker. Also, swift trust in a robot co-worker increased the preference for a robot co-worker over a human co-worker, while swift trust in a human co-worker decreased such preferences. Overall, this research contributes to our current understanding of human-robot collaboration by identifying the importance of dissimilarity from the perspective of RDT.
... 12 Darling 2017; see also Darling 2016, 2021. 13 See also my discussion of "The ASIMO Problem" in Schwitzgebel and Garza 2015. 14 Garreau 2007; Singer 2009; Carpenter 2016; Gunkel 2018. 15 Shevlin 2021. ...
Preprint
An Artificially Intelligent system (an AI) has debatable personhood if it's epistemically possible either that the AI is a person or that it falls far short of personhood. Debatable personhood is a likely outcome of AI development and might arise soon. Debatable AI personhood throws us into a catastrophic moral dilemma: Either treat the systems as moral persons and risk sacrificing real human interests for the sake of entities without interests worth the sacrifice, or don't treat the systems as moral persons and risk perpetrating grievous moral wrongs against them. The moral issues become even more perplexing if we consider cases of possibly conscious AI that are subhuman, superhuman, or highly divergent from us in their morally relevant properties.
... Byron Reeves and Clifford Nass note that "computers, in the way that they communicate, instruct, and take turns interacting, are close enough to human that they encourage social responses (…) any medium that is close enough will get human treatment, even though people know it's foolish and even though they likely will deny it afterwards" [14]. There are also a number of studies on the close personal attitudes soldiers take toward military PackBots, which retrieve objects from dangerous territory and thus reduce soldiers' risks: the robots are given names, awarded battlefield promotions, and their deaths are mourned [17,18]. As David J. Gunkel notes, this attitude to technology is the opposite of its traditional instrumental interpretation: "it happens in direct opposition to what otherwise sounds like good common sense: They are just technologies-instruments or tools that feel nothing" [19]. ...
Chapter
The discussion about the special position of robots is often based on their similarity to humans. Our research shows that non-anthropomorphic smart appliances also receive a special regard that differs from attitudes toward ordinary household appliances. We conducted an ethnographic analysis of 229 cases in which users described online interaction with robot vacuum cleaners, a smart home, a smart refrigerator, and other smart appliances. The largest number of stories we received were devoted to robot vacuum cleaners. These are given names, attributed characters, and their activities are described in zoomorphic or anthropomorphic terms. In our opinion, one of the important reasons for this is the ability of smart technologies to communicate and actively interact. The so-called "desire to establish contact", along with the performance of quite complex functions, leads people to empathise and respond. We propose that the most accurate way to describe this relationship is in terms of a game: people realise that they are acting within special fictional conditions that do not correspond to reality, but they enjoy it. People tend to regard even the mistakes of some robots indulgently and are ready to help. Real irritation is caused by cases in which technologies claim to manage people's lives.
... Besides the utilitarian use of technology, there is also a hedonic, pleasure- and attachment-oriented use of technology (Van der Heijden, 2004), especially when robots offer interaction possibilities that allow human beings to build long-term relationships, such as friendship and attachment, even if this relationship is one-way for the moment. Moreover, emotional attachment towards a robot can sometimes generate problems (Carpenter, 2013, 2016). But emotional proximity and confidence are necessary if robots are to be considered a companion, an assistant or a guide in daily and intimate activity. ...
Conference Paper
This paper aims to investigate the impact of perceived autonomy and perceived risk on attitudes and opinions about two assistive robots (Paro© and Asimo©), as factors explaining the probability of becoming "friends" with a robot. The worldwide population of elderly people is growing rapidly, and in the coming decades the proportion of older people in developed countries will change significantly. This demographic shift will create a huge increase in demand for domestic and healthcare robotics systems. But the spread of robots in everyday life, particularly for purposes of healthcare, already gives rise to questions about acceptability and moral and legal responsibility. A robotics system can be powerful and useful, yet that alone is no reason why the system will be usable and/or desirable and, in the end, accepted. It is still unclear how well these new "faux-people" will be accepted by society, for they raise fundamental questions about what it means to be human, especially at home or in nursing homes. METHOD. In a large online survey conducted in France, 2,783 participants (936 adolescents with a mean age of 12.2 years; 1,077 adults with a mean age of 33.4 years; and 770 seniors with a mean age of 71.3 years) were asked to complete three questionnaires: (1) the DOSPERT scale (Domain-Specific Risk-Taking; Blais & Weber, 2006) to assess participants' risk attitudes and perceptions of risk; (2) the revised version of the FQUA-R scale (Friendship Quality-Revised; Thien, Razak & Jamil, 2012) to assess close relationships and potential friendship with a robot; (3) the PAS (Perception of Autonomy Scale; Lombard & Dinet, 2015) to assess positive and negative attitudes towards the autonomy of robots. Each participant was asked to complete the three questionnaires twice: before and after viewing two videos showing the two assistive robots (Paro© and Asimo©) interacting with people. In one of the videos, a young woman interacts with the robotic baby seal Paro© and explains its benefits for elderly people ("Paro© gives kindness", "it allows an attachment to be created"); we also see an elderly woman caressing Paro©. In the other video, several physical characteristics of Asimo© are presented (size, weight) and the robot performs several tasks while interacting with a young woman (Asimo© walks, runs, plays football, opens a bottle, serves a glass, etc.). RESULTS AND DISCUSSION. For the two robots, structural equation modelling was used to determine the relationships between all the variables. Results mainly showed that (i) perceived risk is mainly and significantly explained by attitudes about risks in the health and social domains, whatever the gender and age, and (ii) perceived autonomy has a direct and positive effect on friendship quality. In other words, our results tend to confirm that the three factors (perceived risk, perceived autonomy and friendship) are strongly interrelated and should be integrated in studies investigating the acceptability of assistive robots, and that these three factors have different impacts according to the physical appearance of the robot (human-like for Asimo© or animal-like for Paro©). Industrial and theoretical perspectives are discussed.
... The first of these is autonomous movement [44,79]. With regard to robots, this phenomenon can be analysed, among others, through studies on the Roomba vacuum cleaner [115,116], the robotic 'bug' HEXBUG [38,39] or military robots deployed to defuse bombs [24]. If the robot's movement resembles so-called biological motion, in many cases it causes even more intense reactions in people. ...
Article
Full-text available
This paper discusses the phenomenon of empathy in social robotics and is divided into three main parts. Initially, I analyse whether it is correct to use this concept to study and describe people's reactions to robots. I present arguments in favour of the position that people actually do empathise with robots. I also consider what circumstances shape human empathy with these entities. I propose that two basic classes of such factors be distinguished: biological and socio-cognitive. In my opinion, one of the most important among them is a sense of group membership with robots, as it modulates empathic responses to representatives of our- and other-groups. The sense of group membership with robots may be co-shaped by socio-cognitive factors such as one's experience, familiarity with the robot and its history, motivation, accepted ontology, stereotypes or language. Finally, I argue in favour of formulating a pragmatic and normative framework for manipulating the level of empathy in human-robot interactions.
... 8 Engagement with social robots is fairly new to humans. We are only just beginning to work out the place that such objects will have in our society and the significance of our relationship to them, and there are a number of ways that we ... 5 See Garreau (2007) and Carpenter (2015) for further evidence of soldiers developing unexpectedly close emotional relationships with military robots. 6 Note that Darling uses language that may constitute framing by asking them to 'kill' their Pleo, 'kill' being an anthropomorphising term. ...
Article
Full-text available
In this paper I propose a Fictional Dualism model of social robots. The model helps us to understand the human emotional reaction to social robots and also acts as a guide for us in determining the significance of that emotional reaction, enabling us to better define the moral and legislative rights of social robots within our society. I propose a distinctive position that allows us to accept that robots are tools, that our emotional reaction to them can be important to their usefulness, and that this emotional reaction is not a direct indicator that robots deserve either moral consideration or rights. The positive framework of Fictional Dualism provides us with an understanding of what social robots are and with a plausible basis for our relationships with them as we bring them further into society.
... As technical developments increasingly enable robots to work in teams together with humans, social scientists are exploring how people think of and interact with robots as team members. Despite early work suggesting that robots will not be trusted as team members (Groom & Nass, 2007), interviews with members of military bomb disposal teams showed that they come to think of even "merely functional" robots they work with as team members, who cannot be easily replaced by another similar robot if damaged (Carpenter, 2016). A study by Correia et al. (2018) had two human-robot teams compete in a game, and found that robots expressing group-based emotions based on their team's outcomes (rather than emotions based on their individual outcomes) were liked better and trusted more by their human teammates. ...
... 8. Possibility of emotional affinity with robots: However bizarre it may sound at the outset, by designing robots with a human-like semblance there is a chance of people developing emotional associations with them. Lin (2016) ponders the question of robots' relationships with humans and cites Carpenter (2016), who has addressed these complex issues of attachment, trust, and dependence on robots in her book, Culture and Human-Robot Interaction in Militarized Spaces: A War Story. Despite people's distinct awareness that these robots are mere machines and tools, there is still some degree of (pseudo) social interaction. ...
... We already see instances of such interactions today. For example, military personnel working with robots to defuse bombs sometimes demand that a broken robot be fixed, rather than replaced, because it is part of their team (Carpenter, 2016), or hold full military funerals for a robot that served well. People owning the AIBO pet dog sometimes prefer to fix an outdated model for sentimental reasons, rather than buy a new one (Robertson, 2017; Suzuki, 2015). ...
Article
How do people treat robot teammates compared to human opponents? Past research indicates that people favor, and behave more morally toward, ingroup than outgroup members. People also perceive that they have more moral responsibilities toward humans than nonhumans. This paper presents a 2×2×3 experimental study that placed participants (N = 102) into competing teams of humans and robots. We examined how people morally behave toward and perceive players depending on players' Group Membership (ingroup, outgroup), Agent Type (human, robot), and participant group Team Composition (humans as minority, equal, or majority within the ingroup compared to robots). Results indicated that participants favored the ingroup over the outgroup and humans over robots – to the extent that they favored ingroup robots over outgroup humans. Interestingly, people differentiated more between ingroup than outgroup humans and robots. These effects generalized across Team Composition.
... A group of soldiers in Iraq, for example, held a funeral for their robot and created a medal for it (Kolb 2012). Carpenter provides an in-depth examination of human-robot interaction from the perspective of Explosive Ordnance Disposal (EOD) teams within the military (Carpenter 2016). Her work offers a glimpse of how naturally and easily people anthropomorphise robots they work with daily. ...
... Socioemotional relationships with robots People can spontaneously form socioemotional bonds with robots, even those that are not specifically designed to elicit social behavior, as demonstrated by evidence of emotional attachment in owners of home-cleaning robots (Sung et al., 2007) and in soldiers working alongside bomb-disposal robots (Carpenter, 2015). In 2015, a Buddhist temple in Japan made world headlines by conducting a ceremony for Aibo robot dogs that were due to be dismantled (Brown, 2015). ...
Article
Full-text available
Social robots that can interact and communicate with people are growing in popularity for use at home and in customer-service, education, and healthcare settings. While growing evidence suggests that co-operative and emotionally-aligned social robots could benefit users across the lifespan, controversy continues about the ethical implications of these devices and their potential harms. In this perspective, we explore this balance between benefit and risk through the lens of human-robot relationships. We review the definitions and purposes of social robots, explore their philosophical and psychological status, and relate research on human-human and human-animal relationships to the emerging literature on human-robot relationships. Advocating a relational rather than essentialist view, we consider the balance of benefits and harms that can arise from different types of relationship with social robots and conclude by considering the role of researchers in understanding the ethical and societal impacts of social robotics.
... In the military sphere, too, where one might expect weaker emotional bonds, there are reports illustrating behavioral patterns that one would more readily expect to find amongst humans. For example, soldiers develop relationships with their robot comrades, which has led, for instance, to mine-sweeping robots being buried with honors [12]. ...
Chapter
Full-text available
This paper investigates reasons to argue for social norms regulating our behavior towards artificial agents. By problematizing the assertion that moral agency is, in principle, a necessary prerequisite for any form of moral patiency, reasons are examined which are independent of attributing moral agency to artificial agents, but which speak for morally appropriate behavior towards artificial systems. Suggesting a consequentialist strategy, potential negative impacts of human-machine interactions are analyzed with a focus on factors that support a transfer of behavioral patterns from human-machine interactions to human-human interactions.
... The possibility of affective attachment to robots is beginning to draw some attention. Julie Carpenter (2016), for one, notes that military personnel at the Explosive Ordnance Disposal (EOD) unit grow attached to the robots they use to dispose of bombs even when these robots are tracked and clawed and do not at all look like humans or pets. Attachment can take many forms. ...
Chapter
In this paper, I consider whether social robots can nudge us in standard ways, i.e., by influencing choice/behavior in predictable ways, and the ethical issues attached to that. I then argue that social robots can also nudge us in non-standard ways, by influencing in predictable ways cognitive and affective states as opposed to simple choice/behavior, and consider the ethical issues attached to that. Before tackling the issues specific to nudging and its ethics in connection to social robots, I introduce nudging and its ethics more in general.
... For example, people expect a robot that looks or behaves like a human to be able to engage in conversation, even if the robot cannot. Certain types of anthropomorphism occur with robotic teammates even if the robots do not look like humans; e.g., people have held funerals for mechanomorphic (i.e., machine-like) military robots (Carpenter, 2016) and dog-like robots (Robertson, 2018). Given the effects of anthropomorphism on human behavior, we measure social responses to robots. ...
Article
Autonomous robotic vehicles (i.e., drones) are potentially transformative for search and rescue (SAR). This paper works toward wearable interfaces, through which humans team with multiple drones. We introduce the Virtual Drone Search Game as a first step in creating a mixed reality simulation for humans to practice drone teaming and SAR techniques. Our goals are to (1) evaluate input modalities for the drones, derived from an iterative narrowing of the design space, (2) improve our mixed reality system for designing input modalities and training operators, and (3) collect data on how participants socially experience the virtual drones with which they work. In our study, 17 participants played the game with two input modalities (Gesture condition, Tap condition) in counterbalanced order. Results indicated that participants performed best with the Gesture condition. Participants found the multiple controls challenging, and future studies might include more training of the devices and game. Participants felt like a team with the drones and found them moderately agentic. In our future work, we will extend this testing to a more externally valid mixed reality game.
... There is evidence that adults form attachments to robots even when they are remote-controlled bomb disposal robots (Carpenter 2016) or robot vacuum cleaners (Sung et al. 2007). When the robots are furry robot pets or cute-looking humanoids, attachment formation is even more likely. ...
Article
Full-text available
Although some authors claim that deception requires intention, we argue that there can be deception in social robotics, whether or not it is intended. By focusing on the deceived rather than the deceiver, we propose that false beliefs can be created in the absence of intention. Supporting evidence is found in both human and animal examples. Instead of assuming that deception is wrong only when carried out to benefit the deceiver, we propose that deception in social robotics is wrong when it leads to harmful impacts on individuals and society. The appearance and behaviour of a robot can lead to an overestimation of its functionality or to an illusion of sentience or cognition that can promote misplaced trust and inappropriate uses such as care and companionship of the vulnerable. We consider the allocation of responsibility for harmful deception. Finally, we make the suggestion that harmful impacts could be prevented by legislation, and by the development of an assessment framework for sensitive robot applications.
... In fact, preliminary research on robots on the battlefield has indicated the development of strong human-robot attachment, or even a feeling of "self-extension into the robot," that might influence operational decision-making. 129 More broadly, the introduction of new BCI technologies raises questions about the future structure of the human force. Service members may not want to provide the U.S. government, or its machines, with access to the inner workings of their minds. What does a company or platoon look like when some or all of the force is neurally plugged into various weapon systems, drones, or robots? ...
... According to our study, it will be necessary to craft the words for explaining automated agents to their human co-workers carefully, so that the workers can manage their expectations of those agents' capability or responsibility vis-à-vis their own. This will be even more important for AI and robots employed on safety-critical missions in extreme environments such as outer space or war, where issues of accountability and punishment frequently emerge [12,37]. Above all, our analysis of the public perception of AI and robots shows that co-work or cooperation between humans and autonomous electronic agents is fundamentally social [27]. ...
Preprint
Whether to give rights to artificial intelligence (AI) and robots has been a sensitive topic since the European Parliament proposed advanced robots could be granted "electronic personalities." Numerous scholars who favor or disfavor its feasibility have participated in the debate. This paper presents an experiment (N=1270) that 1) collects online users' first impressions of 11 possible rights that could be granted to autonomous electronic agents of the future and 2) examines whether debunking common misconceptions on the proposal modifies one's stance toward the issue. The results indicate that even though online users mainly disfavor AI and robot rights, they are supportive of protecting electronic agents from cruelty (i.e., favor the right against cruel treatment). Furthermore, people's perceptions became more positive when given information about rights-bearing non-human entities or myth-refuting statements. The style used to introduce AI and robot rights significantly affected how the participants perceived the proposal, similar to the way metaphors function in creating laws. For robustness, we repeated the experiment over a more representative sample of U.S. residents (N=164) and found that perceptions gathered from online users and those by the general population are similar.
... Robots are becoming increasingly prevalent, not only behind the scenes but also as members of human teams. For example, military teams work with bomb-defusing robots (Carpenter, 2016), factory workers with "social" industrial robots (Sauppé and Mutlu, 2015), and eldercare facilities with companion robots (Wada et al., 2005; Wada and Shibata, 2007; Chang and Šabanović, 2015). Such teaming is critical for advancing our society, because humans and robots have different skillsets, which can complement each other's expertise to enhance team outcomes (Kahneman and Klein, 2009; Chen and Barnes, 2014; Bradshaw et al., 2017). ...
Article
Full-text available
Past research indicates that people favor, and behave more morally toward, human ingroup than outgroup members. People showed a similar pattern for responses toward robots. However, participants favored ingroup humans more than ingroup robots. In this study, I examine if robot anthropomorphism can decrease differences between humans and robots on ingroup favoritism. This paper presents a 2 × 2 × 2 mixed-design experimental study with participants (N = 81) competing on teams of humans and robots. I examined how people morally behaved toward and perceived players depending on players’ Group Membership (ingroup, outgroup), Agent Type (human, robot), and Robot Anthropomorphism (anthropomorphic, mechanomorphic). Results replicated prior findings that participants favored the ingroup over the outgroup and humans over robots—to the extent that they favored ingroup robots over outgroup humans. This paper also includes novel results indicating that patterns of responses toward humans were more closely mirrored by anthropomorphic than mechanomorphic robots.
... Yet, interestingly, other actors decouple these issues. They point to the possibility that a child may become attached to the robot without likening it to a human, as in the case of military robots (Carpenter, 2016). A child psychiatrist at a medical-educational institute (IME) underlines how difficult it is to grasp, and to control, the relationship that forms between the child and the robot, even when any form of confusion is ruled out: "First of all, they still draw a very clear distinction between the robot and the adult who is present, who is the referent. ...
Article
Full-text available
The possibility, in the near future, of forming a personal and lasting relationship with social robots is regularly announced. However, the gap between this horizon and reality remains wide. Are the ethical questions voiced in the public sphere too far removed from the current stage of development of social robotics? What do the people who attempt to use this technology think? This article examines moral issues as they are problematized in situated experiences. It draws on a one-year comparative study of two social robots used in facilities for children with autism spectrum disorder. We study the ambivalent way in which professionals in care centers define the place of this new interactional artifact. We first show how social robots are beings whose sociability must be produced, abandoning the project of an "autonomous life" in favor of programmed interactions. We then highlight how professionals hesitate to establish the robot as a therapeutic tool like any other, questioning whether it needs to be handled differently. Finally, we analyze how the uses of this tool are surrounded by particular precautions: framing the relationship children maintain with the robot and limiting the robot's capacity for judgment. This analysis reveals how actors engage in a reflexive effort to moralize social robots.
Article
This article presents a standardized human–robot teleoperation interface (HRTI) evaluation scheme for mobile manipulators. Teleoperation remains the predominant control type for mobile manipulators in open environments, particularly for quadruped manipulators. However, mobile manipulators, especially quadruped manipulators, are relatively novel systems to be implemented in the industry compared to traditional machinery. Consequently, no standardized interface evaluation method has been established for them. The proposed scheme is the first of its kind in evaluating mobile manipulator teleoperation. It comprises a set of robot motion tests, objective measures, subjective measures, and a prediction model to provide a comprehensive evaluation. The motion tests encompass locomotion, manipulation, and a combined test. The duration for each trial is collected as the response variable in the objective measure. Statistical tools, including mean value, standard deviation, and T-test, are utilized to cross-compare between different predictor variables. Based on an extended Fitts' law, the prediction model employs the time and mission difficulty index to forecast system performance in future missions. The subjective measures utilize the NASA-task load index and the system usability scale to assess workload and usability. Finally, the proposed scheme is implemented on a real-world quadruped manipulator with two widely-used HRTIs, the gamepad and the wearable motion capture system.
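The abstract describes the prediction model only at a high level. As a rough illustration of how a Fitts'-law-style model can forecast task time from a mission difficulty index, here is a minimal Python sketch; the simple linear form MT = a + b · ID, the function names, and the sample data are assumptions for illustration, not the article's actual implementation.

```python
import numpy as np

# Minimal sketch: regress trial completion time on a mission difficulty
# index (ID), in the spirit of Fitts' law: MT = a + b * ID.
# The linear form, names, and data here are illustrative assumptions,
# not the model used in the cited article.

def fit_fitts_model(difficulty, times):
    """Least-squares fit of MT = a + b * ID; returns (a, b)."""
    design = np.column_stack([np.ones_like(difficulty), difficulty])
    coeffs, *_ = np.linalg.lstsq(design, times, rcond=None)
    a, b = coeffs
    return a, b

def predict_time(a, b, difficulty_index):
    """Forecast completion time for a future mission's difficulty index."""
    return a + b * difficulty_index

# Hypothetical trial data: difficulty indices and measured durations (s).
ids = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
durations = np.array([12.1, 15.8, 19.7, 24.2, 27.9])

a, b = fit_fitts_model(ids, durations)
print(f"Fitted model: MT = {a:.2f} + {b:.2f} * ID")
print(f"Predicted time at ID = 3.5: {predict_time(a, b, 3.5):.1f} s")
```

Once fitted on observed trials, such a model can be compared against new missions' actual durations to gauge how well the difficulty index captures task demands.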
Article
Full-text available
In this paper, we discuss the relation between recent philosophical discussions about meaning in life (from authors like Susan Wolf, Thaddeus Metz, and others) and the ethics of artificial intelligence (AI). Our goal is twofold, namely, to argue that considering the axiological category of meaningfulness can enrich AI ethics, on the one hand, and to portray and evaluate the small, but growing literature that already exists on the relation between meaning in life and AI ethics, on the other hand. We start out our review by clarifying the basic assumptions of the meaning in life discourse and how it understands the term ‘meaningfulness’. After that, we offer five general arguments for relating philosophical questions about meaning in life to questions about the role of AI in human life. For example, we formulate a worry about a possible meaningfulness gap related to AI on analogy with the idea of responsibility gaps created by AI, a prominent topic within the AI ethics literature. We then consider three specific types of contributions that have been made in the AI ethics literature so far: contributions related to self-development, the future of work, and relationships. As we discuss those three topics, we highlight what has already been done, but we also point out gaps in the existing literature. We end with an outlook regarding where we think the discussion of this topic should go next.
Chapter
In this chapter, I use the expression “robotic animism” to refer to the tendency that many people have to interact with robots as if the robots have minds or a personality. I compare the idea of robotic animism with what philosophers and psychologists sometimes refer to as “mind-reading”, as it relates to human interaction with robots. The chapter offers various examples of robotic animism and mind-reading within different forms of human-robot interaction, and it also considers ethical and prudential arguments for and against attributing minds and a personality to robots. In the last section of the chapter, I also consider the intriguing question of whether any robots that exist today could be said to have some sort of minds in some non-trivial sense.
Chapter
Common forms of discrimination, as well as the reproduction of normative stereotypes, are the order of the day in artificial intelligence as well. The contributors outline ways of reducing these flawed practices and negotiate the ambivalent relationship between queerness and AI from an interdisciplinary perspective. In parallel, they make room for a queer-feminist understanding of knowledge that always sees itself as partial, ambiguous, and incomplete. In doing so, they open up ways of engaging with AI that can move beyond reductive categorizations.
Article
Full-text available
The smart city is the product of deploying a digital infrastructure within the urban environment. It is a spectacle created by Industry 4.0 and promoted by municipalities here and elsewhere. The potential gains for city managers are presented to city dwellers in an emotional register. This article describes a series of social consequences that follow from the digitization of urban life. The text is an analytical proposal developed from ethnographic observations at the Laboratoire d'innovation urbaine de Montréal. It emerges that transplanting a digital infrastructure into the urban fabric is far from a trivial operation, since it: 1) accelerates the transfer of the management of human affairs to artifacts, 2) institutionalizes Ginzburg's evidential paradigm, 3) manufactures new non-human temporalities, and 4) revives the modern fiction of controlling the parts and the whole. From this new state of affairs arises the opportunity to institutionalize a rich collaboration between science fiction and public administration in order to better characterize the processes shaping the contemporary city. Indeed, the digitized city is the cocoon of the project of algorithmic management of the social. Over the course of these pages, the reader will see that what is at stake is not only the future of urban experience but also the future of the idea we have of ourselves: what does it mean to be human in the digitized city? What ultimately emerges from this reflection is the urgency of testing the reasonings and ideologies that daily encourage us to digitize our lives. The aim of this article is to help bring the digitized city to the heart of the public square so that it can be debated from a plurality of points of view, in order to guarantee, in the end, the sound functioning of democratic regimes.
Chapter
This chapter presents the core argument and the proposal for both the negative conditions for synthetic friends as well as some constructive elements. If a social machine fulfills the negative conditions (non-deception, non-exploitation, non-exclusivity, non-violence) for interacting with humans, we may consider this machine a synthetic friend. However, the positive side suggests that we should also endorse the fundamental differences between humans and machines in these friendships, such as the machines' immortality, their ability to memorize specific, time-stamped parts of our lives, and their ability to sympathize without having to suffer alongside us. Keywords: Synthetic friends, Immortality, Memories, Sympathy, Empathy
Chapter
This chapter discusses theories of how we relate to machines in the first place. This is a necessary step toward human–machine friendship as we should rule out those technologies that do not possess some minimum requirements for a social machine. We discuss specifically Gunkel’s, Coeckelbergh’s, and Kempt’s approaches, while also acknowledging the growing corpus of relational philosophy of technology. We settle on three elements for social machines relevant for our investigation: autonomous behavior, interactivity, and attachability.
Conference Paper
Modern robotics seems to have taken root in the theories of Isaac Asimov, in 1941. One area of research that has become increasingly popular in recent decades is the study of artificial intelligence, or AI, which aims to use machines to solve problems that, according to current opinion, require intelligence. This is related to the study of "social robots". Social robots are created in order to interact with human beings; they have been designed and programmed to engage with people by leveraging a "human" aspect and various interaction channels, such as speech or non-verbal communication. They therefore readily solicit social responsiveness in people, who often attribute human qualities to the robot. Social robots exploit the human propensity for anthropomorphism, and humans tend to trust them more and more. Several issues could arise from this kind of trust and from the ability of a "superintelligence" to "self-evolve", which could lead to the violation of the purposes for which it was designed by humans, becoming a risk to human security and privacy. This kind of threat concerns social engineering, a set of techniques used to convince users to perform a series of actions that allow cybercriminals to gain access to the victims' resources. The human factor is the weakest link in the security chain, and social engineers exploit human-robot interaction to persuade individuals to provide private information. An important research area that has shown interesting results for understanding the possibilities of human interaction with robots is "cyberpsychology". This paper aims to provide insights into how interaction with social robots could be exploited not only in positive ways but also through the same social engineering techniques borrowed from "bad actors" or hackers, to achieve purposes malevolent and harmful to people themselves. A series of experiments and interesting research results are shown as examples, in particular concerning the ability of robots to gather personal information and display emotions during interaction with human beings. Is it possible for social robots to feel and show emotions, and could human beings empathize with them? A broad area of research, which goes by the name of "affective computing", aims to design machines that are able to recognize human emotions and respond to them consistently. The aim is to apply human-human interaction models to human-machine interaction. A fine line separates the opinions of those who argue that, in the future, machines with artificial intelligence could be a valuable aid to humans and those who believe they represent a huge risk that could endanger human protection systems and safety. It is necessary to examine this new field of cybersecurity in depth to determine the best path to protect our future. Are social robots a real danger? Keywords: Human Factor, Cybersecurity, Cyberpsychology, Social Engineering Attacks, Human-Robot Interaction, Robotics, Malicious Artificial Intelligence, Affective Computing, Cyber Threats
Chapter
In this chapter, we address the problem of humanizing business when we must interact with intelligent robots and other AI systems, rather than real people, on a daily basis. There is a strong tendency to anthropomorphize pets and other animals that carries over to smart machines, leading us to replace human relationships with something less sophisticated and subtle. As a result, we argue, humanizing machines actually tends to dehumanize the workplace. We take an anthropological approach to this phenomenon that teaches three important lessons. One is that Western cultures have a particularly strong tendency to anthropomorphize machines, due to what Max Weber called the disenchantment of nature. Another is that humans have long interacted with intelligent nonhuman beings, such as domesticated work animals, without anthropomorphizing them. Finally, we can take a cue from these interactions to create analogous relationships with robots by neither humanizing nor objectifying them, but by relating to them in a manner that suits their capabilities. In particular, we can avoid anthropomorphism by involving workers in the training of AI systems, much as our ancestors trained domesticated animals, and by introducing ritual activities involving robots that clarify their ethical status and guide our interaction with them. Keywords: Artificial intelligence, Intelligent robots, Dehumanization, Anthropomorphism
Chapter
Full-text available
Technology is often an augur of both possibility and peril in our lives. It promises to make our lives easier, more enjoyable, more efficient, but it also threatens the status quo. The mobile phone is a recent example: people were better able to coordinate their lives and keep in touch with loved ones, but it also changed the nature of public space and upended social expectations. In the same way that mobile phones radically altered social norms in both public spaces and interpersonal relationships, artificial intelligence (AI) and robots are positioned as the next frontier of socially disruptive technologies. This chapter reviews the social adjustments prompted by mobile phone integration and looks forward to the future social integration of robots in everyday life. Drawing from qualitative data, the chapter explores people's perceptions of robots and how they would view others' interactions with robots. The findings reveal a general negativity toward, and otherwise ambivalence about, companionate AI like social robots, particularly in the ways such technology might undermine socialization and disrupt human-human relationships. Interviewees could perceive the potential need for others to have social robots but clearly delimited the contexts in which they would be acceptable. This ambivalent stance toward personal social robots speaks to the evolving norms and practices around emerging technologies, as well as the complexity they would add to interpersonal relationships. As people integrate and maintain social robots in their lives, coordinating this new presence will require an "emotional choreography" that may generate new interpersonal management techniques.
Article
The great technological achievements in the recent past regarding artificial intelligence (AI), robotics, and computer science make it very likely, according to many experts in the field, that we will see the advent of intelligent and autonomous robots that either match or supersede human capabilities in the midterm (within the next 50 years) or long term (within the next 100–300 years). Accordingly, this article has two main goals. First, we discuss some of the problems related to ascribing moral status to intelligent robots, and we examine three philosophical approaches—the Kantian approach, the relational approach, and the indirect duties approach—that are currently used in machine ethics to determine the moral status of intelligent robots. Second, we seek to raise broader awareness among moral philosophers of the important debates in machine ethics that will eventually affect how we conceive of key concepts and approaches in ethics and moral philosophy. The effects of intelligent and autonomous robots on our traditional ethical and moral theories and concepts will be substantial and will force us to revise and reconsider many established understandings. Therefore, it is essential to turn attention to debates over machine ethics now so that we can be better prepared to respond to the opportunities and challenges of the future.
Chapter
Full-text available
We know that robots are just machines. Why then do we often talk about them as if they were alive? Laura Voss explores this fascinating phenomenon, providing a rich insight into practices of animacy (and inanimacy) attribution to robot technology: from science-fiction to robotics R&D, from science communication to media discourse, and from the theoretical perspectives of STS to the cognitive sciences. Taking an interdisciplinary perspective, and backed by a wealth of empirical material, Voss shows how scientists, engineers, journalists - and everyone else - can face the challenge of robot technology appearing »a little bit alive« with a reflexive and yet pragmatic stance.
Article
Full-text available
Rapid developments in evolutionary computation, robotics, 3D-printing, and material science are enabling advanced systems of robots that can autonomously reproduce and evolve. The emerging technology of robot evolution challenges existing AI ethics because the inherent adaptivity, stochasticity, and complexity of evolutionary systems severely weaken human control and induce new types of hazards. In this paper we address the question how robot evolution can be responsibly controlled to avoid safety risks. We discuss risks related to robot multiplication, maladaptation, and domination and suggest solutions for meaningful human control. Such concerns may seem far-fetched now, however, we posit that awareness must be created before the technology becomes mature.
Article
An estimated 11% of adults report experiencing some form of cognitive decline, which may be associated with conditions such as stroke or dementia and can impact their memory, cognition, behavior, and physical abilities. While there are no known pharmacological treatments for many of these conditions, behavioral treatments such as cognitive training can prolong the independence of people with cognitive impairments. These treatments teach metacognitive strategies to compensate for memory difficulties in their everyday lives. Personalizing these treatments to suit the preferences and goals of an individual is critical to improving their engagement and sustainment, as well as maximizing the treatment's effectiveness. Robots have great potential to facilitate these training regimens and support people with cognitive impairments, their caregivers, and clinicians. This article examines how robots can adapt their behavior to be personalized to an individual in the context of cognitive neurorehabilitation. We provide an overview of existing robots being used to support neurorehabilitation and identify key principles for working in this space. We then examine state-of-the-art technical approaches for enabling longitudinal behavioral adaptation. To conclude, we discuss our recent work on enabling social robots to automatically adapt their behavior and explore open challenges for longitudinal behavior adaptation. This work will help guide the robotics community as it continues to provide more engaging, effective, and personalized interactions between people and robots.
Article
Full-text available
An interesting aspect of love and sex (and other types of interactions) with robots is that human beings often treat robots as animate and express emotions towards them. In this paper, we discuss two interpretations of why people experience emotions towards robots and tend to treat them as animate: naturalistic and antinaturalistic. We first provide a set of examples that illustrate human beings considering robots animate and experiencing emotions towards them. We then identify, reconstruct and compare naturalist and antinaturalist accounts of these attitudes and point out the functions and limitations of these accounts. Finally, we argue that in the case of emotional and ‘animating’ human–robot interactions, naturalist and antinaturalist accounts should be – as they most often are – considered complementary rather than competitive or contradictory.
Article
Full-text available
Teaching is inherently collaborative. The input teachers receive from colleagues, students, and administrators can influence curriculum choices and alter classroom dynamics. A teaching team is a group of professionals who choose to actively collaborate for a common instructional purpose (Cook & Friend, 1995). The model of shared teaching responsibilities, or co-teaching, has been widely applied in the K-12 setting, particularly in special education (Scruggs et al., 2007). Austin (2001) found that educators appreciated the availability of “another teacher’s expertise and viewpoint” (p. 251) in a co-teaching situation. Within higher education specifically, research suggests that collaboration, or co-teaching, is difficult, as questions surrounding power dynamics, shared responsibility, and individual expertise often emerge (Ferguson & Wilson, 2011; Morelock et al., 2017). Co-teaching essentially doubles the resources available to students and allows instructors to give more attention to classroom dynamics, but the paradigm is still largely centered on individual teachers. In this study, we investigate an alternative to the traditional way of thinking about expertise in the classroom. Specifically, when one member of a teaching team is a social robot, there may be additional interpersonal affordances and opportunities that enhance the learning experience.
Book
Full-text available
How people judge humans and machines differently, in scenarios involving natural disasters, labor displacement, policing, privacy, algorithmic bias, and more. How would you feel about losing your job to a machine? How about a tsunami alert system that fails? Would you react differently to acts of discrimination depending on whether they were carried out by a machine or by a human? What about public surveillance? How Humans Judge Machines compares people's reactions to actions performed by humans and machines. Using data collected in dozens of experiments, this book reveals the biases that permeate human-machine interactions. Are there conditions in which we judge machines unfairly? Is our judgment of machines affected by the moral dimensions of a scenario? Is our judgment of machines correlated with demographic factors such as education or gender? César Hidalgo and colleagues use hard science to take on these pressing technological questions. Using randomized experiments, they create revealing counterfactuals and build statistical models to explain how people judge artificial intelligence and whether they do it fairly. Through original research, How Humans Judge Machines brings us one step closer to understanding the ethical consequences of AI. Written by César A. Hidalgo, the author of Why Information Grows and coauthor of The Atlas of Economic Complexity (MIT Press), together with a team of social psychologists (Diana Orghian and Filipa de Almeida) and roboticists (Jordi Albo-Canals), How Humans Judge Machines presents a unique perspective on the nexus between artificial intelligence and society. Anyone interested in the future of AI ethics should explore the experiments and theories in How Humans Judge Machines.
Article
Full-text available
In a friendly interdisciplinary debate, we interrogate from several vantage points the question of “personhood” in light of contemporary and near-future forms of social AI. David J. Gunkel approaches the matter from a philosophical and legal standpoint, while Jordan Wales offers reflections theological and psychological. Attending to metaphysical, moral, social, and legal understandings of personhood, we ask about the position of apparently personal artificial intelligences in our society and individual lives. Re-examining the “person” and questioning prominent construals of that category, we hope to open new views upon urgent and much-discussed questions that, quite soon, may confront us in our daily lives.
Chapter
Full-text available
In 2017, the Protestant Church in Germany presented the robot priest “BlessU2” to the participants of the Deutscher Kirchentag in Wittenberg. This generated a number of important questions on key themes of religion(s) in digital societies: Are robots legitimized and authorized to pronounce blessings on humans—and why? To answer such questions, one must first define the interrelationship of technology, religion and the human being. Paul Tillich (1886–1965) referred to the polarization of autonomy and heteronomy by raising the issue of theonomy: the first step on the way to critical research on representing the divine in robotic technology.
Article
Whether to give rights to artificial intelligence (AI) and robots has been a sensitive topic since the European Parliament proposed that advanced robots could be granted "electronic personalities." Numerous scholars who favor or disfavor its feasibility have participated in the debate. This paper presents an experiment (N=1270) that 1) collects online users' first impressions of 11 possible rights that could be granted to autonomous electronic agents of the future and 2) examines whether debunking common misconceptions about the proposal modifies one's stance toward the issue. The results indicate that even though online users mainly disfavor AI and robot rights, they are supportive of protecting electronic agents from cruelty (i.e., they favor the right against cruel treatment). Furthermore, people's perceptions became more positive when given information about rights-bearing non-human entities or myth-refuting statements. The style used to introduce AI and robot rights significantly affected how the participants perceived the proposal, similar to the way metaphors function in creating laws. For robustness, we repeated the experiment with a more representative sample of U.S. residents (N=164) and found that the perceptions of online users and those of the general population are similar.
Article
There is an ongoing debate about whether humanoid robots should exist at all. The arguments tend to focus on the ethical claim that deception is always involved. However, little attention has been paid to the ontological reasons why humanoid robots are valuable in consciousness research. This paper examines the arguments and controversy around the ethical rejection of humanoid robotics, while also summarizing some of the landscape of 4e cognition, which highlights the ways our specific humanoid bodies, in our specific cultural, social, and physical environments, play an indispensable role in cognition, from conceptualization through communication. Ultimately, we argue that there is a compelling set of reasons to pursue humanoid robotics as a major research agenda in AI if the goal is to create an artificial conscious system that we will be able both to recognize as conscious and to communicate with successfully.
Article
Full-text available
Human-robot interaction for mobile robots is still in its infancy. As robots increase in capability and are able to perform more tasks autonomously, we need to think about the interactions that humans will have with robots, and about what software architecture and user interface designs can accommodate the human in the loop. This paper outlines a theory of human-robot interaction and proposes the information needed for maintaining the user's situational awareness. Keywords: human-robot interaction, situational awareness, human-computer interaction
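One way to picture the "information needed" is as a periodic status message the robot publishes to the operator interface; the fields below are illustrative assumptions, not the paper's specification:

```python
from dataclasses import dataclass, field
from enum import Enum
import time

class AutonomyMode(Enum):
    TELEOP = "teleop"
    SHARED = "shared"
    AUTONOMOUS = "autonomous"

@dataclass
class StatusReport:
    """One periodic robot-to-operator message supporting situational awareness."""
    robot_id: str
    timestamp: float
    pose: tuple                 # (x, y, heading) in the map frame
    battery_pct: float
    mode: AutonomyMode
    current_task: str
    alerts: list = field(default_factory=list)  # items needing operator attention

report = StatusReport(
    robot_id="ugv-1",
    timestamp=time.time(),
    pose=(3.2, 7.5, 90.0),
    battery_pct=72.0,
    mode=AutonomyMode.SHARED,
    current_task="waypoint_follow",
    alerts=["low GPS confidence"],
)
```

The autonomy-mode field matters most for the human-in-the-loop case: the operator needs to know, at a glance, who is in control right now.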
Article
This article was written for scholars who do not engage in qualitative research and/or who are not familiar with its methods and epistemologies. It focuses on naturalistic qualitative research with families. An overview of the goals and procedures of qualitative research is presented first, followed by a discussion of the linkages between epistemologies and methodology. Guidelines for the main steps in evaluating qualitative family papers are then reviewed, complemented by an overview of problems frequently encountered both by reviewers and by authors of such papers. The variety of qualitative epistemologies and methods in family research is highlighted, even though our focus is limited to epistemologies leading to naturalistic fieldwork.
Article
Five common themes were derived from the literature on effective work groups, and characteristics representing the themes were then related to effectiveness criteria. The themes included job design, interdependence, composition, context, and process, and comprised 19 group characteristics that were assessed by employees and managers. Effectiveness criteria included productivity, employee satisfaction, and manager judgments. Data were collected from 391 employees, 70 managers, and archival records for 80 work groups in a financial organization. Results showed that all three effectiveness criteria were predicted by the characteristics, and nearly all characteristics predicted some of the effectiveness criteria. The job design and process themes were slightly more predictive than the interdependence, composition, and context themes. Implications for designing effective work groups were discussed, and a 54-item measure of the 19 characteristics was presented for future research.
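As a rough illustration of the kind of analysis described (the data below are invented purely to show the mechanics, not the study's results), one can regress an effectiveness criterion on theme-level scores:

```python
import numpy as np

# Invented theme scores for six work groups (rows) on the five themes
# (columns): job design, interdependence, composition, context, process.
X = np.array([
    [4.1, 3.2, 3.8, 3.5, 4.0],
    [2.9, 3.0, 2.7, 3.1, 2.8],
    [3.6, 3.9, 3.4, 3.3, 3.7],
    [4.4, 3.5, 4.0, 3.8, 4.2],
    [3.1, 2.8, 3.2, 3.0, 3.1],
    [3.9, 3.6, 3.7, 3.4, 3.9],
])
productivity = np.array([4.0, 2.6, 3.5, 4.3, 3.0, 3.8])  # one criterion

# Ordinary least squares with an intercept term
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, productivity, rcond=None)
themes = ["intercept", "job_design", "interdependence",
          "composition", "context", "process"]
print(dict(zip(themes, coef.round(2))))
```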
Article
This article investigates the commonalities between motivations and emotions as evidenced in a broad range of animal models, including humans. In particular, it focuses on how these models can have utility within the context of working robotic systems. Behavior-based control serves as the primary vehicle through which emotions and motivations are integrated into robots ranging from hexapods to wheeled robots to humanoids. The article reports a progression of motivational/emotional models and robotic experiments, starting from relatively low-level organisms, such as the sowbug and praying mantis, and moving upward to human interaction. Together, these capture a wide set of affective phenomena, including social attachment, emotional behavior in support of interspecies interaction, multiscale temporal affect, and various motivational drives such as hunger and fear.
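Behavior-based control with motivational modulation can be sketched compactly: independent behaviors each emit a motor vector, and drive levels scale their gains before blending. The toy below is an assumed illustration of the general scheme, not a model from the article; all names and gain formulas are hypothetical:

```python
def avoid_obstacle(sensors):
    """Motor vector pointing away from the nearest obstacle."""
    ox, oy = sensors["nearest_obstacle"]
    return (-ox, -oy)

def seek_food(sensors):
    """Motor vector toward a detected food source."""
    fx, fy = sensors["food_direction"]
    return (fx, fy)

def motivational_gains(drives):
    """Drives in [0, 1] modulate behavior weights: fear amplifies avoidance,
    hunger amplifies foraging -- loosely analogous to sowbug/mantis models."""
    return [1.0 + 2.0 * drives["fear"], 1.0 + 2.0 * drives["hunger"]]

def blend(outputs, gains):
    """Weighted vector sum -- the behavior-based fusion step."""
    x = sum(g * vx for g, (vx, _) in zip(gains, outputs))
    y = sum(g * vy for g, (_, vy) in zip(gains, outputs))
    return (x, y)

sensors = {"nearest_obstacle": (0.5, 0.1), "food_direction": (0.0, 1.0)}
drives = {"fear": 0.8, "hunger": 0.2}
outputs = [avoid_obstacle(sensors), seek_food(sensors)]
command = blend(outputs, motivational_gains(drives))  # e.g., (-1.3, 1.14)
```

Because the drives only rescale gains, the same behavior repertoire yields qualitatively different overall conduct as the robot's internal state shifts, which is the core point of coupling motivation to behavior-based control.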