Chapter

The Role of Gesture in Social Telepresence Robots—A Scenario of Distant Collaborative Problem-Solving


Abstract

Human–robot interaction is a well-studied research field today; robots range from tele-operators and avatars to robots with social characteristics. In this review paper, we first present related work on tele-operation, mobile robotic telepresence, and social robots. We then focus on the role of gestures and body language in robotics, and more precisely on their importance for communication in collaborative settings. In our scenario, a group of human users works on collaborative problem-solving around a tangible user interface (TUI), which employs physical artifacts both as “representations” of and “controls” for computational media. The same setup exists at a second, spatially separate location. We extend this scenario by placing an avatar robot in each of the two locations that represents the remote team members and mirrors their actions, gaze, and gestures. Our goal in this paper is to give an overview of current solutions that provide a sense of being in a different place and to describe our future scenario of an avatar robot solving a problem on a TUI collaboratively with human users. We discuss technical and social questions related to the acceptance of avatar robots at work: which properties they should have, to what extent the current state of the art in social robotics is applicable, and which additional technical components still need to be developed.

References
Conference Paper
Social robots are robots interacting with humans not only in collaborative settings, but also in personal settings like domestic services and healthcare. Some social robots simulate feelings (companions) while others simply help with lifting (assistants). However, they often incite both fascination and fear: what abilities should social robots have, and what should remain exclusive to humans? We provide a historical background on the development of robots and related machines (1), discuss examples of social robots (2), and present an expert study on their desired future abilities and applications (3) conducted within the Forum of the European Active and Assisted Living Programme (AAL). The findings indicate that most technologies required for social robots' emotion sensing are considered ready. For care robots, the experts approve health-related tasks like drawing blood, while they prefer humans to do nursing tasks like washing. On a larger societal scale, the acceptance of social robots increases highly significantly with familiarity, making health robots and even military drones more acceptable than sex robots or child companion robots for childless couples. Accordingly, the acceptance of social robots seems to decrease with the level of face-to-face emotions involved.
Conference Paper
Our research aims to develop a method for evaluating how invasion of personal space by a robot, within an appropriate social context, affects human comfort. We contribute an early design of a testbed to evaluate how comfort changes when a robot invades personal space during a collaborative task within a shared workspace. Our testbed allows the robot to reach into the human's personal space at different distances and urgency levels. We present a collaborative task testbed using a humanoid robot and future directions for this work.
Article
Collaborative problem solving is a skill that has become very important in our everyday lives and is constantly gaining attention in educational settings. In this paper, we present COPSE: a novel and unique software framework for instantiating Microworlds as collaborative problem-solving activities on tangible tabletop interfaces. The framework provides three types of building blocks: widgets (provide input and localized feedback), equations (define the model), and scenes (visualize feedback), which can be specified in the form of structured text. The aim of COPSE is to simplify the processes of creating, adjusting, and reusing custom Microworld scenarios. We describe the structure of the framework, provide an example of a scenario, and report on a case study in which we used COPSE together with 33 teachers to build new scenarios on the fly.
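As an illustration of how such building blocks might compose, the following Python sketch models a minimal microworld with the three block types named in the abstract. The class names, the water-tank example, and the mapping from structured text to objects are assumptions for illustration, not COPSE's published API.

from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical sketch of COPSE's three building-block types; the names
# and the water-tank example are illustrative, not the published API.
@dataclass
class Widget:            # provides input and localized feedback
    name: str
    value: float = 0.0

@dataclass
class Equation:          # defines the model
    output: str
    compute: Callable[[Dict[str, float]], float]

@dataclass
class Scene:             # visualizes feedback
    watched: str
    def render(self, state: Dict[str, float]) -> None:
        print(f"{self.watched} = {state[self.watched]:.2f}")

# A minimal water-tank microworld: two inflow widgets, one model equation.
inflow_a, inflow_b = Widget("inflow_a", 2.0), Widget("inflow_b", 1.5)
level = Equation("level", lambda s: s["inflow_a"] + s["inflow_b"])
scene = Scene("level")

state = {w.name: w.value for w in (inflow_a, inflow_b)}
state[level.output] = level.compute(state)
scene.render(state)      # -> level = 3.50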
Article
This paper investigates the effects of relative position and proxemics in the engagement process involved in Human-Robot collaboration. We evaluate the differences between two experimental placement conditions (frontal vs. lateral) for an autonomous robot in a collaborative task with a user across two different types of robot behaviours (helpful vs. neutral). The study evaluated placement and behaviour types around a touch table with 80 participants by measuring gaze, smiling behaviour, distance from the task, and finally electrodermal activity. Results suggest an overall user preference and higher engagement rates with the helpful robot in the frontal position. We discuss how behaviours and position of the robot relative to a user may affect user engagement and collaboration, in particular when the robot aims to provide help via socio-emotional bonding.
Conference Paper
In an era in which robots take part in our daily living activities, humans have to trust robots in home environments. We aim to create guidelines that allow humans to trust robots to look after their well-being by adopting human-like behaviours. We want to conduct a Human-Robot Interaction (HRI) study to assess whether a certain degree of transparency in the robot's actions, the use of social behaviours, and natural communication can affect humans' sense of trust in and companionship towards robots. However, trust can change over time due to different factors, e.g. due to perceiving erroneous robot behaviours. We believe that the magnitude and the timing of an error during an interaction may have different impacts, resulting in different scales of loss of trust and of restoring lost trust.
Article
To enable situated human–robot interaction (HRI), an autonomous robot must both understand and control proxemics—the social use of space—to employ natural communication mechanisms analogous to those used by humans. This work presents a computational framework of proxemics based on data-driven probabilistic models of how social signals (speech and gesture) are produced (by a human) and perceived (by a robot). The framework and models were implemented as autonomous proxemic behavior systems for sociable robots, including: (1) a sampling-based method for robot proxemic goal state estimation with respect to human–robot distance and orientation parameters, (2) a reactive proxemic controller for goal state realization, and (3) a cost-based trajectory planner for maximizing automated robot speech and gesture recognition rates along a path to the goal state. Evaluation results indicate that the goal state estimation and realization significantly improve upon past work in human–robot proxemics with respect to “interaction potential”—predicted automated speech and gesture recognition rates as the robot enters into and engages in face-to-face social encounters with a human user—illustrating their efficacy to support richer robot perception and autonomy in HRI.
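To make the sampling-based goal state estimation step concrete, here is a minimal Python sketch under stated assumptions: candidate (distance, orientation) states are sampled and scored against a toy interaction-potential model. The Gaussian preference parameters are placeholders; the published framework learns its models from human data.

import numpy as np

# Illustrative sketch of sampling-based proxemic goal state estimation:
# sample candidate (distance, orientation) states and keep the one that
# maximizes a toy "interaction potential". The parameters below are
# placeholders, not the data-driven models of the paper.
rng = np.random.default_rng(0)

def interaction_potential(distance, orientation):
    d_score = np.exp(-0.5 * ((distance - 1.2) / 0.3) ** 2)   # ~1.2 m preferred
    o_score = np.exp(-0.5 * (orientation / 0.4) ** 2)        # face the person
    return d_score * o_score

samples = rng.uniform([0.5, -np.pi], [3.0, np.pi], size=(500, 2))
scores = [interaction_potential(d, o) for d, o in samples]
goal_distance, goal_orientation = samples[int(np.argmax(scores))]
print(f"goal: {goal_distance:.2f} m, {np.degrees(goal_orientation):.1f} deg")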
Article
Enabling robots to seamlessly operate as part of smart spaces is an important and extended challenge for robotics R&D and a key enabler for a range of advanced robotic applications, such as Ambient Assisted Living (AAL) and home automation. The integration of these technologies is currently being pursued from two largely distinct viewpoints: on the one hand, people-centred initiatives focus on improving the user's acceptance by tackling human-robot interaction (HRI) issues, often adopting a social robotic approach, and by giving the designer and, to a limited degree, the final user(s) control over personalization and product customisation features. On the other hand, technologically driven initiatives are building impersonal but intelligent systems that are able to pro-actively and autonomously adapt their operations to fit changing requirements and evolving users' needs, but which largely ignore and do not leverage human-robot interaction and may thus lead to poor user experience and user acceptance. In order to inform the development of a new generation of smart robotic spaces, this paper analyses and compares these different research strands with a view to proposing possible integrated solutions with both advanced HRI and online adaptation capabilities.
Article
In this paper, we present a method for recognizing human activities using information sensed by an RGB-D camera, namely the Microsoft Kinect. Our approach is based on the estimation of relevant joints of the human body by means of the Kinect; three different machine learning techniques, i.e., K-means clustering, support vector machines, and hidden Markov models, are combined to detect the postures involved while performing an activity, to classify them, and to model each activity as a spatiotemporal evolution of known postures. Experiments were performed on the Kinect Activity Recognition Dataset, a new dataset, and on CAD-60, a public dataset. Experimental results show that our solution outperforms four relevant works based on RGB-D image fusion, hierarchical Maximum Entropy Markov Models, Markov Random Fields, and Eigenjoints, respectively. The performance we achieved, i.e., precision/recall of 77.3% and 76.7%, and the ability to recognize the activities in real time show promise for applied use.
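A reduced sketch of this kind of pipeline is shown below: K-means quantizes per-frame joint vectors into discrete postures, and each activity is scored as a Markov chain over posture labels. This stands in for the paper's fuller K-means + SVM + hidden-Markov-model combination; the data shapes and training data are synthetic placeholders.

import numpy as np
from sklearn.cluster import KMeans

# Reduced stand-in for the paper's pipeline: K-means turns per-frame
# joint vectors into discrete "postures"; each activity is scored as a
# Markov chain over posture labels. Data below are synthetic.
N_POSTURES = 8
rng = np.random.default_rng(0)

def fit_chain(label_seqs, n=N_POSTURES, eps=1e-3):
    trans = np.full((n, n), eps)                 # smoothed transition counts
    for seq in label_seqs:
        for a, b in zip(seq[:-1], seq[1:]):
            trans[a, b] += 1
    return trans / trans.sum(axis=1, keepdims=True)

def log_likelihood(seq, trans):
    return sum(np.log(trans[a, b]) for a, b in zip(seq[:-1], seq[1:]))

# frames: (n_frames, n_joints * 3) arrays of joint coordinates (placeholders)
train = {"wave": [rng.random((50, 60))], "drink": [rng.random((50, 60))]}
kmeans = KMeans(n_clusters=N_POSTURES, n_init=10).fit(
    np.vstack([f for seqs in train.values() for f in seqs]))
models = {act: fit_chain([kmeans.predict(f) for f in seqs])
          for act, seqs in train.items()}

test_labels = kmeans.predict(rng.random((50, 60)))
print(max(models, key=lambda a: log_likelihood(test_labels, models[a])))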
Article
In the past decade, there has been a resurgence in the field of unobtrusive cardiomechanical assessment, through advancing methods for measuring and interpreting ballistocardiogram (BCG) and seismocardiogram (SCG) signals. Novel instrumentation solutions have enabled BCG and SCG measurement outside of clinical settings: in the home, in the field, and even in microgravity. Customized signal processing algorithms have led to reduced measurement noise, clinically relevant feature extraction, and signal modeling. Finally, human subjects physiology studies have been conducted using these novel instruments and signal processing tools with promising clinically relevant results. This paper reviews the recent advances in these areas of modern BCG and SCG research.
Conference Paper
This paper is concerned with the modality of gestures in communication between an intelligent wheelchair and a human user. Gestures can enable and facilitate human-robot interaction (HRI) and go beyond familiar pointing gestures, also considering context-related, subtle, implicit gestural and vocal instructions that can trigger a service. Some findings of a user study related to gestures are presented in this paper; the study took place at the Bremen Ambient Assisted Living Lab, a 60 m² apartment suitable for the elderly and people with physical or cognitive impairments.
Article
This paper introduces and empirically evaluates two scaling functions to alter a robot’s physical movements based on proximity to a human. Previous research has focused on individual aspects of proxemics, like the appropriate distance to maintain from a human, but has not explored autonomous methods to adapt robot behavior as proximity changes. This paper proposes that robots in a social role should modify their behavior using a continuous function mapped to proximity. The method developed calculates a gain value from proximity readings, which is used to shape the execution of active behaviors on the robot. In order to identify the effects of different mappings from proximity to gain value, two different scaling functions were implemented on an affective search and rescue robot. The findings from a 72 participant study, in a high-fidelity mock disaster site, are examined with attention given to a new measure to determine proxemic awareness. The results indicated that for attributes of intelligence, likability, proxemic awareness, and submissiveness, a logarithmic-based scaling function is preferred over a linear-based scaling function, and over no scaling function. In areas of participant comfort and participant stress, the results indicated both logarithmic and linear scaling functions were preferred to no scaling.
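A minimal sketch of the two kinds of proximity-to-gain mappings compared in the study might look as follows; the exact functions and constants are not given in the abstract, so these are plausible placeholders.

import math

# Sketch of two proximity-to-gain mappings of the kind compared in the
# study; the constants are placeholders. The gain scales the amplitude
# of the robot's active behaviors as a person approaches.
MAX_RANGE = 3.0  # metres at which behaviors run at full amplitude

def linear_gain(distance):
    return max(0.0, min(1.0, distance / MAX_RANGE))

def log_gain(distance):
    # keeps more amplitude at mid-range, tapering only very close in
    return max(0.0, min(1.0, math.log1p(distance) / math.log1p(MAX_RANGE)))

for d in (0.3, 1.0, 2.0, 3.0):
    print(f"{d:.1f} m -> linear {linear_gain(d):.2f}, log {log_gain(d):.2f}")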
Conference Paper
The new generation of watches is smart. Smart watches are connected to the internet and provide sensor functionality that allows enhanced human-computer interaction. Smart watches offer gesture interaction and permanent monitoring of physical activities. In comparison to other electronic home consumer devices with integrated sensors, smart watches provide monitoring data 24 hours per day; many watches are water resistant and can be worn constantly. The integrated sensors vary in performance and are not intended to distinguish between different states of activity and inactivity. This paper reports on identified requirements on smart watch sensors for the detection of activity and inactivity as well as sleep. A new measurement quantity is introduced, and applications such as heartbeat detection and detection of the wearing situation are presented.
Article
A method that remotely measures blood oxygen saturation through two cameras under regular lighting is proposed and experimentally demonstrated. Two narrow-band filters with visible wavelengths of 660 nm and 520 nm are mounted on two cameras respectively, which are then used to capture two photoplethysmographic (PPG) signals from the subject simultaneously. The data gathered from this system, including both blood oxygen saturation and heart rate, is compared to the output of a traditional finger blood volume pulse (BVP) sensor worn by the subject at the same time. Results of the comparison showed that the data from the new, non-contact system is consistent and comparable with the BVP sensor. Compared to other camera-based measuring methods, which require additional close-up lighting, this new method works under regular lighting conditions and is therefore more stable and easier to implement. This is the first demonstration of an accurate video-based method for non-contact oxygen saturation measurement using ambient light at visible wavelengths of 660 nm and 520 nm, free from interference of light in other bands.
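The underlying computation is typically a "ratio of ratios" of the pulsatile (AC) to baseline (DC) components of the two PPG channels, as in the hedged sketch below; the calibration coefficients are hypothetical and would need fitting against a reference oximeter for this wavelength pair.

import numpy as np

# Sketch of the standard "ratio of ratios" behind camera-based SpO2
# estimation from two narrow-band PPG channels. A and B are placeholder
# calibration coefficients, not values from the paper.
A, B = 110.0, 25.0

def spo2(ppg_660, ppg_520):
    ac660, dc660 = np.std(ppg_660), np.mean(ppg_660)
    ac520, dc520 = np.std(ppg_520), np.mean(ppg_520)
    r = (ac660 / dc660) / (ac520 / dc520)
    return A - B * r

t = np.linspace(0, 10, 300)                       # ~10 s of video frames
red = 100 + 2 * np.sin(2 * np.pi * 1.2 * t)       # synthetic pulsatile signals
green = 120 + 3 * np.sin(2 * np.pi * 1.2 * t)
print(f"estimated SpO2: {spo2(red, green):.1f}%")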
Article
Proxemic Interactions envision interactive computer systems that exploit people's and devices' spatial relationships (proxemics) to provide more natural and seamless interactions with ubicomp technology. The concept builds upon fundamental proxemic theories about people's understanding and use of the personal space around them. In this paper, we focus on how nuances of the proxemic theories and concepts of Proxemic Interaction can be applied to address six key challenges of ubicomp interaction design, considering how we can leverage information on fine-grained proxemic relationships. We also discuss how previous proxemic-aware systems addressed these challenges.
Article
Recent research in the field of Human Computer Interaction aims at recognizing the user's emotional state in order to provide a smooth interface between humans and computers. This would make life easier and can be used in a vast range of applications in areas such as education and medicine. Human emotions can be recognized by several approaches such as gesture, facial images, physiological signals, and neuroimaging methods. Most researchers have developed user-dependent emotion recognition systems and achieved high classification rates; very few have tried to develop user-independent systems, which have obtained lower classification rates. Efficient emotion stimulation methods, larger data samples, and intelligent signal processing techniques are essential for improving the classification rate of user-independent systems. In this paper, we present a review of emotion recognition using physiological signals. The various theories of emotion, emotion recognition methodology, and the current advancements in emotion research are discussed in subsequent sections. This provides an insight into the current state of research and its challenges in emotion recognition using physiological signals, so that research can be advanced to obtain better recognition.
Article
Educational computing has much to offer mathematics education, particularly when software is designed which provides students with the opportunity to go beyond practicing basic skills and solving routine problems, and instead supports mathematical discovery and exploration. Although drill and practice is still the category of software employed most frequently by mathematics and science teachers who use computers [1], software which functions as a cognitive tool for exploration and sense-making is becoming more evident in both classroom and research environments [2, 3]. The purpose of this article is to report on the results of a research study involving an exploratory learning environment, or mathematical microworld, for transformation geometry. The goal is to outline the principles underlying the design of the environment as well as to present an analysis of the learning of a group of middle school students who interacted with the microworld over a period of several weeks.
Article
Mixed Presence Groupware (MPG) supports both co-located and distributed participants working over a shared visual workspace. It does so by connecting multiple single-display groupware workspaces together through a shared data structure. Our implementation and observations of MPG systems expose two problems: the first is display disparity, where connecting heterogeneous displays introduces issues in how people are seated around the workspace and how workspace artifacts are oriented; the second is presence disparity, where the perceived presence of collaborators is markedly different depending on whether they are co-located or remote. Presence disparity is likely caused by inadequate consequential communication between remote participants, which in turn disrupts group collaborative and communication dynamics. To mitigate display and presence disparity problems, we determine virtual seating positions and replace conventional telepointers with digital arm shadows that extend from a person's side of the table to their pointer location. ACM Classification: H.5.3 (Groups and organizational interfaces - Computer supported cooperative work).
Article
Mobile robotic telepresence (MRP) systems incorporate video conferencing equipment onto mobile robot devices which can be steered from remote locations. These systems, which are primarily used to promote social interaction between people, are becoming increasingly popular within certain application domains such as health care environments, independent living for the elderly, and office environments. In this paper, an overview of the various systems, application areas, and challenges found in the literature concerning mobile robotic telepresence is provided. The survey also proposes a set of terms for the field, as there is currently a lack of standard terminology for the different concepts related to MRP systems. Further, this paper provides an outlook on the various research directions for developing and enhancing mobile robotic telepresence systems per se, as well as for evaluating the interaction in laboratory and field settings. Finally, the survey outlines a number of design implications for the future of mobile robotic telepresence systems for social interaction.
Article
Utilization of computer tools in linguistic research has gained importance with the maturation of media frameworks for the handling of digital audio and video. The increased use of these tools in gesture, sign language, and multimodal interaction studies has led to stronger requirements on the flexibility, the efficiency, and in particular the time accuracy of annotation tools. This paper describes the efforts made to make ELAN a tool that meets these requirements, with special attention to the developments in the area of time accuracy. Subsequent sections give an overview of other enhancements in the latest versions of ELAN that make it a useful tool in multimodality research.
Article
We present steps toward a conceptual framework for tangible user interfaces. We introduce the MCRpd interaction model for tangible interfaces, which relates the role of physical and digital representations, physical control, and underlying digital models. This model serves as a foundation for identifying and discussing several key characteristics of tangible user interfaces. We identify a number of systems exhibiting these characteristics, and situate these within 12 application domains. Finally, we discuss tangible interfaces in the context of related research themes, both within and outside of the human-computer interaction domain.
Conference Paper
As robots enter the everyday physical world of people, it is important that they abide by society's unspoken social rules such as respecting people's personal spaces. In this paper, we explore issues related to human personal space around robots, beginning with a review of the existing literature in human-robot interaction regarding the dimensions of people, robots, and contexts that influence human-robot interactions. We then present several research hypotheses which we tested in a controlled experiment (N = 30). Using a 2 (robotics experience vs. none: between-participants) × 2 (robot head oriented toward a participant's face vs. legs: within-participants) mixed design experiment, we explored the factors that influence proxemic behavior around robots in several situations: (1) people approaching a robot, (2) people being approached by an autonomously moving robot, and (3) people being approached by a teleoperated robot. We found that personal experience with pets and robots decreases a person's personal space around robots. In addition, when the robot's head is oriented toward the person's face, it increases the minimum comfortable distance for women, but decreases the minimum comfortable distance for men. We also found that the personality trait of agreeableness decreases personal spaces when people approach robots, while the personality trait of neuroticism and having negative attitudes toward robots increase personal spaces when robots approach people. These results have implications for both human-robot interaction theory and design.
Conference Paper
Our current understanding of human interaction with hybrid or augmented environments is very limited. Here we focus on 'tangible interaction', denoting systems that rely on embodied interaction, tangible manipulation, physical representation of data, and embeddedness in real space. This synthesis of prior 'tangible' definitions enables us to address a larger design space and to integrate approaches from different disciplines. We introduce a framework that focuses on the interweaving of the material/physical and the social, contributes to understanding the (social) user experience of tangible interaction, and provides concepts and perspectives for considering the social aspects of tangible interaction. This understanding lays the ground for evolving knowledge on collaboration-sensitive tangible interaction design. Lastly, we analyze three case studies, using the framework, thereby illustrating the concepts and demonstrating their utility as analytical tools.
Conference Paper
As geographically distributed teams become increasingly common, there are more pressing demands for communication work practices and technologies that support distributed collaboration. One set of technologies emerging on the commercial market is mobile remote presence (MRP) systems: physically embodied videoconferencing systems that remote workers use to drive through a workplace, communicating with the locals there. Our interviews, observations, and survey results from people who had 2-18 months of MRP use showed how remotely controlled mobility enabled remote workers to live and work with local coworkers almost as if they were physically there. The MRP supported informal communications and connections between distributed coworkers. We also found that the mobile embodiment of the remote worker evoked orientations toward the MRP both as a person and as a machine, leading to the formation of new usage norms among remote and local coworkers.
Conference Paper
The design of an affect recognition system for socially perceptive robots relies on representative data: human-robot interaction in naturalistic settings requires an affect recognition system to be trained and validated with contextualised affective expressions, that is, expressions that emerge in the same interaction scenario of the target application. In this paper we propose an initial computational model to automatically analyse human postures and body motion to detect engagement of children playing chess with an iCat robot that acts as a game companion. Our approach is based on vision-based automatic extraction of expressive postural features from videos capturing the behaviour of the children from a lateral view. An initial evaluation, conducted by training several recognition models with contextualised affective postural expressions, suggests that patterns of postural behaviour can be used to accurately predict the engagement of the children with the robot, thus making our approach suitable for integration into an affect recognition system for a game companion in a real world scenario.
Conference Paper
To seamlessly integrate into the human physical and social environment, robots must display appropriate proxemic behavior; that is, they must follow societal norms in establishing their physical and psychological distancing with people. Social-scientific theories suggest competing models of human proxemic behavior, but all conclude that individuals' proxemic behavior is shaped by the proxemic behavior of others and by the individual's psychological closeness to them. The present study explores whether these models can also explain how people physically and psychologically distance themselves from robots, and it suggests guidelines for the future design of proxemic behaviors for robots. In a controlled laboratory experiment, participants interacted with Wakamaru to perform two tasks that examined physical and psychological distancing of the participants. We manipulated the likeability (likeable/dislikeable) and gaze behavior (mutual gaze/averted gaze) of the robot. Our results on physical distancing showed that participants who disliked the robot compensated for the increase in the robot's gaze by maintaining a greater physical distance from the robot, while participants who liked the robot did not differ in their distancing across gaze conditions. The results on psychological distancing suggest that those who disliked the robot also disclosed less to it. Our results offer guidelines for the design of appropriate proxemic behaviors for robots so as to facilitate effective human-robot interaction.
Conference Paper
This paper proposes a model of approach behavior with which a robot can initiate conversation with people who are walking. We developed the model by learning from the failures of a simplistic approach behavior used in a real shopping mall. Sometimes people were unaware of the robot's presence, even when it spoke to them. Sometimes, people were not sure whether the robot was really trying to start a conversation, and they did not start talking with it even though they displayed interest. To prevent such failures, our model includes the following functions: predicting the walking behavior of people, choosing a target person, planning its approaching path, and nonverbally indicating its intention to initiate a conversation. The approach model was implemented and used in a real shopping mall. The field trial demonstrated that our model significantly improves the robot's performance in initiating conversations. Categories and Subject Descriptors: H.5.2 [Information Interfaces and Presentation]: User Interfaces - Interaction styles.
Conference Paper
Based on a study of the engagement process between humans, we have developed and implemented an initial computational model for recognizing engagement between a human and a humanoid robot. Our model contains recognizers for four types of connection events involving gesture and speech: directed gaze, mutual facial gaze, conversational adjacency pairs and backchannels. To facilitate integrating and experimenting with our model in a broad range of robot architectures, we have packaged it as a node in the open-source Robot Operating System (ROS) framework. We have conducted a preliminary validation of our computational model and implementation in a simple human-robot pointing game.
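A hedged sketch of how such a recognizer can be packaged as a ROS node is given below, using the standard rospy API. The topic names, message types, and the simple decaying engagement score are illustrative assumptions, not the authors' implementation.

#!/usr/bin/env python
import rospy
from std_msgs.msg import String, Float32

# Sketch of an engagement-recognizer node: upstream perception publishes
# connection events ("directed_gaze", "mutual_facial_gaze",
# "adjacency_pair", "backchannel") and this node maintains a decaying
# engagement score. Topics and scoring rule are illustrative.
score = 0.0

def on_event(msg):
    global score
    weights = {"directed_gaze": 0.3, "mutual_facial_gaze": 0.4,
               "adjacency_pair": 0.2, "backchannel": 0.1}
    score = min(1.0, score + weights.get(msg.data, 0.0))

if __name__ == "__main__":
    rospy.init_node("engagement_recognizer")
    rospy.Subscriber("connection_events", String, on_event)
    pub = rospy.Publisher("engagement", Float32, queue_size=10)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        score *= 0.99                # decay without fresh events
        pub.publish(Float32(score))
        rate.sleep()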
Conference Paper
To explore possible robot tasks in daily life, we developed a guide robot for a shopping mall and conducted a field trial with it. The robot was designed to interact naturally with customers and to affectively provide shopping information. It was also designed to repeatedly interact with people to build rapport; since a shopping mall is a place people visit repeatedly, it provides the chance to explicitly design a robot for multiple interactions. For this capability, we used RFID tags for person identification. The robot was semi-autonomous, partially controlled by a human operator, to cope with the difficulty of speech recognition in a real environment and to handle unexpected situations. A field trial was conducted at a shopping mall for 25 days to observe how the robot performed this task and how people interacted with it. The robot interacted with approximately 100 groups of customers each day. We invited customers to sign up for RFID tags, and those who participated answered questionnaires. The results revealed that 63 out of 235 people in fact went shopping based on the information provided by the robot. The experimental results suggest promising potential for robots working in shopping malls.
Conference Paper
Tangible user interfaces (TUIs) provide physical form to digital information and computation, facilitating the direct manipulation of bits. Our goal in TUI development is to empower collaboration, learning, and design by using digital technology and at the same time taking advantage of human abilities to grasp and manipulate physical objects and materials. This paper discusses a model of TUI, key properties, genres, applications, and summarizes the contributions made by the Tangible Media Group and other researchers since the publication of the first Tangible Bits paper at CHI 1997. http://tangible.media.mit.edu/
Chapter
This paper proposes gesture performance as one main channel for assessing collaboration skills while multiple users solve a problem collaboratively on a tangible user interface. Collaborative problem solving incorporates two dimensions: complex problem solving and collaboration. Thus, the technology-based assessment of collaborative problem solving includes assessing both problem-solving and collaboration skills. Particularly for assessing collaboration skills, we consider gesture performance an important indicator. We differentiate between physical 3D mid-air gestures and manipulative gestures; for the latter, we developed a gesture recognition application using Kinect. The method we follow for object and gesture recognition is to merge the logging files from our tangible interface software framework (object recognition) with the Kinect log files (gesture recognition) into one file, as sketched below. The application can analyze the number of object manipulations with respect to the time axis, subject/participant, and handedness.
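A minimal sketch of that merging step, assuming pandas and illustrative column names, could align each Kinect gesture event with the nearest preceding TUI object event on a shared timestamp axis:

import pandas as pd

# Align Kinect gesture events with the nearest preceding TUI object
# event on a shared timestamp axis. Column names are assumptions.
tui = pd.DataFrame({"t": [0.10, 0.42, 0.95],
                    "object": ["slider", "token_a", "token_b"]})
kinect = pd.DataFrame({"t": [0.12, 0.44, 0.97],
                       "hand": ["left", "right", "right"],
                       "gesture": ["grasp", "rotate", "translate"]})

merged = pd.merge_asof(kinect.sort_values("t"), tui.sort_values("t"),
                       on="t", direction="backward")
print(merged)                        # one row per manipulation
print(merged.groupby("hand").size()) # e.g., handedness counts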
Article
Aims and objectives: This article examines the attitudes of Finnish home care registered nurses, licenced vocational nurses, and other health and social care personnel towards the introduction and use of care robots in home care. Background: The significance of care robotics has been highlighted in recent years. However, personnel-related social psychological barriers to the introduction of care robots have been given very little study. Design: Cross-sectional study conducted by questionnaire. The theoretical framework of the study is based on Ajzen's theory of planned behaviour and the research discussion about attitudes towards robots. Methods: The research data was collected in five municipalities in different parts of Finland in 2016, and the questionnaire was answered by a total of 200 home care workers. The research data was analysed using exploratory factor analysis, Pearson product-moment correlation, one-way analysis of variance, and linear regression analysis. Results: The results are consistent with Ajzen's theory and previous studies on the acceptance of information systems in health care. Personnel behavioural intentions related to the introduction of robot applications in home care are influenced by their personal appreciation of the usefulness of robots, the expectations of their colleagues and supervisors, as well as by their own perceptions of their capacity to learn to use care robots. In particular, personnel emphasized the value of care robots in providing reminders and guidance, as well as in promoting the safety of the elderly. Conclusions: The study shows that an intimate human-robot relationship can pose a challenge from the perspective of the acceptance of care robots.
Article
This study proposes a novel prosthetic hand that can artificially replace human sensation using haptic transplant technology. Conventional motorized prostheses are unable to precisely control the grasping force of the hand due to the lack of force sensation. In this paper, a haptic prosthetic hand is developed that allows for intuitive operation and precise force adjustment. These functions are realized by artificially transplanting the tactile sensation of a healthy part of the user's body to the amputated part. The haptic transplant technology transmits the force sensation of the prosthesis to the master interface attached to a healthy part of the amputee's body using bilateral control without a force sensor. Furthermore, variable compliance control is proposed for flexible adaptation to the shape of the object being grasped. The effectiveness of the proposed system was experimentally verified by comparing its controllability with that of a myoelectric prosthesis. In summary, an artificial replacement for human sensation was developed, and the intuitive operability and flexibility of the proposed system were confirmed.
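The variable compliance idea can be sketched in a few lines: the commanded gripper position yields to the measured grasp force in proportion to an adjustable stiffness, so soft objects are squeezed less. The gains and the toy contact model below are placeholders, not the paper's controller.

# Minimal sketch of variable compliance control for a gripper; the
# contact model and gains are toy placeholders.
def compliant_command(x_des, force, stiffness):
    return x_des - force / stiffness     # yield in proportion to contact force

x_des = 0.02                             # desired closing displacement (m)
for stiffness in (200.0, 1000.0):        # soft vs. stiff grasp (N/m)
    x = 0.0
    for _ in range(50):                  # toy contact: force grows with x
        force = 500.0 * max(0.0, x - 0.005)
        x += 0.2 * (compliant_command(x_des, force, stiffness) - x)
    print(f"stiffness {stiffness:.0f} N/m -> settled force {force:.2f} N")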
Conference Paper
Telepresence is one of the emerging fields in robotics. The objective of this work is to enhance a telepresence robot into a Telepresence Robot Doctor that allows direct connection to medical sensors, such as electronic heart pulse rate and temperature sensors, for transmitting medical data to a remote physician. The physician can thus discuss the mode of treatment and communicate with patients remotely. This helps to improve the efficiency of medical diagnosis and treatment planning during non-life-threatening emergencies. This project has contributed to the doctor-robot interface, where the doctor/user can control the robot reliably via a regular internet connection from a different location.
Conference Paper
The general problem addressed in this paper is supporting more efficient communication between remote users, who control telepresence robots, and people in the local setting. The design of most telepresence robots does not allow them to perform gestures. Given the key role of pointing in human communication, exploring design solutions for providing telepresence robots with deictic gesturing capabilities is, arguably, a timely research issue for Human-Robot Interaction. To address this issue, we conducted an empirical study in which a set of low-fidelity prototypes, illustrating various designs of a robot's gesture arm, were assessed by the participants (N=18). The study employed a mixed-method approach: a combination of a controlled experiment, an elicitation study, and design provocation. The evidence collected in the study reveals the participants' assessment of the designs used in the study and provides insights into participants' attitudes and expectations regarding gestural communication with telepresence robots in general.
Chapter
Automatic emotion recognition is a major topic in the area of human-robot interaction. This paper presents an emotion recognition system based on physiological signals. Emotion induction experiments which induced joy, sadness, anger, and pleasure were conducted on 11 subjects. The subjects' electrocardiogram (ECG) and respiration (RSP) signals were recorded simultaneously by a physiological monitoring device based on wearable sensors. Compared to the non-wearable physiological monitoring devices often used in other emotion recognition systems, the wearable device does not restrict the subjects' movement. From the acquired physiological signals, one hundred and forty-five signal features were extracted. A feature selection method based on a genetic algorithm was developed to minimize errors resulting from useless signal features as well as to reduce computational complexity. To recognize emotions from the selected physiological signal features, a support vector machine (SVM) method was applied, which achieved recognition accuracies of 81.82%, 63.64%, 54.55%, and 30.00% for joy, sadness, anger, and pleasure, respectively. The results showed that it is feasible to recognize emotions from physiological signals.
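A compact sketch of GA-based feature selection wrapped around an SVM, in the spirit of the method described, is given below using scikit-learn. The population size, mutation rate, and synthetic data are placeholders; crossover is omitted for brevity.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# GA feature selection sketch: binary masks over features evolve by
# selection plus point mutation; fitness is cross-validated SVM accuracy.
# Synthetic data; 145 features as in the paper.
rng = np.random.default_rng(1)
X = rng.normal(size=(88, 145))
y = rng.integers(0, 4, size=88)              # 4 emotion classes

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))
for _ in range(10):                          # a few GA generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]  # keep the fitter half
    children = parents[rng.integers(0, 10, 20)].copy()
    flip = rng.random(children.shape) < 0.02 # point mutations
    children[flip] ^= 1
    pop = children

best = pop[np.argmax([fitness(m) for m in pop])]
print(f"selected {best.sum()} of {len(best)} features")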
Conference Paper
This paper presents an intelligent service robot named Moratuwa Intelligent Robot (MIRob) that can acquire knowledge through interactive discussion with the user while handling uncertain information in the user's instructions. In order to facilitate this behavior, a finite-state intention module has been introduced to manage the interaction between the user and the robot. A set of states has been defined to acquire and update knowledge as well as to perform the actions required to satisfy the user's instructions. Dialogue flows and patterns have been defined for each state in order to maintain the interaction with the user, and these defined flows and patterns are presented. Furthermore, the concept has been developed in such a way that the system is capable of updating a Robot Experience Model (REM) according to the acquired knowledge. In order to evaluate the performance of the system, experiments have been carried out in an artificially created domestic environment. The capabilities of the proposed system have been demonstrated and validated by the experimental results.
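As a rough illustration of a finite-state intention module, the sketch below drives a short dialogue through a table of states and prompt patterns. The state names and dialogue patterns are invented for this example; the paper defines its own states, flows, and REM updates.

# Illustrative finite-state intention module; states and prompts are
# invented for this example, not taken from the paper.
STATES = {
    "idle":    {"prompt": "How can I help you?",        "next": "clarify"},
    "clarify": {"prompt": "Which {item} do you mean?",  "next": "confirm"},
    "confirm": {"prompt": "Should I fetch the {item}?", "next": "act"},
    "act":     {"prompt": "Fetching the {item}.",       "next": "idle"},
}

def step(state, slots):
    print(STATES[state]["prompt"].format(**slots))
    return STATES[state]["next"]

state, slots = "idle", {"item": "red cup"}
for _ in range(4):                 # one pass through the dialogue flow
    state = step(state, slots)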
Article
Photoelectric plethysmographs for the fingers and toes are described which use electrocardiographs for the recording and which have definite advantages in routine clinical observations on the circulation. The validity of the technique is established (1) by comparison of the photoelectric records with simultaneous records obtained with transmission plethysmographs, (2) by comparison of the photoelectric records in instances of circulatory disturbances with independent directional confirmation by other methods in the literature.
Conference Paper
In this paper, we draw upon insights gained in our previous work on human-human proxemic behavior analysis to develop a novel method for human-robot proxemic behavior production. A probabilistic framework for spatial interaction has been developed that considers the sensory experience of each agent (human or robot) in a co-present social encounter. In this preliminary work, a robot attempts to maintain a set of human body features in its camera field-of-view. This methodology addresses the functional aspects of proxemic behavior in human-robot interaction, and provides an elegant connection between previous approaches.
Article
In this work, we discuss a set of feature representations for analyzing human spatial behavior (proxemics) motivated by metrics used in the social sciences. Specifically, we consider individual, physical, and psychophysical factors that contribute to social spacing. We demonstrate the feasibility of autonomous real-time annotation of these proxemic features during a social interaction between two people and a humanoid robot in the presence of a visual obstruction (a physical barrier). We then use two different feature representations—physical and psychophysical—to train Hidden Markov Models (HMMs) to recognize spatiotemporal behaviors that signify transitions into (initiation) and out of (termination) a social interaction. We demonstrate that the HMMs trained on psychophysical features, which encode the sensory experience of each interacting agent, outperform those trained on physical features, which only encode spatial relationships. These results suggest a more powerful representation of proxemic behavior with particular implications in autonomous socially interactive and socially assistive robotics.
Article
The factors influencing flicker sensitivity are reviewed. Sensitivity is increasing because of improving lighting standards and personal peak sensitivity occurs for people approximately 20 years old. There is wide variability between people and the effect of flicker is related to the individual threshold. Office surveys reveal that significant numbers of people see flicker and that this feature is associated with the 'unsatisfactory' ratings of the lighting. Some headaches and eyestrain are related to seeing flicker. Light modulation from fluorescent lamps at 50 and 100 Hz has been measured and the growth in the 50 Hz component with time has been studied. The 50 Hz component is the one which is normally seen. The lamp characteristics show this 50 Hz modulation to remain at a low value for the first 7-8000 hours operation and thereafter to rise more rapidly with operation. Planned lamp replacement is now essential.
Article
This paper reviews “socially interactive robots”: robots for which social human–robot interaction is important. We begin by discussing the context for socially interactive robots, emphasizing the relationship to other research fields and the different forms of “social robots”. We then present a taxonomy of design methods and system components used to build socially interactive robots. Finally, we describe the impact of these robots on humans and discuss open issues. An expanded version of this paper, which contains a survey and taxonomy of current applications, is available as a technical report [T. Fong, I. Nourbakhsh, K. Dautenhahn, A survey of socially interactive robots: concepts, design and applications, Technical Report No. CMU-RI-TR-02-29, Robotics Institute, Carnegie Mellon University, 2002].
Conference Paper
Telepresence refers to a set of technologies that allow users to feel present at a distant location; telerobotics is a subfield of telepresence. This paper presents the design and evaluation of a telepresence robot which allows for social expression. Our hypothesis is that a telerobot that communicates more than simply audio or video but also expressive gestures, body pose and proxemics, will allow for a more engaging and enjoyable interaction. An iterative design process of the MeBot platform is described in detail, as well as the design of supporting systems and various control interfaces. We conducted a human subject study where the effects of expressivity were measured. Our results show that a socially expressive robot was found to be more engaging and likable than a static one. It was also found that expressiveness contributes to more psychological involvement and better cooperation.
Conference Paper
The detection of emotion is becoming an increasingly important field for human-computer interaction as the advantages emotion recognition offers become more apparent and realisable. Emotion recognition can be achieved by a number of methods, one of which is through the use of bio-sensors. Bio-sensors possess a number of advantages over other emotion recognition methods, as they can be made both unobtrusive and robust against a number of environmental conditions which other forms of emotion recognition have difficulty overcoming. In this paper, we describe a procedure to train computers to recognise emotions using multiple signals from many different bio-sensors. In particular, we describe the procedure we adopted to elicit emotions and to train our system to recognise them. We also present a set of preliminary results which indicate that our neural net classifier is able to obtain accuracy rates of 96.6% and 89.9% for recognition of emotional arousal and valence respectively.
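As a rough analogue of such a classifier, the sketch below trains a small scikit-learn neural network to predict binary arousal and valence labels from synthetic bio-sensor features; the data, network size, and labels are placeholders, and the paper's reported 96.6% and 89.9% rates come from its own sensor corpus.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy analogue of a neural-net arousal/valence classifier over
# bio-sensor feature vectors (e.g., skin conductance, heart rate, EMG
# summary statistics). All data here are synthetic placeholders.
rng = np.random.default_rng(7)
X = rng.normal(size=(200, 12))              # 12 features per sample
arousal = (X[:, 0] + X[:, 1] > 0).astype(int)
valence = (X[:, 2] - X[:, 3] > 0).astype(int)

for name, y in (("arousal", arousal), ("valence", valence)):
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0)
    clf.fit(X[:150], y[:150])               # train on the first 150 samples
    print(name, "accuracy:", clf.score(X[150:], y[150:]))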