Conference Paper

Teddy-bear based robotic user interface

Authors:
  • VIVITA INC. JAPAN

Abstract

A Robotic User Interface (RUI) is a concept in which a robot is used as an interface for human behavior. Our RUI, which we have been developing for communications, is a system for interpersonal exchanges that uses robots as agents for physical communication. In this paper, we propose a new type of RUI for interactive entertainment. This RUI enables people to directly interact with the information world.




... A Robotic User Interface (RUI) is not only an interface for a robot but is also an interface that itself has robot features [2]. Shimizu defined an RUI as a device that has three features: a portable humanoid interface, an intuitive interaction method, and (various) haptic displays [19]. Thus, an RUI has advantages as an interface method because it has intuitive and natural characteristics that other interface systems lack. ...
... The most natural way to control a human-like agent is to control the corresponding part of the agent directly rather than through intermediary devices such as a joystick. Several RUI examples based on this advantage have been designed and verified [15], [19]. In addition, humans tend to regard a robot as a living creature with emotions. ...
Conference Paper
In this paper, a robotic user interface (RUI) as an information management system is described. There have been numerous studies regarding 2D- or 3D-based GUIs that attempt to overcome the disadvantages of a hierarchical structure in information handling. Here, the GUI system is extended to an RUI in a technique that enhances its capacity to store and recall by exploiting a spatial/functional association between semantic categories and body parts. A robot resembling a bear was implemented for the evaluation of this association. Experimental results revealed that information can be clustered on a specific body part with statistical significance. Information storage and retrieval becomes easier because there is no need for maintenance or memorization of the location in which specific information is stored. In addition, the fact that a robotic interface can be used in parallel with an existing desktop-based system enhances work efficiency.
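The body-part association described above can be pictured as a body-part-indexed store. Below is a minimal sketch of the idea in Python; the part names and the API are illustrative assumptions, not the paper's implementation.

# Toy sketch of body-part-indexed storage and recall. Part names and this
# API are hypothetical illustrations of the association, not the paper's code.
from collections import defaultdict

class BearStore:
    def __init__(self):
        self._store = defaultdict(list)   # body part -> items clustered there

    def put(self, part, item):
        # Store an item "on" a body part, e.g. schedules on the left paw.
        self._store[part].append(item)

    def recall(self, part):
        # Touching a part recalls everything clustered there; no folder
        # hierarchy or storage location has to be memorized.
        return list(self._store[part])

bear = BearStore()
bear.put("left_paw", "meeting_notes_0412.txt")
bear.put("belly", "family_photos/")
print(bear.recall("left_paw"))            # ['meeting_notes_0412.txt']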
... As of 2018, there are highly advanced robotic animals worldwide, built with advanced technology, that can interact with humans. The most famous of these are AIBO (a robot dog) (Veloso et al., 2006), PARO (a robot seal) (Wada, Shibata, Saito and Tanie, 2004), NeCoRo (a robot cat) (Libin and Cohen-Mansfield, 2002), and the Teddy Bear (a robot bear) (Shimizu, 2006). ...
... We have proposed another type of RUI [14] that synchronizes shape and motion between the RUI in the real world and a virtual avatar in the information world. This RUI system enables people to directly interact with the information world. ...
Article
Full-text available
A Robotic User Interface (RUI) is a concept wherein robots are used as interfaces for human behavior. We have developed an RUI system for communications and given it the appearance of a teddy bear. This system allows interpersonal communication using robots as physical avatars. In this paper, we propose a new use of the RUI as a physical agent for controlling home appliances. This use of an RUI enables people to control home appliances as if playing with a teddy bear.
... Johnson et al. developed the Sympathetic Interface, a plush doll embedded with a wireless module that is used to manipulate a virtual character in an iconic and intentional manner [11]. Similarly, Shimizu et al. developed a plush robotic toy interface with haptic feedback [23]. ...
Conference Paper
PINOKY is a wireless ring-like device that can be externally attached to any plush toy as an accessory that animates the toy by moving its limbs. A user is thus able to instantly convert any plush toy into a soft robot. The user can control the toy remotely or input the movement desired by moving the plush toy and having the data recorded and played back. Unlike other methods for animating plush toys, PINOKY is non-intrusive, so alterations to the toy are not required. In a user study, 1) the roles of plush toys in the participants' daily lives were examined, 2) how participants played with plush toys without PINOKY was observed, 3) how they played with plush toys with PINOKY was observed, and their reactions to the device were surveyed. On the basis of the results, potential applications were conceptualized to illustrate the utility of PINOKY.
... Raffle et al. [18] created Topobo, a 3D constructive assembly system with kinetic memory, in which the actuator component recalls user operations and replays the motions. Shimizu et al. [21] introduced a teddy bear robot as a generic game controller. The main applications of these systems are entertainment and education. ...
Conference Paper
Full-text available
We present an actuated handheld puppet system for controlling the posture of a virtual character. Physical puppet devices have been used in the past to intuitively control character posture. In our research, an actuator is added to each joint of such an input device to provide physical feedback to the user. This enhancement offers many benefits. First, the user can upload pre-defined postures to the device to save time. Second, the system is capable of dynamically adjusting joint stiffness to counteract gravity, while allowing control to be maintained with relatively little force. Third, the system supports natural human body behaviors, such as whole-body reaching and joint coupling. This paper describes the user interface and implementation of the proposed technique and reports the results of expert evaluation. We also conducted two user studies to evaluate the effectiveness of our method.
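The gravity-counteracting stiffness adjustment mentioned above can be sketched as a feedforward gravity torque plus a deliberately soft servo spring. The Python fragment below is a minimal sketch for one joint; the link parameters and gains are assumed values, not taken from the paper.

# Gravity compensation with low servo stiffness for one puppet joint.
# MASS, COM_DIST and K_SERVO are illustrative assumptions.
import math

MASS = 0.05      # kg, assumed link mass
COM_DIST = 0.04  # m, assumed joint-to-center-of-mass distance
G = 9.81         # m/s^2
K_SERVO = 0.02   # N*m/rad, soft enough that a light touch overrides it

def joint_torque(theta, theta_target):
    # theta: current joint angle in radians (0 = link horizontal).
    tau_gravity = MASS * G * COM_DIST * math.cos(theta)  # cancels gravity load
    tau_spring = K_SERVO * (theta_target - theta)        # weak posture hold
    return tau_gravity + tau_spring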
Conference Paper
Full-text available
Young children are emotionally dependent on their parents. Sometimes they have to be apart from each other, for example, when a parent is travelling. Current communication technologies are not optimal for supporting the feeling of presence. Our goal was to explore the design space for remote communication between young children (4-6 years) and their parents. More specifically, we aimed at gaining user feedback on a variety of non-verbal interaction modalities using augmented everyday objects. We developed the Teddy Bear concept and created an embodied mock-up that enables remote hugging based on vibration, presence indication, and communication of gestures. We conducted a user study with eight children and their parents. Our qualitative findings show that both children and parents appreciated Teddy Bear for its non-verbal communication features, but that some aspects were not easily understood, such as gestures for strong emotions. Based on our findings, we propose design implications for mediated presence between young children and their parents.
Article
In this paper, we introduce the Haptic-Emoticon, which is used in text messages. It expresses a writer's emotion and/or decorates the message with physical stimuli on the reader's skin. We employ stroking motions for this purpose. The motions are represented as a 2D trajectory so that a writer can create a Haptic-Emoticon with a conventional pointing input device. The input line is recorded as image data in an existing image format. Because it is an image file, the Haptic-Emoticon can be shared on the Twitter network. By employing Twitter, a worldwide SNS, as an infrastructure for sharing Haptic-Emoticons, we expect that users (writers and readers) will evaluate each other's creative works. Moreover, by combining it with a Twitter text message, the Haptic-Emoticon could become context-aware haptic content that enriches text-based communication.
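Since the stroke is recorded as image data in an existing image format, one plausible encoding is to rasterize the 2D trajectory into a PNG. The sketch below, using Pillow, is an assumption about that encoding; the paper does not specify it.

# Rasterize a 2D stroking trajectory into a shareable PNG. The mapping of
# trajectory to pixels is a hypothetical encoding, not the authors' format.
from PIL import Image, ImageDraw

def trajectory_to_image(points, size=(256, 256), path="haptic_emoticon.png"):
    # points: list of (x, y) pairs normalized to [0, 1]
    img = Image.new("L", size, color=255)                 # white canvas
    draw = ImageDraw.Draw(img)
    px = [(x * (size[0] - 1), y * (size[1] - 1)) for x, y in points]
    draw.line(px, fill=0, width=2)                        # stroke as black polyline
    img.save(path)                                        # an ordinary image file,
    return path                                           # so it can travel over Twitter

trajectory_to_image([(0.1, 0.8), (0.4, 0.3), (0.9, 0.5)])
# A reader-side decoder would trace the dark pixels back into a trajectory
# and replay it on the tactile device.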
Book
Full-text available
This dissertation presents the development of a huggable social robot named Probo. Probo embodies a stuffed imaginary animal, providing a soft touch and a huggable appearance. Probo's purpose is to serve as a multidisciplinary research platform for human-robot interaction focused on children. In terms of a social robot, Probo is classified as a social interface supporting non-verbal communication. Probo's social skills are thereby limited to a reactive level. To close the gap with higher levels of interaction, an innovative system for shared control with a human operator is introduced. The software architecture defines a modular structure to incorporate all systems into a single control center. This control center is accompanied by a 3D virtual model of Probo, simulating all motions of the robot and providing visual feedback to the operator. Additionally, the model allows us to advance user-testing and evaluation of newly designed systems. The robot reacts to basic input stimuli that it perceives during interaction. The input stimuli, which can be referred to as low-level perceptions, are derived from vision analysis, audio analysis, touch analysis and object identification. The stimuli influence the attention and homeostatic system, used to define the robot's point of attention, current emotional state and corresponding facial expression. The recognition of these facial expressions has been evaluated in various user studies. To evaluate the collaboration of the software components, a social interactive game for children, Probogotchi, has been developed. To facilitate interaction with children, Probo has an identity and corresponding history. Safety is ensured through Probo's soft embodiment and intrinsically safe actuation systems. To convey the illusion of life in a robotic creature, tools for the creation and management of motion sequences are put into the hands of the operator. All motions generated by operator-triggered systems are combined with the motions originating from the autonomous reactive systems. The resulting motion is subsequently smoothed and transmitted to the actuation systems. With future applications to come, Probo is an ideal platform for creating a friendly companion for hospitalised children.
Article
Full-text available
This paper reviews the research and development of augmented reality (AR) applications in design and manufacturing. It consists of seven main sections. The first section introduces the background of manufacturing simulation applications and the initial AR developments. The second section describes the current hardware and software tools associated with AR. The third section reports on the various studies of design and manufacturing activities, such as AR collaborative design, robot path planning, plant layout, maintenance, CNC simulation, and assembly using AR tools and techniques. The fourth section outlines the technology challenges in AR. The fifth section looks at some of the industrial applications. The sixth section addresses the human factors and interactions in AR systems. The seventh section looks into some future trends and developments, followed by a conclusion in the last section.
Chapter
For a long time, computer games were limited to input and output devices such as mouse, joystick, typewriter keyboard, and TV screen. This has changed dramatically with the advent of inexpensive and versatile sensors, actuators, and visual and acoustic output devices. Modern games employ a wide variety of interface technology, which is bound to broaden even further. This creates a new task for game designers. They have to choose the right option, possibly combining several technologies to let one technology compensate for the deficiencies of the other or to achieve more immersion through new modes of interaction. To facilitate this endeavor, this chapter gives an overview on current and upcoming human–computer interface technologies, describes their inner workings, highlights applications in commercial games and game research, and points out promising new directions.
Article
Full-text available
The innovative aspects introduced by the new hands-free gaming systems, like the Nintendo Wii, Sony Move and Microsoft Kinect, indicate that technology is progressively, and at limited cost, reaching more natural ways of interacting with human beings, and vice versa. This process can clearly pave the way to beneficial opportunities in a wealth of fields, including those of technical and artistic interactive exhibitions. While tech-inspired exhibitions have long been based mainly on artifacts and tools that seldom required any interaction with the public, today this can radically change. The new frontiers of technology for performing arts provide the means for creating active interactions between the public and any object, allowing visitors to enjoy an experience that exceeds what can be passively seen or heard. In this domain, we describe the design and implementation of a multimedia system that interactively illustrates the process of preparing one of the masterpieces of the culinary heritage of the Italian city of Bologna, the Tortellino. Step by step, with our system, anyone can enjoy a virtual experience learning how to prepare a Tortellino while mimicking the movements a real cook would perform when preparing the recipe from its raw ingredients: eggs and flour. From a scientific viewpoint, the challenging part of this project lies in recognizing cooks' movements while verifying their correctness. In this paper, we describe how we achieved this result by devising techniques that allow our system to recognize a predefined set of actions (those performed by a real cook) with no external hardware other than a simple camera. As testament to these results, our multimedia system has been chosen to be part of the expositions representing the City of Bologna at the Shanghai Universal Expo in October 2010.
Conference Paper
We proposed and developed a novel interface that displays high-quality tactile information by greatly extending the temporal bandwidth. The system is composed of two oppositely arranged speakers. A user holds the speakers between both hands while the speakers vibrate the air between the speakers and the palms. The user feels suction and pushing sensations on the palms from the air pressure. The spatial distribution of the pressure is uniform, and the user feels pure force without any sensation of edges. As the speakers have the potential to present tactile sensations over a very wide frequency range, we can present many types of high-quality tactile feelings, such as liquid, small objects and living matter. Additionally, we implemented several interactions between the display and the user by using a force sensor and an acceleration sensor, which enabled emotionally engaging experiences.
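Because the display is driven by an audio signal, its output can be prototyped as an ordinary waveform. The sketch below synthesizes a slow pressure swell with a faint high-frequency texture and writes it to a WAV file; the frequency mix is an illustrative guess, not the authors' recipe.

# Synthesize a wide-band "tactile" waveform for the speaker pair.
# The 2 Hz swell + 300 Hz texture mix is an assumption for illustration.
import math, struct, wave

RATE = 44100
DURATION = 2.0

def sample(t):
    swell = 0.8 * math.sin(2 * math.pi * 2.0 * t)      # slow suction/push
    texture = 0.1 * math.sin(2 * math.pi * 300.0 * t)  # faint high-frequency feel
    return swell + texture

with wave.open("tactile.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)        # 16-bit samples
    w.setframerate(RATE)
    w.writeframes(b"".join(
        struct.pack("<h", int(32767 * max(-1.0, min(1.0, sample(n / RATE)))))
        for n in range(int(RATE * DURATION))))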
Conference Paper
Full-text available
Interaction with a remote team of robots in real time is a difficult human-robot interaction (HRI) problem exacerbated by the complications of unpredictable real-world environments, with solutions often resorting to a larger-than-desirable ratio of operators to robots. We present two innovative interfaces that allow a single operator to interact with a group of remote robots. Using a tabletop computer the user can configure and manipulate groups of robots directly by either using their fingers (touch) or by manipulating a set of physical toys (tangible user interfaces). We recruited participants to partake in a user study that required them to interact with a small group of remote robots in simple tasks, and present our findings as a set of design considerations.
Conference Paper
We have proposed a novel tactile display that presents high-fidelity tactile information by achieving a very wide frequency bandwidth. The system is composed of one or two speakers. Users hold the speakers between their hands while the speakers vibrate the air between the speakers and their palms. The user feels suction or pushing pressure on their palms from the air. Due to the very wide frequency range (from below 1 Hz to above 1 kHz), users can feel a variety of sensations. Furthermore, due to the indirect drive of the palm through the air, users feel uniform pressure without any feeling of the edges or shapes of hard contactors, which are necessary for an ordinary haptic interface to convey high-frequency signals. In this paper, we introduce three application ideas for this display that enable rich tactile expressions. We also show some pilot studies to realize these ideas, their first implementations, and results from an exhibition.
Conference Paper
A Robotic User Interface (RUI) is a concept in which a robot is used as an interface for human behavior. By combining the RUI with Mixed Reality (MR) technology, we propose an MR RUI system that enables the presentation of enhanced visual information for a robot existing in the real world. In this paper, we propose virtual kinematics to enhance robot motion. An MR RUI system with virtual kinematics can present a selection of visual information by controlling the robot through physical simulation and by changing parameters dynamically.
Article
Full-text available
RobotPHONE is a Robotic User Interface (RUI) that uses robots as physical avatars for interpersonal communication. Using RobotPHONE, users in remote locations can communicate shapes and motion with each other. In this paper we present the concept of RobotPHONE, and describe implementations of two prototypes.
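The shape-and-motion exchange can be sketched as a symmetric loop in which each device streams its joint angles to the peer and servos toward the posture it receives. The Python fragment below is a minimal sketch; the UDP/JSON transport and the read_joints/drive_joints callbacks are assumptions, not the RobotPHONE protocol.

# Symmetric posture sharing between two RobotPHONE-like devices.
# Transport (UDP/JSON) and the joint I/O callbacks are hypothetical.
import json, socket

def make_endpoint(local_port, peer_addr):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", local_port))
    sock.setblocking(False)

    def step(read_joints, drive_joints):
        # Send our current posture (dict of joint name -> angle)...
        sock.sendto(json.dumps(read_joints()).encode(), peer_addr)
        # ...and servo toward the most recent posture from the peer.
        try:
            data, _ = sock.recvfrom(1024)
            drive_joints(json.loads(data))
        except BlockingIOError:
            pass  # no update this cycle; keep the previous target
    return step

# Each side calls its step() at a fixed rate, e.g. 50 Hz, so moving one
# bear's arm moves the remote bear's arm, and vice versa.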
Article
Full-text available
ActiveCube is a novel user interface which allows intuitive interaction with computers. ActiveCube allows users to construct and interact with Three Dimensional (3D) environments using physical cubes equipped with input/output devices. Spatial, temporal and functional consistency is always maintained between the physical object and its corresponding representation in the computer. In this paper we detail the design and implementation of our system. We describe the method we used to realize flexible 3D modeling by controlling the recognition signals of each face in each cube. We also explain how we integrated additional multimodal interaction options by a number of sophisticated I/O devices and by the inclusion of a second microprocessor in our cubes. We argue that ActiveCube, with its current real-time multimodal and spatial capabilities, is ready to enable a large range of interactive entertainment applications that were impossible to realize before.
Article
Full-text available
Personal robots for human entertainment form a new class of computer-based entertainment that is beginning to become commercially and computationally practical. We expect that the principal manifestation of the robots' entertainment capabilities will be socially interactive game playing. We describe this form of gaming and summarize our current efforts in this direction on our lifelike, expressive, autonomous humanoid robot. Our focus is on teaching the robot via playful interaction using natural social gesture and language. We detail this in terms of two broad categories: teaching as play and teaching with play.
Article
Full-text available
In this paper, we present Robot Entertainment as a new field of the entertainment industry using autonomous robots. For feasibility studies of Robot Entertainment, we have developed an autonomous quadruped robot, named MUTANT, as a pet-type robot. It has four legs, each with three degrees of freedom, and a head that also has three degrees of freedom. A micro camera, stereo microphone, touch sensors, and other sensor systems are coupled with a newly developed behavior generation system, which has an emotion module as one of its major components and generates highly complex and interactive behaviors. Agent architecture, real-world recognition technologies, software component technology, and some dedicated devices, such as the Micro Camera Unit, were developed and tested for this purpose. From the lessons learned from the development of MUTANT, we refined its design concept to derive requirements for a general architecture and a set of interfaces for robot systems in entertainment applications. Through these feasibility studies, we consider entertainment applications a significant target at this moment from both scientific and engineering points of view.
Article
Full-text available
This paper proposes a novel way to realize augmented reality (AR) systems. Unlike previous AR systems, which use head-mounted or head-up displays, our approach employs a palmtop-sized video see-through device to present a computer-augmented view of the real world. This configuration, which we call the magnifying glass approach, has several advantages over traditional head-up or head-mounted configurations. A user doesn't have to put on any cumbersome head gear. Like a magnifying glass, the user can easily move the device around and compare real and augmented images. We have built a prototype augmented reality system called NaviCam, based on the proposed magnifying glass approach. NaviCam uses a miniature gyro sensor to determine the orientation of the device. It also has a vision-based ID recognition capability to detect the rough position of the device in the real world, and real-world objects in front of the device. The experiment conducted using NaviCam shows the great potential of the v...
Article
Full-text available
This paper describes a 3D kinesthetic interface device using tensed strings. The results of the experiments on pick-and-place tasks show that not only sensations of collision but also weights of virtual objects are important in virtual environments.
Article
Full-text available
We introduce a promising new approach to rigid body dynamic simulation called impulse-based simulation. The method is well suited to modeling physical systems with large numbers of collisions, or with contact modes that change frequently. All types of contact (colliding, rolling, sliding, and resting) are modeled through a series of collision impulses between the objects in contact, hence the method is simpler and faster than constraint-based simulation. We have implemented an impulse-based simulator that can currently achieve interactive simulation times, and real time simulation seems within reach. In addition, the simulator has produced physically accurate results in several qualitative and quantitative experiments. After giving an overview of impulse-based dynamic simulation, we discuss collision detection and collision response in this context, and present
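The core step of the method is the collision impulse itself. The sketch below shows Newton's restitution law for a single frictionless contact between two translating bodies (no rotation), a simplified instance of the per-contact impulse the abstract describes; all values are illustrative.

# Frictionless collision impulse between two point-mass bodies.
# n is the unit contact normal pointing from body 1 to body 2.
import numpy as np

def collision_impulse(m1, m2, v1, v2, n, restitution=0.5):
    v_rel = np.dot(v1 - v2, n)        # approach speed along the normal
    if v_rel <= 0:
        return v1, v2                 # already separating: no impulse
    j = (1 + restitution) * v_rel / (1 / m1 + 1 / m2)   # impulse magnitude
    return v1 - (j / m1) * n, v2 + (j / m2) * n

# 1 kg body hits a resting 2 kg body head-on:
v1, v2 = collision_impulse(1.0, 2.0, np.array([1.0, 0.0]),
                           np.zeros(2), np.array([1.0, 0.0]))
# v1 -> [0, 0], v2 -> [0.5, 0]; momentum is conserved and the bodies
# separate at restitution * approach speed.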
Article
Full-text available
This paper presents our vision of Human Computer Interaction (HCI): "Tangible Bits." Tangible Bits allows users to "grasp & manipulate" bits in the center of users' attention by coupling the bits with everyday physical objects and architectural surfaces. Tangible Bits also enables users to be aware of background bits at the periphery of human perception using ambient display media such as light, sound, airflow, and water movement in an augmented space. The goal of Tangible Bits is to bridge the gaps between both cyberspace and the physical environment, as well as the foreground and background of human activities. This paper describes three key concepts of Tangible Bits: interactive surfaces; the coupling of bits with graspable physical objects; and ambient media for background awareness. We illustrate these concepts with three prototype systems -- the metaDESK, transBOARD and ambientROOM -- to identify underlying research issues.
Article
Current augmented reality (AR) systems are not designed to be used in our daily lives. Head-mounted see-through displays are too cumbersome and look too unusual for everyday life. The limited scalability of position-tracking devices limits the use of AR to very restricted environments. This paper proposes a different way to realize AR that can be used in an open environment by introducing the concept of ID awareness and a hand-held video see-through display. Unlike other AR systems that use head-mounted or head-up displays, our approach employs the combination of a palmtop-sized display and a small video camera. A user sees the real world through the display device, with added computer-augmented information. We call this configuration the magnifying glass approach. It has several advantages over traditional head-up or head-mounted configurations. The main advantage is that the user is not required to wear any cumbersome headgear. The user can easily move the display device around like a magnifying glass and compare real and augmented images. The video camera also obtains information related to real-world situations. The system recognizes real-world objects using the video images by reading identification (ID) tags. Based on the recognized ID tag, the system retrieves and displays information about the real-world object to the user. The prototype hand-held device based on our proposed concept is called NaviCam. We describe several potential applications. Our experiments with NaviCam show the great potential of our video see-through palmtop display. It was significantly faster than a head-up configuration, and its subjective score from testers was also higher.
Article
A method for analytically calculating the forces between systems of rigid bodies in resting (non-colliding) contact is presented. The systems of bodies may either be in motion or static equilibrium and adjacent bodies may touch at multiple points. The analytic formulation of the forces between bodies in non-colliding contact can be modified to deal with colliding bodies. Accordingly, an improved method for analytically calculating the forces between systems of rigid bodies in colliding contact is also presented. Both methods can be applied to systems with arbitrary holonomic geometric constraints, such as linked figures. The analytical formulations used treat both holonomic and non-holonomic constraints in a consistent manner.
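For a single contact point, the resting-contact conditions reduce to: normal force f >= 0, resulting normal acceleration a = A*f + b >= 0, and complementarity f*a = 0 (force acts only while the bodies stay in contact). The one-contact solve below is a minimal sketch; multiple simultaneous contacts require solving these conditions jointly, as the paper does.

# Analytic normal force for one resting contact.
# A: change in normal acceleration per unit normal force (A > 0).
# b: normal acceleration with zero contact force (negative = pressing in).
def resting_contact_force(A, b):
    if b >= 0:
        return 0.0        # bodies accelerate apart on their own
    return -b / A         # exactly cancels the penetrating acceleration

# Example: a 1 kg block resting on the ground under gravity.
# a = f/m - g, so A = 1.0 and b = -9.81:
f = resting_contact_force(1.0, -9.81)   # -> 9.81 N, the block's weight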
Conference Paper
Accurate simulation of Newtonian mechanics is essential for simulating realistic motion of joined figures. Dynamic simulation requires, however, a large amount of computation when compared to kinematic methods, and the control of dynamic figures can be quite complex. We have implemented an efficient forward dynamic simulation algorithm for articulated figures which has a computational complexity linear in the number of joints. In addition, we present a strategy for the coordination of the locomotion of a six-legged figure - a simulated insect - which has two main components: a gait controller which sequences stepping, and motor programs which control motions of the figure by the application of forces. The simulation is capable of generating gait patterns and walking phenomena observed in nature, and our simulated insect can negotiate planar and uneven terrain in a realistic manner. The motor program techniques should be generally applicable to other control problems.
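The gait controller's job of sequencing stepping can be sketched as a phase schedule over the six legs. The fragment below assumes an alternating tripod gait for illustration; the paper's controller additionally coordinates per-leg motor programs that apply forces.

# Tripod-gait sequencer for a six-legged figure (illustrative assumption).
TRIPOD_A = {"L1", "R2", "L3"}          # front-left, mid-right, rear-left
TRIPOD_B = {"R1", "L2", "R3"}

def gait_phase(t, period=1.0):
    # Legs in the swinging tripod run their stepping motor programs;
    # the stance tripod supports and propels the body.
    swinging = TRIPOD_A if (t % period) < (period / 2) else TRIPOD_B
    return {leg: ("swing" if leg in swinging else "stance")
            for leg in sorted(TRIPOD_A | TRIPOD_B)}

print(gait_phase(0.0))   # tripod A swings
print(gait_phase(0.5))   # tripod B swings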
Article
A research center for investigating man/computer interaction is described, discussing display systems, computers, languages, and data entities.
Conference Paper
The authors proposed the construction of a bipedal humanoid robot that has a head system with visual sensors, two hand-arm systems, a 3-DOF trunk, and antagonistically driven joints using a nonlinear spring mechanism, on the basis of WL-13, and then designed and built it. In addition, as the first step toward realizing dynamic cooperative motion of the limbs and 3-DOF trunk, the authors developed a control algorithm and a simulation program that generates the trajectory of the 3-DOF trunk for a stable biped walking pattern even if the trajectories of the upper and lower limbs are arbitrarily set for locomotion and manipulation, respectively. Using this preset walking pattern with variable muscle tension references corresponding to the swing and stance phases, the authors performed experiments in dynamic walking forward and backward, dynamic dancing with 3-DOF trunk motion, and carrying, on a flat, level surface (1.28 s/step with a 0.15 m step length). As a result, the efficiency of the walking control algorithm and robot system was demonstrated. In this paper, the mechanism of WABIAN and its control method are introduced.
Conference Paper
In this paper, we present the mechanism, system configuration, basic control algorithm and integrated functions of the Honda humanoid robot. Like its human counterpart, this robot has the ability to move forward and backward, sideways to the right or the left, as well as diagonally. In addition, the robot can turn in any direction, and walk up and down stairs continuously. Furthermore, due to its unique posture stability control, the robot is able to maintain its balance despite unexpected complications such as uneven ground surfaces. As part of its integrated functions, this robot is able to move on a planned path autonomously and to perform simple operations via wireless teleoperation.
Article
Both direct and evolved behavior-based approaches to mobile robots have yielded a number of interesting demonstrations of robots that navigate, map, plan and operate in the real world. The work can best be described as attempts to emulate insect-level locomotion and navigation, with very little work on behavior-based non-trivial manipulation of the world. There have been some behavior-based attempts at exploring social interactions, but these too have been modeled after the sorts of social interactions we see in insects. Thinking about how to scale from all this insect-level work to full human-level intelligence and social interactions leads to a synthesis that is very different from that imagined in traditional Artificial Intelligence and Cognitive Science. We report on work towards that goal.
Richard Marks, Tany Scovill, Care Michaud-Wideman. Enhanced Reality: A New Frontier for Computer Entertainment