Conference Paper

Can a robot deceive humans?

DOI: 10.1145/1734454.1734538 Conference: Proceedings of the 5th ACM/IEEE International Conference on Human Robot Interaction, HRI 2010, Osaka, Japan, March 2-5, 2010
Source: DBLP


In the present study, we investigated whether a robot can deceive a human by producing behavior that contradicts his/her prediction. A feeling of being deceived by a robot would be a strong indicator that the human treats the robot as an intentional entity. We conducted a psychological experiment in which subjects played Darumasan ga Koronda, a Japanese children's game (similar to "Red Light, Green Light"), with a robot. The main strategy for deceiving a subject was to lead him/her to believe that the robot was too slow-witted to move quickly. The experimental results indicated that an unexpected change in the robot's behavior gave rise to an impression of being deceived by the robot.

  • ABSTRACT: We investigated how humans establish communication in a novel environment using an artificial cooperative task. Subjects were asked to solve a variant of the 8-puzzle cooperatively. One subject (the director) can see the design of the puzzle and instructs the other how to move the tiles. The other (the operator) cannot see the design but can manipulate the puzzle. The director can communicate his intention only through body movement. We conducted two experiments. In Experiment I, the operator role was played by the experimenter, who followed pre-determined algorithms for establishing communication, and the director role was played by a recruited subject. In Experiment II, both the director and operator roles were played by recruited subjects. In both experiments, simple languages for communication developed between the players, but the strategies adopted were quite different. The results correspond to two approaches to future Human-Agent communication: human-controlled communication and mind-reading communication. We should reconsider the role of mind-reading communication in designing Human-Agent interfaces.
    RO-MAN, 2012 IEEE; 01/2012
  • ABSTRACT: Estimating and reshaping human intentions are among the most significant research topics in human-robot interaction. This chapter provides an overview of the intention-estimation literature on human-robot interaction and introduces an approach by which robots can voluntarily reshape estimated intentions. Reshaping of the human intention is achieved by the robots moving in certain directions that have been observed a priori from the interactions of humans with the objects in the scene. As one of only a few studies on intention reshaping, the chapter exploits spatial information by learning a Hidden Markov Model (HMM) of motion tailored for intelligent robotic interaction. The algorithmic design consists of two phases. First, the approach detects and tracks the human to estimate the current intention. This information is then used by autonomous robots that interact with the detected human to change the estimated intention. In the tracking and intention-estimation phase, the postures and locations of the human are monitored using low-level video processing methods. In the latter phase, the learned HMMs are used to reshape the estimated human intention. This two-phase system was tested on video frames taken from a real human-robot environment, and the results show promising performance in reshaping detected intentions.
    Prototyping of Robotic Systems: Applications of Design and Implementation, Edited by T. Sobh and X. Xiong, 01/2012; IGI Global Publisher.
  • ABSTRACT: We are investigating how to build a human-agent interface based on a mind-reading mechanism. Research on the emergence of communication sheds light on this mechanism; however, such studies have so far been conducted only in cooperative environments. We investigated how communication emerges using a series of non-zero-sum games as a task, in which a conflict of interests exists but cooperation is necessary for good overall performance. The only communication medium is the sending of a monotone sound. For comparison, the experiments were also conducted under a soundless condition, i.e., with no communication possible between the players. We confirmed that the existence of a communication medium improved performance even under the conflict of interests. Under the sound condition, some subjects established communication and earned nearly optimal points, while others could not communicate well with their partner and eventually earned only a few points. The task is very simple, yet various strategies for communication were observed. The emergence of this variety of strategies is a key point for understanding the mind-reading mechanism.
    RO-MAN, 2013 IEEE; 08/2013
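The intention-estimation phase described in the HMM-based abstract above can be sketched as a likelihood comparison: one HMM is learned per candidate intention, an observed sequence of discretized postures/locations is scored against each model with the forward algorithm, and the highest-scoring model gives the estimated intention. Everything in this sketch is an illustrative assumption: the intention labels, the two-symbol observation alphabet, and all probabilities are toy values, not the chapter's learned models.

```python
# Minimal sketch of HMM-based intention estimation (illustrative only).
# Observations are discretized posture/location symbols: 0 = near shelf,
# 1 = near table. One toy discrete-observation HMM per candidate intention.

def forward_likelihood(obs, start, trans, emit):
    """P(obs | model) computed with the forward algorithm."""
    # Initialize forward variables for the first observation.
    alpha = [start[s] * emit[s][obs[0]] for s in range(len(start))]
    # Propagate through the remaining observations.
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * trans[i][j] for i in range(len(alpha))) * emit[j][o]
            for j in range(len(start))
        ]
    return sum(alpha)

# Hypothetical models: (start probabilities, transition matrix, emission matrix).
models = {
    "reach_shelf": (
        [0.8, 0.2],
        [[0.9, 0.1], [0.2, 0.8]],
        [[0.9, 0.1], [0.3, 0.7]],
    ),
    "reach_table": (
        [0.2, 0.8],
        [[0.8, 0.2], [0.1, 0.9]],
        [[0.2, 0.8], [0.1, 0.9]],
    ),
}

def estimate_intention(obs):
    """Return the intention whose HMM assigns the observations highest likelihood."""
    return max(models, key=lambda m: forward_likelihood(obs, *models[m]))

print(estimate_intention([0, 0, 0]))  # a shelf-heavy trajectory
print(estimate_intention([1, 1, 1]))  # a table-heavy trajectory
```

In the reshaping phase described in the abstract, the robot would then move so that subsequent observations shift which model wins this comparison; that step depends on the a priori observed human-object interactions and is not reproduced here.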