Article

State Space Construction for Behavior Acquisition in Multi Agent Environments with Vision and Action

Dept. of Adaptive Machine Syst., Osaka Univ.
11/1998; DOI: 10.1109/ICCV.1998.710819
Source: IEEE Xplore

ABSTRACT: This paper proposes a method that estimates the relationships between the learner's behaviors and those of the other agents in the environment through interactions (observation and action), using the method of system identification. In order to identify the model of each agent, Akaike's Information Criterion is applied to the results of Canonical Variate Analysis of the relationship between the observed data, in terms of action, and future observations. Next, reinforcement learning based on the estimated state vectors is performed to obtain the optimal behavior. The proposed method is applied to a soccer-playing situation, in which a rolling ball and other moving agents are well modeled and the learner's behaviors are successfully acquired. Computer simulations and real experiments are shown and a discussion is given.

1 Introduction: Building a robot that learns to accomplish a task through visual information has been acknowledged as one of the major challenges facing vision, robotics, a...
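The estimation pipeline described in the abstract (canonical variate analysis between a past observation/action window and a future observation window, AIC-style selection of the state dimension, and reinforcement learning on the resulting state vectors) can be sketched roughly as follows. This is not the authors' implementation; the window lengths, the exact AIC parameter count, and the plain tabular Q-learning update over a discretised state are assumptions made for illustration.

import numpy as np

def hankel(data, horizon):
    """Stack `horizon` consecutive samples of `data` into one row per step."""
    n_rows = data.shape[0] - horizon + 1
    return np.hstack([data[i:i + n_rows] for i in range(horizon)])

def cva_state_space(obs, act, past=5, future=5, max_order=8):
    """Estimate state vectors by CVA between past (obs, act) and future obs."""
    io = np.hstack([obs, act])                 # joint observation/action stream
    n = obs.shape[0] - past - future + 1
    P = hankel(io, past)[:n]                   # past windows p_t
    F = hankel(obs[past:], future)[:n]         # future windows f_t
    P = P - P.mean(axis=0)
    F = F - F.mean(axis=0)

    # Whiten both blocks via SVD; the singular values of the product of the
    # whitened blocks are the canonical correlations between past and future.
    Up, Sp, Vpt = np.linalg.svd(P, full_matrices=False)
    Uf, Sf, Vft = np.linalg.svd(F, full_matrices=False)
    U, rho, _ = np.linalg.svd(Up.T @ Uf)

    # AIC-style order selection: dependence left unexplained by a
    # k-dimensional state, plus a penalty (the parameter count used in the
    # penalty is an assumption of this sketch).
    best_k, best_aic = 1, np.inf
    for k in range(1, min(max_order, rho.size) + 1):
        unexplained = -n * np.sum(np.log(1.0 - rho[k:] ** 2 + 1e-12))
        aic = unexplained + 2.0 * k * (P.shape[1] + F.shape[1])
        if aic < best_aic:
            best_k, best_aic = k, aic

    # State vector = leading canonical variates of the past block.
    proj = Vpt.T @ np.diag(1.0 / Sp) @ U[:, :best_k]
    return P @ proj, best_k

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step over discretised state indices."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

In the soccer task, obs would collect the observed features of the rolling ball and the other moving agents, and act the learner's motor commands; the returned state vectors would then be quantised into the discrete state indices fed to q_update.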

  • ABSTRACT: RoboCup is an increasingly successful attempt to promote the full integration of robotics and AI research. The most prominent feature of RoboCup is that it provides researchers with the opportunity to demonstrate their research results as a form of competition in a dynamically changing hostile environment, defined by an international standard game definition, in which the full gamut of intelligent robotics research issues is naturally involved. This article describes what we have learned from past RoboCup activities, mainly the first and second RoboCups, and overviews the future perspectives of RoboCup in the next century.
    04/2008: pages 369-378;
  • ABSTRACT: This paper discusses how multiple robots can emerge cooperative and competitive behaviors through co-evolutionary processes. A genetic programming method is applied to an individual population corresponding to each robot so as to obtain cooperative and competitive behaviors. The complexity of the problem is twofold: co-evolution for cooperative behaviors needs exact synchronization of the mutual evolutions, and three-robot co-evolution requires carefully prepared environment setups that may gradually change from simpler to more complicated situations. As example tasks, several simplified soccer games are selected to show the validity of the proposed methods. Simulation results with fixed and varying fitness functions are shown, and a discussion is given. (A minimal co-evolution loop is sketched after this list.)
  • ABSTRACT: We propose Action-Reaction Learning as an approach for analyzing and synthesizing human behaviour. This paradigm uncovers causal mappings between past and future events, or between an action and its reaction, by observing time sequences. We apply this method to analyze human interaction and to subsequently synthesize human behaviour. Using a time series of perceptual measurements, a system automatically discovers correlations between past gestures from one human participant (action) and a subsequent gesture (reaction) from another participant. A probabilistic model is trained from data of the human interaction using a novel estimation technique, Conditional Expectation Maximization (CEM), which uses general bounding and maximization to monotonically find the maximum conditional likelihood solution. The learning system drives a graphical interactive character which probabilistically predicts a likely response to a user’s behaviour and performs it interactively. Thus, after analyzing human interaction in a pair of participants, the system is able to replace one of them and interact with a single remaining user. (A simplified action-reaction predictor is sketched after this list.)
    12/1998: pages 273-292;
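For the co-evolution item above, the overall loop (one population per robot, fitness measured by pairing each individual against opponents drawn from the other populations, and synchronized generational updates) might be sketched as follows. This is only an illustrative skeleton: real-valued policy vectors and Gaussian mutation stand in for the genetic-programming trees and operators described in the abstract, and play_match is a toy placeholder for the simplified soccer games.

import numpy as np

rng = np.random.default_rng(0)
N_ROBOTS, POP_SIZE, GENES, GENERATIONS = 3, 20, 8, 50

def play_match(policies):
    """Toy stand-in for one simplified soccer game between the robots.

    Each robot is scored by how close its policy is to a common target,
    relative to its opponents, so that fitness depends on the other
    populations (the competitive part of co-evolution).
    """
    target = np.linspace(0.0, 1.0, GENES)
    closeness = np.array([-np.sum((p - target) ** 2) for p in policies])
    return closeness - closeness.mean()

def evaluate(populations, n_opponents=5):
    """Fitness of every individual, averaged over sampled opponent line-ups."""
    fitness = [np.zeros(POP_SIZE) for _ in range(N_ROBOTS)]
    for r in range(N_ROBOTS):
        for i, individual in enumerate(populations[r]):
            scores = []
            for _ in range(n_opponents):
                lineup = [populations[q][rng.integers(POP_SIZE)]
                          for q in range(N_ROBOTS)]
                lineup[r] = individual
                scores.append(play_match(lineup)[r])
            fitness[r][i] = np.mean(scores)
    return fitness

def next_generation(population, fitness, elite=2, sigma=0.05):
    """Truncation selection plus Gaussian mutation (stand-in for GP operators)."""
    order = np.argsort(fitness)[::-1]
    parents = population[order[:POP_SIZE // 2]]
    children = [parents[i % len(parents)] + rng.normal(0, sigma, GENES)
                for i in range(POP_SIZE - elite)]
    return np.vstack([population[order[:elite]], children])

populations = [rng.normal(0, 1, (POP_SIZE, GENES)) for _ in range(N_ROBOTS)]
for gen in range(GENERATIONS):
    fitness = evaluate(populations)
    populations = [next_generation(populations[r], fitness[r])
                   for r in range(N_ROBOTS)]

The gradual shift from simpler to more complicated situations mentioned in the abstract would correspond to changing play_match as the generations proceed.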
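For the Action-Reaction Learning item above, the core idea (pairing a window of one participant's recent gestures with the other participant's subsequent gesture and learning the conditional mapping) can be illustrated with a much simpler stand-in. The sketch below uses an ordinary least-squares linear predictor rather than the CEM-trained conditional model described in the abstract; the window length, feature dimensions, and synthetic data are assumptions.

import numpy as np

def make_pairs(action_series, reaction_series, window=10):
    """Build (past actions -> next reaction) training pairs from two streams."""
    X, Y = [], []
    for t in range(window, len(action_series)):
        X.append(action_series[t - window:t].ravel())   # recent action window
        Y.append(reaction_series[t])                     # reaction that followed
    return np.array(X), np.array(Y)

def fit_reaction_model(X, Y):
    """Least-squares linear map from action history to reaction (stand-in)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])            # add a bias column
    W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
    return W

def predict_reaction(W, recent_actions):
    """Predict a likely reaction from the most recent action window."""
    x = np.append(recent_actions.ravel(), 1.0)
    return x @ W

# Usage with synthetic 2-D gesture features per frame (purely illustrative).
rng = np.random.default_rng(1)
actions = rng.normal(size=(500, 2))
reactions = np.roll(actions, 1, axis=0) * 0.8 + rng.normal(scale=0.1, size=(500, 2))
X, Y = make_pairs(actions, reactions)
W = fit_reaction_model(X, Y)
print(predict_reaction(W, actions[-10:]))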
