The eMOSAIC model for humanoid robot control

National Institute of Information and Communications Technology, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan.
Neural networks: the official journal of the International Neural Network Society (Impact Factor: 2.08). 01/2012; 29-30:8-19. DOI: 10.1016/j.neunet.2012.01.002
Source: PubMed

ABSTRACT In this study, we propose an extension of the MOSAIC architecture to control real humanoid robots. MOSAIC was originally proposed by neuroscientists to explain the human ability of adaptive motor control, and its modular architecture is useful for solving nonlinear and non-stationary control problems. Both humans and humanoid robots have nonlinear body dynamics and many degrees of freedom; since they interact with their environments (e.g., by carrying objects), their control strategies must cope with non-stationary dynamics. MOSAIC therefore has strong potential both as a model of human motor control and as a control framework for humanoid robots. Yet applications of the MOSAIC model have been limited to simple simulated dynamics, since it is susceptible to observation noise and cannot be applied to partially observable systems. Our approach introduces state estimators into the MOSAIC architecture to cope with real environments. Using the extended MOSAIC model, we successfully generate squatting and object-carrying behaviors on a real humanoid robot.
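The abstract's core idea can be illustrated with a minimal sketch: each module pairs a forward model and a state estimator with a controller, and responsibility signals computed from observation-prediction errors weight the modules' control outputs. All dynamics, gains, and names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical minimal sketch of the eMOSAIC idea with assumed linear
# per-module models: a forward model predicts the next observation, a
# Luenberger-style estimator corrects the state estimate, and modules
# are mixed by responsibility weights from prediction errors.

class Module:
    def __init__(self, A, B, C, K, L):
        self.A, self.B, self.C = A, B, C   # module's linear dynamics model
        self.K = K                         # state-feedback control gain
        self.L = L                         # observer (estimator) gain
        self.x_hat = np.zeros(A.shape[0])  # current state estimate

    def predict(self, u):
        # one-step prediction of state and observation under this model
        x_pred = self.A @ self.x_hat + self.B @ u
        return x_pred, self.C @ x_pred

    def correct(self, y, x_pred):
        # correct the state estimate with the observation y
        self.x_hat = x_pred + self.L @ (y - self.C @ x_pred)

    def control(self):
        return -self.K @ self.x_hat

def responsibilities(sq_errors, sigma):
    # softmax over negative squared prediction errors: modules whose
    # forward models explain the observation best get the most weight
    logp = -0.5 * np.asarray(sq_errors) / sigma ** 2
    w = np.exp(logp - logp.max())
    return w / w.sum()

def emosaic_step(modules, y, u_prev, sigma):
    preds, errs = [], []
    for m in modules:
        x_pred, y_pred = m.predict(u_prev)
        preds.append(x_pred)
        errs.append(float((y - y_pred) @ (y - y_pred)))
    w = responsibilities(errs, sigma)
    for m, x_pred in zip(modules, preds):
        m.correct(y, x_pred)
    u = sum(wi * m.control() for wi, m in zip(w, modules))
    return u, w
```

In this sketch a module whose forward model matches the observed dynamics accumulates nearly all the responsibility, so control switches smoothly between modules as the dynamics change (e.g., when an object is picked up).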

  •
    ABSTRACT: In the future, human-like robots will live among people to provide company and help carry out tasks in cooperation with humans. These interactions require that robots understand not only human actions, but also the way in which we perceive the world. Human perception heavily relies on the time dimension, especially when it comes to processing visual motion. Critically, human time perception for dynamic events is often inaccurate. Robots interacting with humans may want to see the world and tell time the way humans do: if so, they must incorporate human-like fallibility. Observers asked to judge the duration of brief scenes are prone to errors: perceived duration often does not match the physical duration of the event. Several kinds of temporal distortions have been described in the specialized literature. Here we review the topic with a special emphasis on our work dealing with time perception of animate versus inanimate actors. This work shows the existence of specialized time bases for different categories of targets. The time base used by the human brain to process visual motion appears to be calibrated against specific predictions regarding the motion of human figures in the case of animate motion, while it can be calibrated against predictions of the motion of passive objects in the case of inanimate motion. Human perception of time appears to be closely linked to the mechanisms used to control movements. Thus, neural time can be entrained by external cues in a similar manner both for perceptual judgments of elapsed time and in motor control tasks. One possible strategy could be to implement in humanoids a single architecture for dealing with time, which would apply the same specialized mechanisms to both perception and action, similarly to humans. This shared implementation might render humanoids more acceptable to humans, thus facilitating reciprocal interactions.
    Frontiers in Neurorobotics 01/2014; 8:2. DOI:10.3389/fnbot.2014.00002
  •
    ABSTRACT: In the past two decades, brain science and robotics have made gigantic advances in their own fields, and their interactions have generated several interdisciplinary research fields. First, in the 'understanding the brain by creating the brain' approach, computational neuroscience models have been applied to many robotics problems. Second, such brain-motivated fields as cognitive robotics and developmental robotics have emerged as interdisciplinary areas among robotics, neuroscience and cognitive science with special emphasis on humanoid robots. Third, in brain-machine interface research, a brain and a robot are mutually connected within a closed loop. In this paper, we review the theoretical backgrounds of these three interdisciplinary fields and their recent progress. Then, we introduce recent efforts to reintegrate these research fields into a coherent perspective and propose a new direction that integrates brain science and robotics where the decoding of information from the brain, robot control based on the decoded information and multimodal feedback to the brain from the robot are carried out in real time and in a closed loop.
    Journal of The Royal Society Interface 03/2015; 12(104). DOI:10.1098/rsif.2014.1250 · 3.86 Impact Factor
  •
    ABSTRACT: In industrial process control, there may be multiple performance objectives, depending on salient features of the input-output data. To address this situation, this paper proposes multiple actor-critic structures to obtain the optimal control via input-output data for unknown nonlinear systems. The shunting inhibitory artificial neural network (SIANN) is used to classify the input-output data into one of several categories, and different performance measure functions may be defined for the disparate categories. The approximate dynamic programming algorithm, which contains a model module, a critic network, and an action network, is used to establish the optimal control in each category. A recurrent neural network (RNN) model is used to reconstruct the unknown system dynamics from input-output data, and neural networks are used to approximate the critic and action networks. It is proven that the model error and the states of the closed-loop unknown system are uniformly ultimately bounded. Simulation results demonstrate the performance of the proposed optimal control scheme for the unknown nonlinear system.
    IEEE transactions on neural networks and learning systems 02/2015; 26(4). DOI:10.1109/TNNLS.2015.2399020 · 4.37 Impact Factor
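A minimal sketch of the multiple actor-critic idea in the abstract above, under strong simplifying assumptions: a nearest-centroid classifier stands in for the SIANN, and each category keeps its own linear actor and critic with its own cost signal. None of the names, update rules, or parameters below are taken from the paper.

```python
import numpy as np

# Illustrative sketch only (not the paper's SIANN/ADP implementation):
# each sample is routed to a category by a simple classifier, and that
# category's actor-critic pair is updated under its own cost function.

class ActorCritic:
    def __init__(self, n_state, lr=0.1, gamma=0.95):
        self.wc = np.zeros(n_state)   # linear critic weights
        self.wa = np.zeros(n_state)   # linear actor weights
        self.lr, self.gamma = lr, gamma

    def update(self, s, s_next, cost):
        # TD-style critic update; the actor follows the critic's signal
        v, v_next = self.wc @ s, self.wc @ s_next
        td = cost + self.gamma * v_next - v
        self.wc += self.lr * td * s
        self.wa -= self.lr * td * s   # crude policy-improvement step

def classify(x, centroids):
    # stand-in for the SIANN classifier: nearest-centroid assignment
    distances = [np.linalg.norm(x - c) for c in centroids]
    return int(np.argmin(distances))
```

The point of the structure is that categories with different performance objectives never share parameters: routing a sample to its category selects which cost function and which actor-critic pair learn from it.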

