The eMOSAIC model for humanoid robot control

National Institute of Information and Communications Technology, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan.
Neural Networks: The Official Journal of the International Neural Network Society (Impact Factor: 2.71). 01/2012; 29-30:8-19. DOI: 10.1016/j.neunet.2012.01.002
Source: PubMed


In this study, we propose an extension of the MOSAIC architecture for controlling real humanoid robots. MOSAIC was originally proposed by neuroscientists to explain the human capacity for adaptive control. Its modular architecture is well suited to nonlinear and non-stationary control problems. Both humans and humanoid robots have nonlinear body dynamics and many degrees of freedom, and because they interact with their environments (e.g., by carrying objects), their control strategies must cope with non-stationary dynamics. MOSAIC therefore has strong potential both as a model of human motor control and as a control framework for humanoid robots. Until now, however, applications of the MOSAIC model have been limited to simple simulated dynamics, because it is susceptible to observation noise and cannot handle partially observable systems. Our approach introduces state estimators into the MOSAIC architecture to cope with real environments. Using the extended MOSAIC model, we successfully generate squatting and object-carrying behaviors on a real humanoid robot.
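The competing-modules idea behind MOSAIC can be sketched in a few lines. Everything below is invented for illustration: the one-dimensional dynamics, the gain values, and all names are assumptions, and the responsibility signal is written as a softmax over forward-model prediction errors, consistent with the original MOSAIC formulation but not a reproduction of the paper's implementation.

```python
import math

# Hedged sketch of MOSAIC: paired forward/inverse models compete,
# a softmax over forward-model prediction errors yields
# "responsibility" weights, and the inverse models' motor commands
# are blended by those weights. All dynamics and parameters below
# are invented for this illustration.

def responsibilities(pred_errors, sigma=0.1):
    """Softmax of negative squared prediction errors
    (MOSAIC's responsibility signal)."""
    scores = [math.exp(-e * e / (2.0 * sigma ** 2)) for e in pred_errors]
    total = sum(scores)
    return [s / total for s in scores]

def mosaic_command(x_prev, u_prev, x_now, x_target, modules):
    """Blend inverse-model commands by each forward model's accuracy.

    x_now stands in for the state estimate that the paper's extension
    obtains from a state estimator instead of raw, noisy observations.
    modules is a list of (forward_model, inverse_model) pairs.
    """
    errors = [abs(fwd(x_prev, u_prev) - x_now) for fwd, _ in modules]
    weights = responsibilities(errors)
    commands = [inv(x_now, x_target) for _, inv in modules]
    return sum(w * u for w, u in zip(weights, commands))

# Two invented contexts: an unloaded arm and one carrying a heavy object.
fwd_light = lambda x, u: x + 0.5 * u              # predicted next state
inv_light = lambda x, target: (target - x) / 0.5  # command to reach target
fwd_heavy = lambda x, u: x + 0.1 * u
inv_heavy = lambda x, target: (target - x) / 0.1
modules = [(fwd_light, inv_light), (fwd_heavy, inv_heavy)]

# The observed transition matches the "heavy" dynamics, so that
# module's responsibility dominates the blended command.
u = mosaic_command(0.0, 1.0, 0.1, 1.0, modules)
```

The point of the blend is graceful switching: when the load changes mid-task, responsibility shifts continuously to the module whose forward model starts predicting well, without any explicit context label.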



Available from: Norikazu Sugimoto, Jul 01, 2015
    • "The MOSAIC architecture [8]–[10], on the other hand, assumes that perceptual cues are available to guide correct context estimation early in the learning process, which in turn allows the perceived data points to be assigned successfully to a predefined number of models. This, however, can be a rather optimistic assumption: not only must the number of models be known beforehand, but very domain-specific information must also be gathered to build the functions relating perceptual cues to specific contexts. "
    ABSTRACT: Accurate dynamic models can be very difficult to compute analytically for complex robots; moreover, using a pre-computed fixed model does not allow the system to cope with unexpected changes. An interesting alternative is to learn such models from data and keep them up to date through online adaptation. In this paper we consider the problem of learning the robot inverse dynamic model under dynamically varying contexts: the robot learns the model incrementally and autonomously under different conditions, represented by the manipulation of objects of different weights that change the dynamics of the system. The inverse dynamic mapping is modeled as a multi-valued function, in which different outputs for the same input query correspond to different dynamic contexts (i.e., different manipulated objects). The mapping is estimated using IMLE, a recent online learning algorithm for multi-valued regression, and used for Computed Torque control. No information is given about context switches during either learning or control, nor is any assumption made about the kind of variation in the dynamics imposed by a new context. Experimental results with the iCub humanoid robot are provided.
    Full-text · Conference Paper · Oct 2014
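The computed-torque scheme that the abstract above describes can be sketched as follows. The cited work learns the inverse dynamics online with IMLE; in this sketch a hand-written single-link model with invented inertia and friction values stands in for the learned regressor, and the gain values are also assumptions.

```python
# Hedged sketch of computed-torque control driven by a learned
# inverse-dynamics model. The real system uses IMLE as the learned
# mapping tau = f(q, qd, qdd); a fixed 1-DOF model stands in here.

def reference_accel(q, qd, q_des, qd_des, qdd_des, kp=100.0, kd=20.0):
    """PD-stabilized reference acceleration (gains are invented)."""
    return qdd_des + kd * (qd_des - qd) + kp * (q_des - q)

def learned_inverse_dynamics(q, qd, qdd_ref):
    """Stand-in for the learned model: a single link with inertia
    and viscous friction (both parameter values are invented)."""
    inertia, friction = 2.0, 0.5
    return inertia * qdd_ref + friction * qd

def computed_torque(q, qd, q_des, qd_des, qdd_des):
    """Feed the PD-stabilized reference acceleration through the
    (learned) inverse dynamics to obtain the joint torque."""
    qdd_ref = reference_accel(q, qd, q_des, qd_des, qdd_des)
    return learned_inverse_dynamics(q, qd, qdd_ref)
```

The design choice that matters for the multi-context setting is that only `learned_inverse_dynamics` changes with the manipulated object; the outer computed-torque law stays fixed while the learned mapping supplies context-appropriate torques.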
    • "Currently, much attention is being paid to endowing robots with human-like movement features, under the premise that humans will collaborate better with robots that move like humans. Indeed, some progress has been made in implementing human-like movements in robots (Schaal, 2007; Sugimoto et al., 2012). Although the movements of most current robots are still a caricature of human movements, attesting to the difficulty of imitating us, a promising approach appears to be the application of movement primitives extracted from human subjects (for instance, by means of principal component analysis) to transfer the features of human movement to a robot (Choe et al., 2007; Moro et al., 2012). "
    ABSTRACT: In the future, human-like robots will live among people to provide company and to help carry out tasks in cooperation with humans. These interactions require that robots understand not only human actions, but also the way in which we perceive the world. Human perception relies heavily on the time dimension, especially when it comes to processing visual motion. Critically, human time perception for dynamic events is often inaccurate. Robots interacting with humans may want to see the world and tell time the way humans do: if so, they must incorporate human-like fallibility. Observers asked to judge the duration of brief scenes are prone to errors: perceived duration often does not match the physical duration of the event. Several kinds of temporal distortion have been described in the specialized literature. Here we review the topic with special emphasis on our work dealing with the time perception of animate versus inanimate actors. This work shows the existence of specialized time bases for different categories of targets. The time base used by the human brain to process visual motion appears to be calibrated against specific predictions regarding the motion of human figures in the case of animate motion, and against predictions of the motion of passive objects in the case of inanimate motion. Human perception of time thus appears to be strictly linked with the mechanisms used to control movement: neural time can be entrained by external cues in a similar manner for both perceptual judgments of elapsed time and motor control tasks. One possible strategy would be to implement in humanoids a unified architecture for dealing with time, applying the same specialized mechanisms to both perception and action, as humans do. This shared implementation might make humanoids more acceptable to humans, thus facilitating reciprocal interactions.
    Full-text · Article · Jan 2014 · Frontiers in Neurorobotics
    ABSTRACT: Autonomy and flexibility are two major requirements for modern robots. In particular, humanoid robots should learn new skills incrementally through autonomous exploration, and adapt to different contexts. In this paper we consider the problem of learning forward models for task space control under dynamically varying kinematic contexts: the robot learns incrementally and autonomously its forward kinematics under different contexts, represented by the inclusion of different tools, and exploits the learned model to realize reaching with those tools. We model the forward kinematics as a multi-valued function, in which different outputs for the same input query are related to different tools (i.e. contexts). The model is estimated using IMLE, a recent online learning algorithm for multi-valued regression, and used for control. No information is given about the tool changes, nor any assumption is made about the tool kinematics. Results are provided both in simulation and with a full-body humanoid. In the latter case we show how the robot successfully performs reaching using a flexible tool, a clear example of complex kinematics.
    Full-text · Article · Sep 2013
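Both IMLE-based entries above hinge on treating the learned mapping as a multi-valued function: the same input can legitimately have several outputs, one per (unlabeled) context. The toy learner below illustrates only that idea; IMLE itself is a far more sophisticated online probabilistic mixture of local linear experts, and the gating threshold, the per-branch update rule, and the training data here are all invented.

```python
# Toy illustration of a multi-valued mapping: the same input x can map
# to different outputs y depending on an unobserved context (e.g., the
# attached tool). The learner keeps one local linear model per branch
# and returns every branch's prediction instead of averaging them.
# All names and parameters are invented; this is not IMLE.

class MultiValuedMap:
    def __init__(self, gate=0.5):
        self.branches = []  # one (slope, intercept) per discovered context
        self.gate = gate    # spawn a new branch if no model fits a sample

    def update(self, x, y, lr=0.2):
        """Assign the sample to the closest-predicting branch,
        or spawn a new branch when none predicts it well."""
        if self.branches:
            errs = [abs(a * x + b - y) for a, b in self.branches]
            i = min(range(len(errs)), key=errs.__getitem__)
            if errs[i] < self.gate:
                a, b = self.branches[i]
                e = y - (a * x + b)
                self.branches[i] = (a + lr * e * x, b + lr * e)
                return
        self.branches.append((0.0, y))  # crude init: flat line through y

    def predict_all(self, x):
        """Return one prediction per branch -- a multi-valued answer."""
        return [a * x + b for a, b in self.branches]

# Two invented contexts producing different outputs for the same inputs.
m = MultiValuedMap()
for x in [0.1, 0.2, 0.3]:
    m.update(x, 1.0)    # context A (e.g., no tool)
for x in [0.1, 0.2, 0.3]:
    m.update(x, -1.0)   # context B (e.g., long tool)
```

Note that no context labels are given during training, matching the papers' setting: the branching structure is discovered from the data alone, and control code can later pick among the returned predictions.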