Learning Kinematic Models for Articulated Objects
Jürgen Sturm1, Cyrill Stachniss1, Vijay Pradeep2, Christian Plagemann3,
Kurt Konolige2, Wolfram Burgard1
1Univ. of Freiburg, Dept. of Computer Science, D-79110 Freiburg, Germany
2Willow Garage, Inc., 68 Willow Road, Menlo Park, CA 94025
3Stanford University, CS Dept., 353 Serra Mall, Stanford, CA 94305-9010
Topic: estimation, prediction
Oral presentation or poster presentation
Home environments are envisioned as one of the key application areas for service robots. Robots operating
in such environments are typically faced with a variety of objects that they have to deal with or manipulate
to fulfill a given task. Many objects are not rigid, since they have moving parts such as drawers or doors.
Understanding the spatial movements of parts of such objects is essential for service robots to allow them
to plan relevant actions such as door-opening trajectories. Ideally, robots are able to autonomously infer
these articulation models by observation. In this work, we therefore investigate the problem of learning
kinematic models of articulated objects from observations. As an illustrative example, consider the three
images on the left of Figure 1, which depict two example observations of the door of a microwave oven and
a learned, one-dimensional description of the door motion.
Our problem can be formulated as follows: Given a sequence of rigid body poses of observed object
parts, learn a compact kinematic model that describes the whole articulated object. This kinematic model has
to define (i) which parts are connected, (ii) the dimensionality of the latent (not observed) actuation space of
the object, and (iii) a kinematic function between different body parts in a generative way allowing a robot
to reason also about unseen configurations. Our approach is related to the recent work of Katz et al. [1], who
learn planar kinematic models for articulated objects such as scissors by manipulating the object, as well as
to the work of Yan and Pollefeys [4], who present an approach for learning the structure of an articulated
object from feature trajectories under affine projections.
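Requirement (i) above, deciding which parts are connected, can be illustrated with a small sketch. This is not the authors' implementation; it assumes toy planar poses (x, y, theta) and flags a pair of parts as rigidly linked when their relative transform stays constant over the observation sequence:

```python
import numpy as np

def relative_poses(poses_a, poses_b):
    """Relative planar transforms of part b expressed in the frame of part a.

    Poses are (x, y, theta) tuples; this is a toy representation, whereas
    the actual system works with full rigid-body poses.
    """
    rel = []
    for (xa, ya, ta), (xb, yb, tb) in zip(poses_a, poses_b):
        c, s = np.cos(-ta), np.sin(-ta)
        dx, dy = xb - xa, yb - ya
        rel.append((c * dx - s * dy, s * dx + c * dy, tb - ta))
    return np.array(rel)

def rigidly_linked(poses_a, poses_b, var_threshold=1e-3):
    """Flag two parts as rigidly connected when their relative transform
    is (nearly) constant across the whole observation sequence."""
    rel = relative_poses(poses_a, poses_b)
    return bool(rel.var(axis=0).max() < var_threshold)
```

Articulated links, such as a door on its hinge, instead produce relative transforms that vary along a low-dimensional path, which is what the candidate articulation models have to capture.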
The contribution of this work is a novel approach for learning actuation models based on observations
only. Our method is able to robustly detect the connectivity of the rigid parts of the object and to estimate
accurate articulation models from a candidate set. Our approach allows for selecting the best model
among parametric, expert-designed transformation templates (rotational and prismatic models) and non-
parametric transformations that are learned from scratch with minimal prior assumptions. To obtain
a parameter-free description, we apply Gaussian processes [2] as a non-parametric regression technique
to learn flexible and accurate models. To find the low-dimensional description of the moving parts, we
furthermore apply locally linear embedding [3], a non-linear dimensionality reduction technique.
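As a hedged illustration of selecting among the parametric templates (not the authors' code), the sketch below fits a prismatic model (a line, via PCA) and a rotational model (a circle, via an algebraic Kasa least-squares fit) to the 2D trajectory of a single part, and picks the template with the smaller residual; the system described above instead uses prediction accuracy on held-out data:

```python
import numpy as np

def prismatic_error(points):
    """Fit a line (prismatic joint) to 2D points via PCA; RMS residual."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    # Residual component orthogonal to the principal direction vt[0].
    resid = centered - np.outer(centered @ vt[0], vt[0])
    return float(np.sqrt((resid ** 2).sum(axis=1).mean()))

def rotational_error(points):
    """Fit a circle (rotational joint) with an algebraic least-squares
    (Kasa) fit; RMS radial residual."""
    x, y = points[:, 0], points[:, 1]
    # Linearized circle equation: x^2 + y^2 = 2*cx*x + 2*cy*y + c.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(max(c + cx ** 2 + cy ** 2, 0.0))
    radial = np.hypot(x - cx, y - cy) - r
    return float(np.sqrt((radial ** 2).mean()))

def select_template(points):
    """Return the candidate template with the smaller fit error."""
    errors = {"prismatic": prismatic_error(points),
              "rotational": rotational_error(points)}
    return min(errors, key=errors.get)
```

A non-parametric regression model would be added to the same candidate set as a fallback for trajectories, such as the garage door below, that neither template explains.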
We implemented our approach on a real robot and tested it by estimating models of different objects,
including a door of a microwave oven (see Figure 1 left), a cabinet with drawers (see Figure 1 right), a
garage door (see Figure 2), and a table moved on the ground plane (see Figure 3). Our technique makes it
possible to learn accurate models for different articulated objects. We regard this as an important step towards
autonomous robots that understand and actively handle objects in their environment.
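To illustrate the non-parametric component in isolation (a minimal numpy sketch under simplifying assumptions, not the system evaluated above), Gaussian process regression with a squared-exponential kernel can map a scalar latent configuration q to one observed pose coordinate:

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, variance=1.0):
    """Squared-exponential kernel on 1-D inputs."""
    d = np.asarray(a)[:, None] - np.asarray(b)[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(q_train, y_train, q_test, noise=1e-4):
    """Posterior mean of a GP mapping a scalar latent configuration q
    to one observed pose coordinate y."""
    K = rbf_kernel(q_train, q_train) + noise * np.eye(len(q_train))
    Ks = rbf_kernel(q_test, q_train)
    return Ks @ np.linalg.solve(K, np.asarray(y_train))
```

In the full system, one such regression per output dimension (together with a low-dimensional latent space) yields a generative model of the part's motion.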
Figure 1: Left: example observations of the moving door of a microwave oven and an illustration of
the learned one-dimensional kinematic model. Right: illustrations of the latent action variables (indicated
by arrows) for a cabinet with two drawers during learning.
Figure 2: Illustration of the motion of a garage door predicted by our non-parametric model. The door runs
in a horizontal rail on the ceiling and in a vertical rail on the right. This results in a movement that cannot
be described by a prismatic or a rotational model, but that is captured by our non-parametric model. From
left to right: illustration of the model while integrating new observations.
Figure 3: Four steps while learning a model of a table that is moved on the ground plane. The arrows
indicate the latent action variable of the currently selected model. Finally, our non-parametric model with
two degrees of freedom explains the movements best.
[1] D. Katz, Y. Pyuro, and O. Brock. Learning to manipulate articulated objects in unstructured environments using a grounded relational representation. In Robotics: Science and Systems, 2008.
[2] C.E. Rasmussen and C.K.I. Williams. Gaussian Processes for Machine Learning. The MIT Press, Cambridge, MA, 2006.
[3] S.T. Roweis and L.K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
[4] J. Yan and M. Pollefeys. Automatic kinematic chain building from feature trajectories of articulated objects. In CVPR, 2006.