Learning Kinematic Models for Articulated Objects.
ABSTRACT: Robots operating in home environments must be able to interact with articulated objects such as doors or drawers. Ideally, robots are able to autonomously infer articulation models by observation. In this paper, we present an approach to learn kinematic models by inferring the connectivity of rigid parts and the articulation models for the corresponding links. Our method uses a mixture of parameterized and parameter-free (Gaussian process) representations and finds low-dimensional manifolds that provide the best explanation of the given observations. Our approach has been implemented and evaluated using real data obtained in various realistic home environment settings.
Jürgen Sturm1, Cyrill Stachniss1, Vijay Pradeep2, Christian Plagemann3,
Kurt Konolige2, Wolfram Burgard1
1Univ. of Freiburg, Dept. of Computer Science, D-79110 Freiburg, Germany
2Willow Garage, Inc., 68 Willow Road, Menlo Park, CA 94025
3Stanford University, CS Dept., 353 Serra Mall, Stanford, CA 94305-9010
Topic: estimation, prediction
Oral presentation or poster presentation
Home environments are envisioned as one of the key application areas for service robots. Robots operating in such environments are typically faced with a variety of objects they have to deal with or manipulate to fulfill a given task. Many objects are not rigid since they have moving parts such as drawers or doors.
Understanding the spatial movements of parts of such objects is essential for service robots to allow them
to plan relevant actions such as door-opening trajectories. Ideally, robots are able to autonomously infer
these articulation models by observation. In this work, we therefore investigate the problem of learning
kinematic models of articulated objects from observations. As an illustrative example, consider the three images on the left of Figure 1, which depict two example observations of the door of a microwave oven together with a learned, one-dimensional description of the door motion.
Our problem can be formulated as follows: given a sequence of rigid body poses from observed object parts, learn a compact kinematic model describing the whole articulated object. This kinematic model has to define (i) which parts are connected, (ii) the dimensionality of the latent (not observed) actuation space of the object, and (iii) a kinematic function between different body parts in a generative way that also allows a robot to reason about unseen configurations. Our approach is related to the recent work of Katz et al. [1], who learn planar kinematic models for articulated objects such as scissors by manipulating the object, as well as to the work of Yan and Pollefeys [4], who present an approach for learning the structure of an articulated object from feature trajectories under affine projections.
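The selection among candidate link models in (iii) can be sketched with a small example: given 2-D positions of one part observed relative to another, fit each parametric template and keep the one with the lowest residual. This is a minimal sketch under simplifying assumptions (2-D observations, least-squares residual as the selection score, illustrative function names); the actual method also scores non-parametric candidates and works on full rigid body poses.

```python
import numpy as np

def fit_prismatic(points):
    """Fit a line (prismatic joint) via PCA; return mean squared residual."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]                                   # dominant direction
    residual = centered - np.outer(centered @ direction, direction)
    return np.mean(np.sum(residual ** 2, axis=1))

def fit_revolute(points):
    """Fit a circle (revolute joint) via algebraic least squares (Kasa fit)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = sol[0] / 2.0, sol[1] / 2.0                 # circle center
    r = np.sqrt(sol[2] + cx ** 2 + cy ** 2)             # circle radius
    radii = np.hypot(x - cx, y - cy)
    return np.mean((radii - r) ** 2)

def select_model(points):
    """Return the name of the template with the lowest residual."""
    errors = {"prismatic": fit_prismatic(points),
              "revolute": fit_revolute(points)}
    return min(errors, key=errors.get), errors
```

On a drawer-like linear trajectory the prismatic template wins; on a door-like circular arc the revolute template wins.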
The contribution of this work is a novel approach for learning actuation models based on observations only. Our method robustly detects the connectivity of the rigid parts of the object and estimates accurate articulation models from a candidate set. Our approach selects the best model among parametric, expert-designed transformation templates (rotational and prismatic models) and non-parametric transformations that are learned from scratch with minimal prior assumptions. To obtain a parameter-free description, we apply Gaussian processes [2] as a non-parametric regression technique to learn flexible and accurate models. To find a low-dimensional description of the moving parts, we furthermore apply locally linear embedding [3], a non-linear dimensionality reduction technique.
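As a minimal illustration of the non-parametric part, the sketch below regresses a 2-D pose trajectory on a scalar latent configuration q using a from-scratch Gaussian process posterior mean with a squared-exponential kernel. The kernel hyperparameters, noise level, and the parabolic training trajectory (a motion that neither a prismatic nor a rotational template explains) are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.2, signal_var=1.0):
    """Squared-exponential kernel between two 1-D input vectors."""
    d = a[:, None] - b[None, :]
    return signal_var * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_fit_predict(q_train, y_train, q_test, noise_var=1e-4):
    """GP posterior mean: predict poses y at test configurations q_test."""
    K = rbf_kernel(q_train, q_train) + noise_var * np.eye(len(q_train))
    alpha = np.linalg.solve(K, y_train)            # (n, d) weight vectors
    return rbf_kernel(q_test, q_train) @ alpha     # posterior mean, (m, d)

# Hypothetical usage: a parabolic 2-D trajectory parameterized by q.
q = np.linspace(0.0, 1.0, 15)
poses = np.column_stack([q, q ** 2])
pred = gp_fit_predict(q, poses, np.array([0.5]))
```

The posterior mean interpolates the observed poses smoothly, so the robot can query poses for configurations it has not observed.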
We implemented our approach on a real robot and tested it by estimating models of different objects, including the door of a microwave oven (see Figure 1, left), a cabinet with drawers (see Figure 1, right), a garage door (see Figure 2), and a table moved on the ground plane (see Figure 3). Our technique learns accurate models for a variety of articulated objects. We regard this as an important step towards autonomous robots that understand and actively handle objects in their environment.
Figure 1: Left: examples for observations of a moving door of a microwave oven and an illustration of
the learned 1-dimensional kinematic model. Right: Illustrations of the latent action variables (using the
arrows) for a cabinet with two drawers during learning.
Figure 2: Illustration of the motion of a garage door predicted by our non-parametric model. The door runs in a horizontal rail on the ceiling and in a vertical rail on the right. This results in a movement that cannot be described by a prismatic or rotational model but is captured by our non-parametric model. From left to right: illustration of the model while integrating new observations.
Figure 3: Four steps while learning a model of a table that is moved on the ground plane. The arrows
indicate the latent action variable of the currently selected model. Finally, our non-parametric model with two degrees of freedom explains the movements best.
[1] D. Katz, Y. Pyuro, and O. Brock. Learning to manipulate articulated objects in unstructured environments using a grounded relational representation. In Robotics: Science and Systems, 2008.
[2] C.E. Rasmussen and C.K.I. Williams. Gaussian Processes for Machine Learning. The MIT Press, Cambridge, MA, 2006.
[3] S.T. Roweis and L.K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
[4] J. Yan and M. Pollefeys. Automatic kinematic chain building from feature trajectories of articulated objects. In CVPR, 2006.