Conference Paper

Learning Kinematic Models for Articulated Objects

Conference: IJCAI 2009, Proceedings of the 21st International Joint Conference on Artificial Intelligence, Pasadena, California, USA, July 11-17, 2009
Source: DBLP


Robots operating in home environments must be able to interact with articulated objects such as doors or drawers. Ideally, robots are able to autonomously infer articulation models by observation. In this paper, we present an approach to learn kinematic models by inferring the connectivity of rigid parts and the articulation models for the corresponding links. Our method uses a mixture of parameterized and parameter-free (Gaussian process) representations and finds low-dimensional manifolds that provide the best explanation of the given observations. Our approach has been implemented and evaluated using real data obtained in various realistic home environment settings.
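The model-selection idea in the abstract — fit several candidate articulation models to observed part trajectories and keep the one that best explains the data — can be illustrated with a minimal 2D sketch. This is not the authors' implementation (which mixes parameterized and Gaussian-process models over 6-DoF pose observations); the function names, the residual-based selection, and the synthetic drawer/door trajectories are all illustrative assumptions:

```python
import numpy as np

def fit_prismatic(points):
    """Mean residual of the best line fit (PCA) -- prismatic joint candidate."""
    centered = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    direction = Vt[0]                      # dominant direction of motion
    proj = np.outer(centered @ direction, direction)
    return np.linalg.norm(centered - proj, axis=1).mean()

def fit_revolute(points):
    """Mean residual of an algebraic (Kasa) circle fit -- revolute joint candidate."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    sol, *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    cx, cy = sol[0] / 2, sol[1] / 2
    r = np.sqrt(max(sol[2] + cx**2 + cy**2, 0.0))
    return np.abs(np.hypot(x - cx, y - cy) - r).mean()

def select_model(points):
    """Keep the candidate model whose residual best explains the observations."""
    errors = {"prismatic": fit_prismatic(points),
              "revolute": fit_revolute(points)}
    return min(errors, key=errors.get)

# Drawer-like observations: the handle moves along a straight line.
t = np.linspace(0.0, 0.4, 30)
drawer = np.column_stack([t, np.full_like(t, 0.05)])
# Door-like observations: the handle sweeps a circular arc of radius 0.8 m.
ang = np.linspace(0.0, 0.9, 30)
door = 0.8 * np.column_stack([np.cos(ang), np.sin(ang)])

print(select_model(drawer))  # prismatic
print(select_model(door))    # revolute
```

The paper's approach generalizes this idea: where no low-dimensional parameterized model fits well, a parameter-free (Gaussian process) model over a learned low-dimensional manifold takes its place.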



Available from: Christian Plagemann
    • "Recently, kinematic evaluation of 3D articulated objects for general manipulation in unstructured environments has emerged as a new and exciting research topic in robotics. Sturm et al. [8], [9] presented a method that learns models of kinematic joints from the three-dimensional trajectories of a moving plane; motion is generated through deliberate interaction with the environment. "
    ABSTRACT: This research focuses on providing solutions for the kinematic evaluation of articulated rigid objects when only noisy data is available. Recent results have proposed techniques that necessitate learning stages or large relative motions. Our approach starts from a complete analytical solution which uses only point and line features to parametrize the rigid motion. This analytical solution is built using orthogonal dual tensors generated by a rigid basis of dual vectors. Next, the necessary transformations are added to the theoretical solution so it can be used with noisy input data. The computational procedure is then revealed, and several experiments are presented to underline the advantages of the proposed approach.
    18th International Conference on System Theory, Control and Computing; 10/2014
    • "We abort the opening motion if the robot exceeds an empirically determined force threshold. 3) Articulation Model Learning: To open the cabinets, we use a controller developed by Sturm et al. [16]. The controller assumes that the robot has already successfully grasped the handle of an articulated object and that a suitable initial pulling direction is known. "
    ABSTRACT: In this article we investigate the representation and acquisition of Semantic Object Maps (SOMs) that can serve as information resources for autonomous service robots performing everyday manipulation tasks in kitchen environments. These maps provide the robot with information about its operating environment that enables it to perform fetch and place tasks more efficiently and reliably. To this end, the semantic object maps can answer queries such as the following: “What do parts of the kitchen look like?”, “How can a container be opened and closed?”, “Where do objects of daily use belong?”, “What is inside of cupboards/drawers?”, etc. The semantic object maps presented in this article, which we call SOM+, extend the first generation of SOMs presented by Rusu et al. [1] in that the representation of SOM+ is designed more thoroughly and SOM+ also includes knowledge about the appearance and articulation of furniture objects. The acquisition methods for SOM+ also substantially advance those developed in [1] in that SOM+ maps are acquired autonomously and with low-cost (Kinect) sensors instead of very accurate (laser-based) 3D sensors. In addition, the perception methods are more general and are demonstrated to work in different kitchen environments.
    Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on; 10/2012
    • "In other robotics work, Sturm et al. [21] present an approach to learn kinematic models based on observations from a motion capture system that tracks the positions and orientations of rigid parts. A mixture of parameterized and parameter-free (Gaussian process) representations is used to detect the connectivity of the rigid parts of the objects and to find low-dimensional articulation models that best explain the given observations. "
    ABSTRACT: We present a method to recover complete 3D models of articulated objects. Structure-from-motion techniques are used to capture 3D point cloud models of the object in two different configurations. A novel combination of Procrustes analysis and RANSAC facilitates a straightforward geometric approach to recovering the joint axes, as well as classifying them automatically as either revolute or prismatic. With the resulting articulated model, a robotic system is able to manipulate the object along its joint axes at a specified grasp point in order to exercise its degrees of freedom. Because the models capture all sides of the object, they are occlusion-aware, enabling the robotic system to plan paths to parts of the object that are not visible in the current view. Our algorithm does not require prior knowledge of the object, nor does it make any assumptions about the planarity of the object or scene. Experiments with a PUMA 500 robotic arm demonstrate the effectiveness of the approach on a variety of objects with both revolute and prismatic joints.
    Proceedings - IEEE International Conference on Robotics and Automation 05/2012; DOI:10.1109/ICRA.2012.6224911
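The geometric core of the approach described in this last abstract — align two observed configurations of a part with Procrustes analysis, then classify the joint from the recovered rigid motion — can be sketched as below. This is a simplified illustration under stated assumptions, not the paper's code: it uses the standard SVD (Kabsch) Procrustes solution on noise-free matched points, and `classify_joint` with its `angle_eps` threshold is an assumed stand-in for the paper's RANSAC-robustified classification.

```python
import numpy as np

def relative_motion(P, Q):
    """Procrustes/Kabsch: rigid (R, t) with Q ~= P @ R.T + t for matched 3D points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def classify_joint(R, angle_eps=1e-3):
    """A joint is revolute if the part rotated between the two configurations."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    return "revolute" if angle > angle_eps else "prismatic"

def rotation_axis(R):
    """Axis of a (non-identity) rotation: the eigenvector of R for eigenvalue 1."""
    w, V = np.linalg.eig(R)
    axis = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return axis / np.linalg.norm(axis)

# One rigid part observed in two configurations.
rng = np.random.default_rng(0)
pts = rng.normal(size=(20, 3))
c, s = np.cos(0.5), np.sin(0.5)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # 0.5 rad about z
R1, _ = relative_motion(pts, pts @ Rz.T)             # door-like: part rotated
R2, _ = relative_motion(pts, pts + [0.3, 0.0, 0.0])  # drawer-like: part translated

print(classify_joint(R1), rotation_axis(R1))  # revolute, axis ~ [0, 0, ±1]
print(classify_joint(R2))                     # prismatic
```

With noisy point clouds from structure-from-motion, the least-squares alignment above would be wrapped in RANSAC, as the abstract describes, so that outlier correspondences do not corrupt the recovered axis.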