Conference Paper

Learning Kinematic Models for Articulated Objects.

Conference: IJCAI 2009, Proceedings of the 21st International Joint Conference on Artificial Intelligence, Pasadena, California, USA, July 11-17, 2009
Source: DBLP

ABSTRACT

Robots operating in home environments must be able to interact with articulated objects such as doors or drawers. Ideally, robots are able to autonomously infer articulation models by observation. In this paper, we present an approach to learn kinematic models by inferring the connectivity of rigid parts and the articulation models for the corresponding links. Our method uses a mixture of parameterized and parameter-free (Gaussian process) representations and finds low-dimensional manifolds that provide the best explanation of the given observations. Our approach has been implemented and evaluated using real data obtained in various realistic home environment settings.
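
The abstract above describes choosing, for each link, the articulation model that best explains the observed motion. As a rough, hedged sketch of that model-selection idea (not the authors' implementation; the 2D simplification, the candidate models, the Gaussian noise assumption, and the BIC criterion are illustrative assumptions), the snippet below fits a prismatic (line) and a revolute (circle) model to noisy handle positions and keeps the one with the lower BIC score.

```python
# Minimal sketch of selecting an articulation model for one link from observed
# 2D handle positions. Candidate models, noise assumptions, and the BIC-based
# selection are illustrative assumptions, not the paper's implementation.
import numpy as np

def fit_prismatic(X):
    """Fit a line (prismatic joint); return sum of squared residuals and a parameter count."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean)
    direction = Vt[0]                       # dominant direction of motion
    residuals = (X - mean) - np.outer((X - mean) @ direction, direction)
    return np.sum(residuals ** 2), 4        # illustrative parameter count: origin + direction

def fit_revolute(X):
    """Fit a circle (revolute joint) via the algebraic (Kasa) method."""
    A = np.column_stack([2 * X, np.ones(len(X))])
    b = np.sum(X ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:2]
    radius = np.sqrt(sol[2] + center @ center)
    residuals = np.linalg.norm(X - center, axis=1) - radius
    return np.sum(residuals ** 2), 3        # illustrative parameter count: center + radius

def bic(sse, k, n):
    """Bayesian information criterion under an i.i.d. Gaussian noise assumption."""
    return n * np.log(max(sse, 1e-12) / n) + k * np.log(n)

# Example: noisy observations of a handle moving on a circular arc (door-like motion).
rng = np.random.default_rng(0)
angles = np.linspace(0.0, 0.8, 50)
X = np.column_stack([np.cos(angles), np.sin(angles)]) + 0.01 * rng.standard_normal((50, 2))

scores = {name: bic(*fit(X), len(X)) for name, fit in
          [("prismatic", fit_prismatic), ("revolute", fit_revolute)]}
print(min(scores, key=scores.get))          # lower BIC wins; "revolute" for this arc-shaped data
```

In the paper's setting the candidate set also includes a parameter-free Gaussian-process model for motions that no parameterized template explains well; the two templates above are only the simplest instances of the idea.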

CITED BY

    • "Recently, kinematic evaluation of 3D articulated objects for general manipulation in unstructured environments emerged as a new and exciting research topic in robotics. Sturm et al. [8], [9] presented a method that learns models of kinematic joints from three-dimensional trajectories of a moving plane. Motion is generated from deliberate interaction with the environment. "
    ABSTRACT: This research is focused on providing solutions for the kinematic evaluation of articulated rigid objects when noisy data are available. Recent results have proposed different techniques which necessitate learning stages or large relative motions. Our approach starts from a complete analytical solution which uses only point and line features to parametrize the rigid motion. This analytical solution is built using orthogonal dual tensors generated by a rigid basis of dual vectors. Next, the necessary transformations are added to the theoretical solution so that it can be used with noisy data input. Thus, the computational procedure is described, and different experiments are presented in order to underline the advantages of the proposed approach.
    Full-text · Conference Paper · Oct 2014
    • "We compute P(T n |β) using the assumptions detailed inSturm et al. (2009), which include assuming that the measurements of the points along the trajectory of the handle, "
    ABSTRACT: Based on a lifetime of experience, people anticipate the forces associated with performing a manipulation task. In contrast, most robots lack common sense about the forces involved in everyday manipulation tasks. In this paper, we present data-driven methods to inform robots about the forces that they are likely to encounter when performing specific tasks. In the context of door opening, we demonstrate that data-driven object-centric models can be used to haptically recognize specific doors, haptically recognize classes of door (e.g., refrigerator vs. kitchen cabinet), and haptically detect anomalous forces while opening a door, even when opening a specific door for the first time. We also demonstrate that two distinct robots can use forces captured from people opening doors to better detect anomalous forces. These results illustrate the potential for robots to use shared databases of forces to better manipulate the world and attain common sense about everyday forces.
    Preview · Article · Oct 2013 · Autonomous Robots
    • "we abort opening motion if the robot exceeds an empirically determined force threshold. 3) Articulation Model Learning: To open the cabinets we use a controller developed by Sturm et al. [16]. The controller assumes that the robot has already successfully grasped the handle of an articulated object and that a suitable initial pulling direction is known. "
    ABSTRACT: In this article we investigate the representation and acquisition of Semantic Object Maps (SOMs) that can serve as information resources for autonomous service robots performing everyday manipulation tasks in kitchen environments. These maps provide the robot with information about its operation environment that enables it to perform fetch and place tasks more efficiently and reliably. To this end, the semantic object maps can answer queries such as the following ones: “What do parts of the kitchen look like?”, “How can a container be opened and closed?”, “Where do objects of daily use belong?”, “What is inside of cupboards/drawers?”, etc. The semantic object maps presented in this article, which we call SOM+, extend the first generation of SOMs presented by Rusu et al. [1] in that the representation of SOM+ is designed more thoroughly and in that SOM+ also include knowledge about the appearance and articulation of furniture objects. Also, the acquisition methods for SOM+ substantially advance those developed in [1] in that SOM+ are acquired autonomously and with low-cost (Kinect) sensors instead of very accurate (laser-based) 3D sensors. In addition, the perception methods are more general and are demonstrated to work in different kitchen environments.
    Preview · Conference Paper · Oct 2012
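
One of the excerpts above refers to a trajectory likelihood P(T_n | β) evaluated under an articulation model. Below is a minimal sketch of how such a term could be computed, assuming a revolute model parameterized by a center and radius and independent isotropic Gaussian noise on the observed handle positions; these assumptions are illustrative and not the cited papers' exact formulation.

```python
# Hedged sketch: log-likelihood of an observed handle trajectory under a
# revolute-joint model with isotropic Gaussian observation noise. The model
# parametrization (center, radius, noise sigma) is an illustrative assumption.
import numpy as np

def log_likelihood_revolute(trajectory, center, radius, sigma=0.01):
    """log P(T | beta): each observed point is assumed to lie near the circle
    of the revolute model, perturbed by zero-mean Gaussian noise."""
    trajectory = np.asarray(trajectory)
    radial_error = np.linalg.norm(trajectory - center, axis=1) - radius
    n = len(trajectory)
    return -0.5 * np.sum(radial_error ** 2) / sigma ** 2 \
           - n * np.log(sigma * np.sqrt(2.0 * np.pi))

# Example: a door-like arc scored against two candidate parameter sets;
# the correct center yields a noticeably higher log-likelihood.
angles = np.linspace(0.0, 0.5, 20)
arc = np.column_stack([0.8 * np.cos(angles), 0.8 * np.sin(angles)])
print(log_likelihood_revolute(arc, center=np.array([0.0, 0.0]), radius=0.8))
print(log_likelihood_revolute(arc, center=np.array([0.3, 0.0]), radius=0.8))
```

In practice such a likelihood would be evaluated over full 3D poses and combined with a prior over model parameters; the 2D form above is only meant to make the structure of the term concrete.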