Conference Paper

Bridging the Gap between Task Planning and Path Planning

Institute of Robotics and Mechatronics, German Aerospace Center (DLR), Oberpfaffenhofen
DOI: 10.1109/IROS.2006.282087 · Conference: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Source: DLR

ABSTRACT Autonomous service robots have to recognize and interpret their environment to be able to interact with it. This paper focuses on service tasks, such as serving a glass of water, where a humanoid two-arm system has to acquire an object from the scene. A task planner should be able to autonomously discern the actions necessary to solve the task. In the process, a path planner can be used to compute motion sequences to execute these actions. To plan trajectories, the path planner requires a pair of configurations, the start and the goal configuration of the robot, to be provided, e.g., by a task planner. This paper proposes a method to autonomously find the goal configurations necessary to acquire objects from the scene and thus makes an attempt to bridge the gap between task planning and path planning. The method determines where to grasp an object by analyzing the scene and the influence of obstacles on the intended grasp location. For the case where the goal object cannot be grasped due to obstructing obstacles, a solution is proposed.
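To make the described interface concrete, the following is a minimal sketch (not the authors' implementation) of the pipeline the abstract outlines: candidate grasps on the goal object are scored against the obstacles around them, the best grasp is mapped to a goal configuration, and the (start, goal) pair is handed to a path planner. All class names, the clearance-based scoring, and the placeholder inverse kinematics are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of bridging task and
# path planning: pick a grasp from scene analysis, derive a goal
# configuration, and hand (start, goal) to a path planner.

from dataclasses import dataclass
from typing import List, Optional, Sequence


@dataclass
class Grasp:
    approach_direction: tuple   # unit vector toward the object (illustrative)
    clearance: float            # free space obstacles leave around the grasp [m]


@dataclass
class Scene:
    target_grasps: List[Grasp]  # candidate grasps on the goal object
    # obstacles would normally be stored here as well


def select_goal_grasp(scene: Scene, min_clearance: float = 0.05) -> Optional[Grasp]:
    """Pick the candidate grasp least obstructed by obstacles.

    Stand-in for the paper's scene analysis: each grasp is scored by the
    clearance obstacles leave around it; grasps below a threshold are rejected.
    """
    feasible = [g for g in scene.target_grasps if g.clearance >= min_clearance]
    if not feasible:
        return None  # object blocked: obstacles would have to be handled first
    return max(feasible, key=lambda g: g.clearance)


def goal_configuration(grasp: Grasp) -> Sequence[float]:
    """Map the chosen grasp to a robot configuration (placeholder for IK)."""
    return (0.0,) * 7


def bridge(start_config: Sequence[float], scene: Scene):
    """Bridge task and path planning: choose grasp, derive goal, return the pair."""
    grasp = select_goal_grasp(scene)
    if grasp is None:
        raise RuntimeError("goal object obstructed; plan obstacle removal first")
    goal_config = goal_configuration(grasp)
    # this (start, goal) pair is exactly what a sampling-based path planner needs
    return start_config, goal_config


if __name__ == "__main__":
    scene = Scene(target_grasps=[Grasp((1, 0, 0), 0.02), Grasp((0, 1, 0), 0.12)])
    print(bridge((0.0,) * 7, scene))
```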

Related publications:

  • ABSTRACT: Mobile robots such as explorer rovers need task and path planning abilities in order to fulfill their assigned missions: path planning to plan their movements and task planning to plan their actions. The coupling between these two kinds of planning presents open issues, such as the description of the environment and the consideration of geometric constraints that must be verified in order to act and move during an action. This paper addresses these issues by proposing an architecture in which a hierarchical task planner sends requests to a path planner in order to check the feasibility of actions. The requirements allowing the path planner to produce an answer are presented, as well as the description of the planning operators. Finally, we specify the mechanism and the communication language by which the task planner produces requests and takes the answers into account.
    AIP Conference Proceedings. 06/2008; 1019(1):162-167.
  • ABSTRACT: In this work we introduce a novel approach for robot grasp planning. The proposed method combines the benefits of programming by human demonstration for teaching appropriate grasps with those of automatic 3D shape segmentation for object recognition and semantic modeling. The work is motivated by important studies on human manipulation suggesting that when an object is perceived for grasping, it is first parsed into its constituent parts. Following these findings, we present a manipulation planning system capable of grasping objects by their parts, which learns new tasks from human demonstration. The central advantage over previous approaches is the use of a topological method for shape segmentation, enabling both object retrieval and part-based grasp planning according to the affordances of an object. Manipulation tasks are demonstrated in a virtual-reality environment using a data glove. After the learning phase, each task is planned and executed in a robot environment that is able to generalize to similar, but previously unknown, objects.
    2011 IEEE International Conference on Robotics and Automation (ICRA); 06/2011
  • ABSTRACT: Humans have at some point learned an abstraction of the capabilities of their arms. By just looking at the scene, they can decide which places or objects they can easily reach and which are difficult to approach. Possessing a similar abstraction of a robot arm's capabilities in its workspace is important for grasp planners, path planners, and task planners. In this paper, we show that robot arm capabilities manifest themselves as directional structures specific to workspace regions. We introduce a representation scheme that makes it possible to visualize and inspect these directional structures. The directional structures are then captured in the form of a map, which we name the capability map. Using this capability map, a manipulator is able to deduce places that are easy to reach. Furthermore, a manipulator can either transport an object to a place where versatile manipulation is possible, or a mobile manipulator or humanoid torso can position itself to enable optimal manipulation of an object. (A minimal illustrative sketch of such a capability map follows this list.)
    2007 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 12/2007
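As an illustration of the capability-map idea summarized in the last item above, here is a minimal sketch (assumptions only, not the IROS 2007 implementation): the workspace is discretized into voxels, each storing the fraction of sampled approach directions from which the arm could reach that region, so a planner can query whether a location allows versatile manipulation.

```python
# Illustrative capability-map-style lookup (assumed structure, not the
# published implementation): the workspace is discretized into voxels and
# each voxel stores the fraction of sampled directions the arm can reach.

import numpy as np


class CapabilityMap:
    def __init__(self, shape=(10, 10, 10), voxel_size=0.1, origin=(0.0, 0.0, 0.0)):
        self.reachability = np.zeros(shape)   # value in [0, 1] per voxel
        self.voxel_size = voxel_size
        self.origin = np.asarray(origin)

    def voxel_of(self, point):
        """Map a Cartesian point to voxel indices."""
        return tuple(((np.asarray(point) - self.origin) // self.voxel_size).astype(int))

    def record(self, point, reached_fraction):
        """Store the fraction of sampled approach directions reachable here."""
        self.reachability[self.voxel_of(point)] = reached_fraction

    def easy_to_reach(self, point, threshold=0.5):
        """True if versatile manipulation is expected around this point."""
        return self.reachability[self.voxel_of(point)] >= threshold


# usage: a planner could query the map before committing to a grasp location
cmap = CapabilityMap()
cmap.record((0.45, 0.25, 0.85), reached_fraction=0.8)
print(cmap.easy_to_reach((0.45, 0.25, 0.85)))
```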
