Conference Paper

From Structured English to Robot Motion

Univ. of Pennsylvania, Philadelphia
DOI: 10.1109/IROS.2007.4398998 Conference: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 29 - November 2, 2007, Sheraton Hotel and Marina, San Diego, California, USA
Source: DBLP


Recently, Linear Temporal Logic (LTL) has been successfully applied to high-level task and motion planning problems for mobile robots. One of the main attributes of LTL is its close relationship with fragments of natural language. In this paper, we take the first steps toward building a natural language interface for LTL planning methods with mobile robots as the application domain. For this purpose, we built a structured English language which maps directly to a fragment of LTL.
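As an illustration of the idea described in the abstract, a structured-English fragment can be mapped template-by-template onto LTL formulas. The grammar rules and function names below are hypothetical, for illustration only; the paper's actual language is a richer, carefully designed fragment.

```python
# Minimal sketch of a structured-English-to-LTL translator.
# The grammar here is invented for illustration; it is NOT the paper's grammar.
# LTL operators: F = "eventually", G = "always", ! = negation, -> = implication.
import re

# Each rule pairs a sentence pattern with an LTL formula template.
RULES = [
    (re.compile(r"^go to (\w+)$"),                  "F({0})"),           # reachability
    (re.compile(r"^always avoid (\w+)$"),           "G(!{0})"),          # safety
    (re.compile(r"^visit (\w+) infinitely often$"), "G(F({0}))"),        # liveness
    (re.compile(r"^if (\w+) then go to (\w+)$"),    "G({0} -> F({1}))"), # reaction
]

def to_ltl(sentence: str) -> str:
    """Translate one structured-English sentence into an LTL formula string."""
    s = sentence.strip().lower().rstrip(".")
    for pattern, template in RULES:
        m = pattern.match(s)
        if m:
            return template.format(*m.groups())
    raise ValueError(f"sentence not in the structured fragment: {sentence!r}")

# A specification is the conjunction of the translated sentences.
spec = [to_ltl(s) for s in
        ["Go to r1.", "Always avoid r2.", "If alarm then go to exit."]]
print(" & ".join(spec))  # F(r1) & G(!r2) & G(alarm -> F(exit))
```

Because each sentence template maps to exactly one formula template, the translation is direct and unambiguous, which is the property that makes such structured fragments attractive as interfaces to LTL planners.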



  • Source
    • "While this assumption may be sufficient for certain tasks and in certain domains (e.g., kitchen tasks tend to be performed only in the kitchen), it limits the usefulness of the learned sequence (e.g., if the robot must be explicitly taught a sweeping task in each room of the house). Similarly, [9] [8] does not allow postconditions to be specified, though preconditions are verbally stated. The input language is limited to a structured fragment of English. "
    ABSTRACT: Robots are currently being used in and developed for critical HRI applications such as search and rescue. In these scenarios, humans operating under changeable and high-stress conditions must communicate effectively with autonomous agents, necessitating that such agents be able to respond quickly and effectively to rapidly changing conditions and expectations. We demonstrate a robot planner that is able to utilize new information, specifically information originating in spoken input produced by human operators. We show that the robot is able to learn the pre- and postconditions of previously unknown action sequences from natural language constructions, and immediately update (1) its knowledge of the current state of the environment, and (2) its underlying world model, in order to produce new and updated plans that are consistent with this new information. While we demonstrate in detail the robot's successful operation with a specific example, we also discuss the dialogue module's inherent scalability, and investigate how well the robot is able to respond to natural language commands from untrained users.
    03/2012; DOI:10.1145/2157689.2157840
  • Source
    • "Several projects have attempted to address this need for online learning of meanings in the context of human-robot interaction. [6], for example, demonstrates an instruction system for motion learning in robots using a structured control language. However, this work relies on a very limited, hand-crafted fragment of English (containing approximately 10 grammar rules, compared to the hundreds of rules typically necessary to even approximate a natural human-like grammar). "
    ABSTRACT: Natural language interactions between humans and robots are currently limited by many factors, most notably by the robot's concept representations and action repertoires. We propose a novel algorithm for learning meanings of action verbs through dialogue-based natural language descriptions. This functionality is deeply integrated in the robot's natural language subsystem and allows it to perform the actions associated with the learned verb meanings right away without any additional help or learning trials. We demonstrate the effectiveness of the algorithm in a scenario where a human explains to a robot the meaning of an action verb unknown to the robot and the robot is subsequently able to carry out the instructions involving this verb.
    RO-MAN, 2011 IEEE; 09/2011
  • Source
    • "These controllers are encapsulated in the so-called "primitive tasks modules" and are treated as input-output modules that, with appropriate chaining using automated module composition, provide the necessary controllers that are based on multi-robot navigation functions to carry out a complex motion task. The authors of [4] use Temporal Logic formulas to construct high-level motion tasks, while [6] presents a method to convert English language sentences, through Linear Temporal Logic specifications, into high-level motion-planning objectives. In [14], the UppAal model checker [3] was successfully employed to model and verify the operation of a group of holonomic agents under a simple control law. "
    ABSTRACT: Motivated primarily by the problem of UAV coordination, in this paper we address the problem of coordination of a non-homogeneous group of non-holonomic agents with input constraints. In the first part of the paper, we develop a modeling framework for heterogeneous multi-agent systems that is based on timed automata. To this end, an appropriate abstraction of the agents' workspace from our previous works is extended to three-dimensional space, by utilizing hexagonal prisms. The low-level agent details are abstracted by virtue of appropriate controllers to motion primitives that can be performed in the individual workspace cells. The resulting models of the non-homogeneous system capture the non-holonomic behavior and the input constraints imposed by the considered systems. In the second part of this paper, we use the developed models in conjunction with formal verification tools to verify the safety and liveness properties of the system, captured by Linear Temporal Logic (LTL) specifications. Using counterexample-guided search, we obtain trajectories that satisfy spatio-temporal specifications. Finally, we simulate two case studies for two- and three-dimensional workspaces respectively.
    Decision and Control (CDC), 2010 49th IEEE Conference on; 01/2011