Article

Sensorimotor Primitives for Programming Robotic Assembly Skills


Abstract

This thesis addresses the problem of sensor-based skill composition for robotic assembly tasks. Skills are robust, reactive strategies for executing recurring tasks in our domain. In everyday life, people rely extensively on skills such as walking, climbing stairs, and driving cars; proficiency in these skills enables people to develop and robustly execute high-level plans. Unlike people, robots are unskilled -- unable to perform any task without extensive and detailed instructions from a higher-level agent. Building sensor-based, reactive skills is an important step toward realizing robots as flexible, rapidly deployable machines. Efficiently building skills requires simultaneously reducing robot programming complexity and increasing sensor integration, which are competing and contradictory goals. This thesis attacks the problem through development of sensorimotor primitives to generalize sensor integration, graphical programming environments to facilitate skill composition, and desig...
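As a concrete illustration of the composition idea in the abstract, the sketch below builds a toy "skill" by sequencing two sensor-driven primitives, each of which runs until it reports a terminating event. All names (`move_to_contact`, `run_skill`), the simulated force sensor, and the thresholds are invented for illustration; they are not the thesis's actual interfaces.

```python
# Toy sensorimotor primitives: each one reads (simulated) sensor state,
# issues a small motion, and reports an event string.

def move_to_contact(state):
    # Guarded move: descend until the simulated force sensor reports contact.
    state["z"] -= 1.0
    return "contact" if state["force"](state["z"]) > 0.5 else "moving"

def insert(state):
    # Continue descending in smaller steps until the goal depth is reached.
    state["z"] -= 0.5
    return "done" if state["z"] <= state["goal_z"] else "inserting"

def run_skill(primitives, state):
    """Run each primitive until it emits its terminating event; a skill is
    just this ordered composition of sensor-driven primitives."""
    events = []
    for prim, stop_event in primitives:
        while True:
            event = prim(state)
            if event == stop_event:
                events.append(event)
                break
    return events

# Rigid surface at z = 0; force appears once the tool is at or below it.
state = {"z": 5.0, "goal_z": -2.0, "force": lambda z: 1.0 if z <= 0 else 0.0}
events = run_skill([(move_to_contact, "contact"), (insert, "done")], state)
print(events)  # ['contact', 'done']
```

The point of the sketch is the separation of concerns: the primitives encapsulate sensing and action, while the skill layer only sequences events.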


... Similar to the skill concept there is shared autonomy [51], [53], where intelligence is distributed between man and machine, integrating symbolic task description and model-based control. In assembly problems, the shared autonomy concept is usually related to sensorimotor primitives [88], [90], [91] as parameterized, domain-general task-level commands which can be applied in different skills. Sensorimotor primitives can also be applied to visually driven grasping [109]. ...
... Since every new path requires a new skill, the overall size of the models becomes enormous. A solution often used in intelligent robot control is to break the path down into a set of parameterized motion primitives [13], [37], [40], [42], [47], [54], [60], [88], [104], [105], [107], [124]. The description of the path becomes a function of certain parameters, spatial or temporal, that determine how the primitive is performed. ...
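The idea of a path as a function of a few parameters can be sketched in a few lines. The ellipse primitive below is illustrative (it matches the example used in the article that follows); the parameter names and waypoint count are assumptions.

```python
import math

def ellipse_primitive(cx, cy, a, b, n=8):
    """Generate n waypoints on an ellipse; the whole path is determined by
    the spatial parameters (cx, cy, a, b) rather than stored point-by-point."""
    return [(cx + a * math.cos(2 * math.pi * k / n),
             cy + b * math.sin(2 * math.pi * k / n)) for k in range(n)]

# The same primitive, re-parameterized, yields a different instance of the path.
small = ellipse_primitive(0.0, 0.0, 1.0, 0.5)
large = ellipse_primitive(2.0, 1.0, 3.0, 1.5)
print(len(small), small[0])  # 8 (1.0, 0.0)
```

Storing the generator plus parameters replaces storing every distinct path, which is exactly the size argument made above.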
Article
Full-text available
The development of skilled robotics draws clues from model-based theories of human motor control; thus, a comprehensive anthropomorphic background is given. Skills in robotics are viewed as a tool for fast and efficient real-time control that can handle the complexity and nonlinearity of robots, generally aiming at robot autonomy. In particular, the skill of redundancy resolution is addressed through a skill representation problem based on a function approximator. The task of the robot is approximated by a set of parameterized motion primitives. The adopted parameters are also the parameters of the function approximator, i.e., of the skill used. Redundancy is resolved during skill learning based on available expert knowledge, yielding parameterized joint motions. The approximation procedure (Successive Approximations), a major contribution of the paper, is used for batch compilation of parameterized examples, resulting in a parameterized skill model. Such a skill enables a user inexpert in redundancy resolution to gain the benefits of redundant robots. All properties of the Successive Approximations procedure, such as accuracy in interpolation and extrapolation, acceleration in redundancy resolution, and upgrading to a new skill as the task varies, are discussed in the example of a five degrees-of-freedom planar redundant robot performing a parameterized ellipse as a motion primitive.
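The compile-then-interpolate idea above can be illustrated with a deliberately tiny stand-in: expert joint solutions recorded at a few parameter values, and new joint motions produced by interpolation. Linear interpolation here merely stands in for the paper's Successive Approximations procedure, and all numbers are invented.

```python
# Expert examples: task parameter -> joint angles resolved by an expert.
examples = {0.5: [10.0, 20.0], 1.0: [15.0, 30.0], 2.0: [25.0, 50.0]}

def skill(param):
    """Interpolate joint angles for a new parameter value from the compiled
    examples (a crude stand-in for a parameterized skill model)."""
    keys = sorted(examples)
    if param <= keys[0]:
        return examples[keys[0]]
    for lo, hi in zip(keys, keys[1:]):
        if lo <= param <= hi:
            t = (param - lo) / (hi - lo)
            return [(1 - t) * a + t * b
                    for a, b in zip(examples[lo], examples[hi])]
    return examples[keys[-1]]

print(skill(0.75))  # [12.5, 25.0], halfway between the 0.5 and 1.0 examples
```

The user never solves the redundancy problem directly; the expert knowledge is baked into the compiled examples.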
... Examples of this approach are often found in the area of dextrous manipulation, such as the work of Michelman and Allen [44], and Nagatani and Yuta [52]. Skills or primitives are also useful for encapsulating expert knowledge and for the software engineering purpose of making this expertise simple to interface into working systems by task-level programmers, as demonstrated by Morrow et al. [46,47,48] and Archibald and Petriu [2]. In the robotics literature as a whole, they are used as symbolic units of description for computer representations of task-level plans, and for mapping to human-language descriptions (e.g., "grasping," "placing," "move-to-contact") for human understanding, communication, and reasoning. ...
... Several strategies have previously been used for managing such dynamically reconfigurable subsystems, including on-line state machines, and separate high-level programs running on host workstations. In several Chimera-based robot architectures [18,46], the high-level process reconfigures the real-time subsystem based on an on-line state machine interpreter responding to messages sent from modules in the reconfigurable subsystem. ControlShell [65] for the VxWorks operating system also uses a state machine for managing dynamically reconfigurable real-time subsystems. ...
Article
A great deal of current robotics research studies the modeling of human reaction skills: learning control mappings to represent a person's task-performance strategies. The important related problem of modeling human action skills has received less attention. Action learning is the characterization of the state space or action space explored during typical human performances of a given task. Action models learned from human performances typically represent some form of prototypical performance, and also characterize how the human's performances vary stochastically or due to external influences. They are used for gesture recognition, for realistic computer animations of human motion, for study of an expert performer's motion (e.g., Tiger Woods' golf swing), for generating feed-forward or reference robot control signals, and for evaluating the naturalness of the performances generated in real and simulated systems by reaction-skill models. This thesis formulates the process of building ...
... The program that results is displayed in Figure 6 as a finite state machine (FSM) using Morrow's Skill Programming Interface (SPI) (Morrow, 1997). This is an important step in the development cycle because it provides a visual check of the system's interpretation of the demonstration. ...
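The FSM representation mentioned above is essentially a transition table from (state, event) pairs to next states. A minimal sketch, with made-up state and event names for an insertion skill:

```python
# Transition table: (current state, sensed event) -> next state.
fsm = {
    ("approach", "contact"): "align",
    ("align", "aligned"): "insert",
    ("insert", "seated"): "done",
}

def step(state, event):
    """Advance the FSM; unrecognized events leave the state unchanged."""
    return fsm.get((state, event), state)

state = "approach"
for event in ["slip", "contact", "aligned", "seated"]:
    state = step(state, event)
print(state)  # 'done'
```

Drawing this table as a graph is what makes the visual check of a demonstration possible: the operator can see at a glance which events the system believes drive the task forward.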
Article
Full-text available
This paper explores Gesture-Based Programming as a paradigm for programming robotic agents. Gesture-Based Programming is a form of programming by human demonstration that focuses the development of robotic systems on task experts rather than programming experts. The technique relies on the existence of previously acquired robotic skills (called "sensorimotor primitives") which are intended to be the robotic equivalent of that which humans acquire through everyday physical interactions. The interpretation of the human's demonstration and subsequent matching to robotic primitives is a qualitative problem that we approach with a community of skilled agents. A simple manipulative task is programmed to demonstrate the system.
... Generally, in order to analyse human behaviour, an approach is to define and segment the behaviours into motion primitives [Morrow, 1997], and describe or generate new behaviours using such primitives. These primitives can be defined in various patterns, according to different methods and theories. ...
Article
Full-text available
The development and recent advancements of integrated inertial sensors have afforded substantive new possibilities for the acquisition and study of complex human motor skills and ultimately their imitation within robotic systems. This paper describes continuing work on kinetic models that are derived through unsupervised learning from a continuous stream of signals, including Euler angles and accelerations in three spatial dimensions, acquired from motions of a human arm. An intrinsic classification algorithm, MML (Minimum Message Length encoding), is used to segment the complex data, formulating a Gaussian Mixture Model of the dynamic modes it represents. Subsequent representation and analysis as FSMs (finite state machines) has found distinguishing and consistent sequences of modes that persist across both a variety of tasks and multiple candidates. An exemplary "standard" sequence for each behaviour can be abstracted from a corpus of suitable data and in turn utilised, together with alignment techniques, to identify behaviours of new sequences, as well as to detail the homologous extent between them. The progress in contrast to previous work and future objectives are discussed.
... These basic subtasks have been named Exploratory Procedures (EPs) when studied in the context of haptic exploration [27], [28]. Subsequently, EPs for various robotic operations have been developed [29], [30], [31], [32]. We study various ADL activities to analyze what subtasks are involved in a given ADL task, and then combine functionally similar actions under the next higher-level task description; this process continues until an acceptable linguistic description of the task is reached that is easy for the user to understand and use. ...
Chapter
This chapter presents an innovative semi-autonomous human-robot interaction concept for people with disabilities and discusses a proof-of-concept prototype system. The communication between the user and the robot is performed by electromyographic (EMG) signals. However, unlike most EMG-controlled robotic operations, in this framework the user can issue high-level commands to the robot through a novel EMG-based approach. The robot controller is designed such that it is capable of decomposing these high-level commands into primitive subtasks using a task grammar. It then develops a dynamic plan that links these primitives with knowledge about the world view to accomplish the high-level task commands. In this manner, a user can achieve semi-autonomous human-robot interaction. This proposed concept eliminates the need for continuous control of the robot and, as a result, makes the system easier to use, less tiring, and less error-prone. The system provides a platform for people with disabilities to supervise a robot through high-level qualitative commands, rather than through low-level robotic teleoperation directives. Such a system would permit a person with a disability and a robot to communicate task-relevant information in a convenient, robust, and reliable manner.
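A task grammar of the kind described above can be sketched as a set of rewrite productions expanded recursively until only primitives remain. The grammar, command names, and primitives below are invented for illustration, not the chapter's actual rule base.

```python
# Productions: a task rewrites to an ordered list of subtasks.
# Anything with no production is treated as a primitive.
grammar = {
    "fetch-cup": ["locate", "grasp", "transport", "release"],
    "grasp": ["pre-shape", "close-fingers"],
}

def expand(task):
    """Recursively expand a high-level command into its primitive subtasks."""
    if task not in grammar:
        return [task]
    return [p for sub in grammar[task] for p in expand(sub)]

print(expand("fetch-cup"))
# ['locate', 'pre-shape', 'close-fingers', 'transport', 'release']
```

This is the step that lets a single coarse EMG command stand in for a long sequence of low-level teleoperation directives.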
... The study of human motion modeling has become of particular interest in robotics and other related fields. Generally, in order to analyse human behaviour, one approach is to define and segment behaviours into motion primitives [13], and to describe or generate new behaviours using such primitives. These can be defined in various patterns, according to different methods and theories. ...
Conference Paper
Full-text available
A study on modeling human psychomotor behaviour based on tracked motion data is reported. The motion data is acquired through various integrated inertial sensors and represented as Euler angles and accelerations. The minimum message length (MML) algorithm is used to identify frames of intrinsic segmentations and to acquire a classification basis for unsupervised machine learning. The classification model can ultimately be deployed in recognizing certain skilled behaviors. The prior results are analyzed as FSMs (finite state machines) to extract the potential rules underlying behaviors. The progress made so far and plans for further work are reported.
... Several strategies have previously been used for managing such dynamically reconfigurable subsystems, including on-line state machines and separate high-level programs running on host workstations. In several Chimera-based robot architectures [2,3], the high-level process reconfigures the real-time subsystem based on an on-line state machine interpreter responding to messages sent from modules in the reconfigurable subsystem. ControlShell [4] for the VxWorks operating system also uses a state machine for managing dynamically reconfigurable real-time subsystems. ...
Article
In this paper, we present a method for high-level control of robots whose low-level software is based on dynamically reconfigurable, reusable real-time software modules. Our approach is to use an embedded interpreter for a general-purpose programming language to direct the operation of the low-level modules toward meeting the task-level goals of the robot. To this end, we present RSK, a virtual-machine kernel implementing a Scheme interpreter capable of hard real-time operation, and employing a method of code execution we call “message-based evaluation” (MBE) . MBE is a novel combination of a traditional code execution model and a message-passing architecture, which simplifies the process of writing code for managing the robot's reconfigurable subsystem.
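The message-based evaluation idea can be caricatured in a few lines: low-level modules post messages to a queue, and an embedded interpreter dispatches each message to registered handler code that reconfigures the subsystem. This toy uses Python handler functions where RSK interprets Scheme under hard real-time constraints; all message and handler names are invented.

```python
import queue

handlers = {}

def on(message):
    """Register handler code to be evaluated when `message` arrives."""
    def register(fn):
        handlers[message] = fn
        return fn
    return register

@on("contact")
def handle_contact(payload):
    # On contact, switch the real-time subsystem to force control.
    return "reconfigure: force_control gain={}".format(payload["gain"])

@on("lost_track")
def handle_lost(payload):
    # On losing visual track, start a search pattern.
    return "reconfigure: search_pattern"

# Modules in the low-level subsystem post messages; the interpreter drains them.
inbox = queue.Queue()
inbox.put(("contact", {"gain": 0.2}))
inbox.put(("lost_track", {}))

log = []
while not inbox.empty():
    msg, payload = inbox.get()
    log.append(handlers[msg](payload))
print(log)
```

The separation mirrors the paper's claim: the reconfiguration logic lives in interpreted code, not inside the real-time modules themselves.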
... Several strategies have previously been used for managing such dynamically reconfigurable subsystems, including on-line state machines and separate high-level programs running on host workstations. In several Chimera-based robot architectures [2,3], the high-level process reconfigures the real-time subsystem based on an on-line state machine interpreter responding to messages sent from modules in the reconfigurable subsystem. ControlShell [4] for the VxWorks operating system also uses a state machine for managing dynamically reconfigurable real-time subsystems. ...
Conference Paper
In this paper, we present a method for high-level control of robots whose low-level software is based on dynamically reconfigurable, reusable real-time software modules. Our approach is to use an embedded interpreter for a general-purpose programming language to direct the operation of the low-level modules toward meeting the task-level goals of the robot. To this end, we present RSK, a virtual-machine kernel implementing a Scheme interpreter capable of hard real-time operation, and employing a method of code execution we call "message-based evaluation" (MBE). MBE is a novel combination of a traditional code execution model and a message-passing architecture, which simplifies the process of writing code for managing the robot's reconfigurable subsystem.
Article
This paper introduces graph-rewriting methodology for robotic task planning. An approach to represent a high-level task plan in the form of a graph and modify it to accommodate various planning strategies is proposed. Explicitly delineated graph-rewrite rules and their ordered applications are used to reflect the change in plans by changing the graph topology. A framework for modeling complex manipulation tasks as interconnection of simpler subtasks and events connecting them is presented. Using a simple example, it is demonstrated how various plans can dynamically evolve as a function of events. The advantages of graph-rewriting methodology in task planning, importance of developing a reusable graph-rewrite rule library for manipulation tasks, and its connection to low-level control are also outlined in this introductory paper.
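A single graph-rewrite step of the kind described above can be sketched on an adjacency-list task graph: a rule replaces one abstract subtask node with a refined chain of subtasks while preserving the incoming and outgoing edges. Node names and the rule are invented for illustration.

```python
# Task graph as adjacency lists: node -> list of successor nodes.
graph = {"start": ["assemble"], "assemble": ["end"], "end": []}

def rewrite(graph, node, replacement):
    """Apply a rewrite rule: replace `node` with a chain of `replacement`
    nodes, reconnecting predecessors and successors."""
    preds = [n for n, succs in graph.items() if node in succs]
    succs = graph.pop(node)
    for p in preds:
        graph[p] = [replacement[0] if s == node else s for s in graph[p]]
    for a, b in zip(replacement, replacement[1:]):
        graph[a] = [b]
    graph[replacement[-1]] = succs
    return graph

# Rule: "assemble" refines into "pick" followed by "place".
rewrite(graph, "assemble", ["pick", "place"])
print(graph)  # {'start': ['pick'], 'end': [], 'pick': ['place'], 'place': ['end']}
```

Applying different rules, in different orders, is what lets the plan's topology evolve dynamically as events occur.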
Article
Acquisition of the behavioural skills of a human operator and recreating them in an intelligent autonomous system has been a critical but rather challenging step in the development of complex intelligent autonomous systems. Development of a systematic and generic method for realising this process by acquiring human postural and motor movements is explored. This is achieved by breaking down the human motion into a number of segments called motion or skill primitives. The proposed methodology is developed based on studying the movement of the human hand. The motion is measured by a dual-axis accelerometer and a gyroscope mounted on the hand. The gyroscope locates the position and configuration of the hand, whereas the accelerometer measures the kinematics parameters of the movement. The covariance and the mean of the data produced by the sensors are used as features in the clustering process. A fuzzy clustering method is developed and applied to identify different movements of the human hand. The proposed clustering approach identifies the sequence of the motion primitives embedded in the data produced from the human wrist movement. A review of the previous work in the area is carried out and the developed methodology is described. An overview of the experimental setup and procedures to validate the approach is given. The results of the validation are analysed critically and some conclusions are drawn.
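The feature pipeline described above (windowed mean and covariance of inertial data, then clustering into primitives) can be sketched on a 1-D signal. A crisp nearest-prototype assignment stands in for the article's fuzzy clustering, and the signal, window size, and prototypes are all invented.

```python
def features(signal, width):
    """Per-window mean and variance of a 1-D sensor stream."""
    out = []
    for i in range(0, len(signal) - width + 1, width):
        w = signal[i:i + width]
        mean = sum(w) / width
        var = sum((x - mean) ** 2 for x in w) / width
        out.append((mean, var))
    return out

# Illustrative primitive prototypes in (mean, variance) feature space.
prototypes = {"rest": (0.0, 0.0), "move": (1.0, 0.3)}

def label(feat):
    """Assign a window to the nearest prototype (crisp stand-in for fuzzy)."""
    return min(prototypes,
               key=lambda k: sum((a - b) ** 2
                                 for a, b in zip(feat, prototypes[k])))

signal = [0.0, 0.1, -0.1, 0.0, 1.2, 0.8, 1.1, 0.9]
labels = [label(f) for f in features(signal, 4)]
print(labels)  # ['rest', 'move']
```

The sequence of labels is the recovered sequence of motion primitives embedded in the stream.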
Article
The trend towards smaller lot sizes and shorter product life cycles requires automation solutions with higher flexibility. Today's robotic systems are often uneconomical under frequently changing boundary conditions and varying tasks, due to the high engineering costs needed for a well-defined supply of parts and pallets. At the same time, even small inaccuracies due to shape deviations in parts or pallets often cause high downtime. This work contributes to the robustness of industrial assembly processes with high inaccuracy concurrent with narrow tolerances. To this end, contact-based manipulation strategies are defined which are model-free and object-independent and solve common industrial tasks such as palletizing, packaging, and machine feeding. While the strategies are robust to inaccuracy of up to 5 mm/5° due to localization uncertainty or object displacement, they handle the usual industrial assembly tolerances of far below 1 mm. The necessary flexibility and reusability for new tasks is guaranteed by hierarchical decomposition into atomic sub-strategies. In order to accelerate execution, the manipulation strategies are customized to each specific task by unsupervised, experience-based learning. The flexibility of the manipulation strategies and the progress in cycle time during execution are shown on common industrial tasks with varying objects, tolerances, and inaccuracies.
Conference Paper
Explores gesture-based programming as a paradigm for programming robotic agents. Gesture-based programming is a form of programming by human demonstration that focuses the development of robotic systems on task experts rather than programming experts. The technique relies on the existence of previously acquired robotic skills (which we call "sensorimotor primitives") which we hope to develop into the robotic equivalent of skills acquired by humans through everyday experiences. The interpretation of the human's demonstration and subsequent matching to robotic primitives is a qualitative problem that we approach with a community of skilled agents. A simple manipulative task and a variant of that task are programmed to demonstrate the system.
Conference Paper
Full-text available
A methodology is developed for describing hierarchical control of robot systems in a manner which is faithful to the underlying mechanics, structured enough to be used as an interpreted language, and sufficiently flexible to encompass a wide variety of systems. A consistent set of primitive operations which form the core of a robot system description and control language is presented. This language, motivated by the hierarchical organization of neuromuscular systems, is capable of describing a large class of robot systems under a variety of single-level and distributed control schemes.
Conference Paper
Full-text available
Real-time visual feedback is an important capability that many robotic systems must possess if these systems are to operate successfully in dynamically varying and imprecisely calibrated environments. An eye-in-hand system is a common technique for providing camera motion to increase the working region of a visual sensor. Although eye-in-hand robotic systems have been well-studied, several deficiencies in proposed systems make them inadequate for actual use. Typically, the systems fail if manipulators pass through singularities or joint limits. Objects being tracked can be lost if the objects become defocused, occluded, or if features on the objects lie outside the field of view of the camera. In this paper, a technique is introduced for integrating a visual tracking strategy with dynamically determined sensor placement criteria. This allows the system to automatically determine, in real-time, proper camera motion for tracking objects successfully while accounting for the undesirable, but often unavoidable, characteristics of camera-lens and manipulator systems. The sensor placement criteria considered include focus, field of view, spatial resolution, manipulator configuration, and a newly introduced measure called resolvability. Experimental results are presented.
Conference Paper
Full-text available
Visual tracking is prone to distractions, where features similar to the target features guide the track away from its intended object. Global shape models and dynamic models are necessary for completely distraction-free contour tracking, but there are cases when component feature trackers alone can be expected to avoid distraction. We define the tracking problem in general and devise a method for local, window-based feature trackers to track accurately in spite of background distractions. The algorithm is applied to a generic line tracker and a snake-like contour tracker, which are then analyzed with respect to previous contour trackers. We discuss the advantages and disadvantages of our approach and suggest that existing model-based trackers can be improved by incorporating similar techniques at the local level.
Conference Paper
Full-text available
Many researchers are interested in developing robust, skill-achieving robot programs. The authors propose the development of a sensorimotor primitive layer which bridges the gap between the robot/sensor system and a class of tasks by providing useful encapsulations of sensing and action. Skills can then be constructed from this library of sensor-driven primitives. This reflects a move away from the separation of sensing and action in robot programming of task strategies towards the integration of sensing and action in a domain-general way for broad classes of tasks. For the domain of rigid-body assembly, the authors exploit the motion constraints which define assembly to develop force sensor-driven primitives. The authors report on the experimental results of a D-connector insertion skill implemented using several force-driven primitives.
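A representative force-driven primitive of the kind described above is a guarded move: translate along a direction until the sensed force crosses a threshold. The simulated stiff environment, step size, and threshold below are illustrative assumptions, not the authors' actual parameters.

```python
def guarded_move(position, direction, env_force, threshold=2.0,
                 step=0.1, max_steps=200):
    """Advance along `direction` until sensed force exceeds `threshold`;
    return the final position and the terminating event."""
    for _ in range(max_steps):
        if env_force(position) >= threshold:
            return position, "contact"
        position = [p + step * d for p, d in zip(position, direction)]
    return position, "timeout"

# A rigid surface at z = 0 produces a stiff force once penetrated.
surface = lambda p: max(0.0, -p[2]) * 100.0

pos, event = guarded_move([0.0, 0.0, 1.0], [0.0, 0.0, -1.0], surface)
print(event)  # contact, just below the surface
```

Because the primitive terminates on a sensed event rather than a fixed position, it tolerates uncertainty in where the surface actually is, which is the point of encapsulating sensing with action.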
Conference Paper
Full-text available
Force controlled manipulation is a common technique for compliantly contacting and manipulating uncertain environments. Visual servoing is effective for reducing alignment uncertainties between objects using imprecisely calibrated camera-lens-manipulator systems. These two types of manipulator feedback, force and vision, represent complementary sensing modalities; visual feedback provides information over a relatively large area of the workspace without requiring contact with the environment, and force feedback provides highly localized and precise information upon contact. This paper presents three different strategies which combine force and vision within the feedback loop of a manipulator: traded control, hybrid control, and shared control. A discussion of the types of tasks that benefit from the strategies is included, as well as experimental results which show that the use of visual servoing to stably guide a manipulator simplifies the force control problem by allowing the effective use of low-gain force control with relatively large stability margins.
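The three combination strategies named above differ in how each commanded axis is chosen, which can be reduced to a few lines. The commands, gains, and the per-axis selection vector below are illustrative assumptions.

```python
def traded(vision_cmd, force_cmd, in_contact):
    """Traded control: vision guides the approach; force takes over on contact."""
    return force_cmd if in_contact else vision_cmd

def hybrid(vision_cmd, force_cmd, selection):
    """Hybrid control: per-axis split; selection[i] == 1 means the axis is
    force-controlled, 0 means vision-controlled."""
    return [f if s else v for v, f, s in zip(vision_cmd, force_cmd, selection)]

def shared(vision_cmd, force_cmd, w=0.5):
    """Shared control: both modalities contribute to every axis, weighted."""
    return [w * v + (1 - w) * f for v, f in zip(vision_cmd, force_cmd)]

v, f = [1.0, 0.5, 0.0], [0.0, 0.0, -0.2]
print(hybrid(v, f, [0, 0, 1]))  # [1.0, 0.5, -0.2]
```

In hybrid control, for instance, lateral alignment can stay under visual servoing while the insertion axis regulates contact force.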
Conference Paper
Full-text available
We present the design of a modular tactile sensor and actuator system for observing human demonstrations of contact tasks. The system consists of three interchangeable parts: an intrinsic tactile sensor for measuring net force/torque, an extrinsic tactile sensor for measuring contact distributions, and a tactile actuator for displaying tactile distributions. The novel components are the extrinsic sensor and tactile actuator, which are "inside-out symmetric" to each other and employ an electrorheological gel for actuation.
Article
The paper discusses and assesses principal approaches to acquiring and programming fully formalizable (algorithmizable), non-formalizable, and adaptable tasks and subtasks for sensor-guided robots (robot skills), in dependence on the human operator's skills and capabilities. The main idea of the third approach consists of the direct transformation of the human operator's manual skills into an adapted algorithmic solution of the task by direct technological teach-in (teaching the task by doing), as a combination of the first two approaches. Naturally, the third approach requires a well-balanced relation between the human operator's action and control strategies and the class of algorithmic structures to be adapted. First results of tests concerning this relation and the transformation of a human's actions into an adapted algorithmic solution are discussed on the basis of a very simple problem of following (predicting) a given contour, as a first step in examining this question.
Article
Assembly is one of the most complicated fields in a manufacturing process. Commonly, assembly cells are equipped with robots which mate the parts to be assembled by means of a highly precise robot arm together with passive compliance elements only. Advanced assembly robots make use of external sensors to cope with complicated situations and tight tolerances in the assembly process. Complex assembly sequences can be divided into basic assembly operations (skills) such as "inserting" or "screwing"; each skill is structured as a sequence of rules and can be activated by a special robot instruction. The complexity of interaction processes and their nonlinear character led us to a fuzzy reasoning method using fuzzy production rules. In the paper, the structure of a skill subsystem using fuzzy rules is discussed. Furthermore, a simple method of programming fuzzy production rules using "functions" is presented. Finally, the method was tested on the example of "peg-in-hole insertion". By means of this skill operation, a conventional insertion strategy is compared with a method based on fuzzy rules.
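A minimal fuzzy-rule sketch in the spirit of the skill subsystem above: two rules map a sensed lateral force during insertion to a corrective motion, via triangular membership functions and a weighted average of rule outputs. The membership shapes and the two-rule base are invented for illustration, not the paper's actual rules.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def lateral_correction(f_side):
    """IF side force is positive THEN move negative, and vice versa;
    defuzzify by a weighted average of the rule output singletons."""
    pos = tri(f_side, 0.0, 2.0, 4.0)   # membership in "positive force"
    neg = tri(f_side, -4.0, -2.0, 0.0)  # membership in "negative force"
    if pos + neg == 0.0:
        return 0.0
    return (pos * -1.0 + neg * 1.0) / (pos + neg)

print(lateral_correction(1.0))   # pushes away from the sensed side force
print(lateral_correction(0.0))   # centered: no correction
```

Graded memberships are what let the correction vary smoothly with force instead of switching at hard thresholds, which is the appeal of fuzzy rules for the nonlinear contact interactions described above.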
Article
Definition of robotic primitives for insertions is an essential step in achieving a truly flexible manufacturing environment. We present techniques based on active compliance, implemented with hybrid force-position control, that are capable of inserting a wide variety of shaped pegs. We analyze the forces encountered during the insertions to determine what classes of shapes these techniques will consistently insert. The analysis also gives guidance in selecting parameters of these techniques for specific shapes. These techniques provide a significant step toward simplifying the programming of a flexible manufacturing environment.
Book
1 Introduction and Overview.- 2 Configuration Space of a Rigid Object.- 3 Obstacles in Configuration Space.- 4 Roadmap Methods.- 5 Exact Cell Decomposition.- 6 Approximate Cell Decomposition.- 7 Potential Field Methods.- 8 Multiple Moving Objects.- 9 Kinematic Constraints.- 10 Dealing with Uncertainty.- 11 Movable Objects.- Prospects.- Appendix A Basic Mathematics.- Appendix B Computational Complexity.- Appendix C Graph Searching.- Appendix D Sweep-Line Algorithm.- References.
Article
A new method based on task process models for acquiring manipulative skills from human experts is presented. In performing manipulative tasks such as deburring, a human expert moves a tool at an optimal feedrate and cutting force as well as with an appropriate compliance for holding the tool. An experienced worker can select the correct strategy for performing a task and change it dynamically in accordance with the task process state. In this paper, the human expertise for selecting a task strategy that accords with the process characteristics is modeled as an associative mapping, and represented and generated by using a neural network. First, the control strategy for manipulating a tool is described in terms of feedforward inputs and tool holding dynamics. The parameters and variables representing the control strategy are then identified by using teaching data taken from demonstrations by an expert. The task process is also modeled and characterized by a set of parameters, which are identified by using this same teaching data. Combining the two sets of identified parameters, we can derive an associative mapping from the task process characteristics to the task strategy parameters. The consistency of the mapping and the transferability of human skills are analyzed by using Lipschitz's condition. The method is applied to deburring, and implemented on a direct-drive robot. It is shown that the robot is able to associate a correct control strategy with process characteristics in a manner similar to that of the human expert.
Article
The problem of part misalignment in assembly is described and the use of passive and active techniques reviewed. In particular, the techniques for integrating a force sensor into an assembly system are discussed. Research at UWCC on force control strategies for adaptable assembly is described. The methodology for developing strategies is outlined and illustrated by considering a typical assembly task.
Article
Experience has proved that exploiting robots for assembly tasks is much more difficult than manufacturing engineers had expected and many attempts at implementing robotic assembly have failed. Our research has led us to believe that a formal approach to specifying the steps required for assembly would be of great benefit in developing the required software for a specific task, and in adaptively controlling and monitoring the execution of robotic assembly steps. The US National Bureau of Standards has developed a formal system, called ABC (for Assembly By Constraints) for specifying the steps required for assembly. The system is based on the reduction in the degrees of freedom of objects as they are assembled. Using this basic concept, we have developed 14 primitive operations which can be used to completely specify assembly steps for a large class of problems. This paper initially outlines the historical development of the system, then describes two pieces of software developed to allow easy definition of assembly tasks using the ABC system, and finally presents two practical examples.
Article
This paper presents a model-based approach to the recognition of discrete state transitions for robotic assembly. Sensor signals, in particular, force and moment, are interpreted with reference to the physical model of an assembly process in order to recognize the state of assembly in real time. Assembly is a dynamic as well as a geometric process. Here, the model-based approach is applied to the unique problems of the dynamics generated by geometric interactions in an assembly process. First, a new method for the modeling of the assembly process is presented. In contrast to the traditional quasi-static treatment of assembly, the new method incorporates the dynamic nature of the process to highlight the discrete changes of state, e.g., gain and loss of contact. Second, a qualitative recognition method is developed to understand a time series of force signals. The qualitative technique allows for quick identification of the change of state because dynamic modelling provides much richer and more copious information than the traditional quasi-static modeling. A network representation is used to compactly present the modelling state transition information. Lastly, experimental results are given to demonstrate the recognition method. Successful transition recognition was accomplished in a very short period of time: 7-10 ms.
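The qualitative flavor of the recognition method can be illustrated very simply: a discrete change of contact state shows up as a large jump in the force signal between consecutive samples. The threshold and the synthetic trace below are invented; the paper's actual method interprets signals against a dynamic model rather than a bare threshold.

```python
def transitions(forces, jump=0.5):
    """Return sample indices where the force changes by more than `jump`,
    flagging candidate discrete state transitions (gain/loss of contact)."""
    return [k for k in range(1, len(forces))
            if abs(forces[k] - forces[k - 1]) > jump]

# Synthetic trace: free motion, gain of contact, sliding, loss of contact.
trace = [0.0, 0.0, 0.1, 1.5, 1.6, 1.5, 0.1, 0.0]
print(transitions(trace))  # [3, 6]
```

Because the dynamic signature of a transition is so sharp, even this crude detector localizes it to a single sample, consistent with the millisecond-scale recognition times reported above.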
Article
Geometric and force equilibrium conditions for successfully mating rigid parts are presented. The action of the Remote Center Compliance (RCC) is explained. Guidelines for choosing RCC parameters are presented. Models of insertion force behavior are verified by experimental data.
Conference Paper
Considerable research has been performed on Robotic Visual Servoing (RVS) over the past decade. Using real-time visual feedback, researchers have demonstrated that robotic systems can pick up moving parts, insert bolts, apply sealant, and guide vehicles. With the rapid improvements being made in computing and image processing hardware, one would expect that every robot manufacturer would have an RVS option by the end of the 1990s. So why aren't the Fanucs, ABBs, Adepts, and Motomans of the world investing heavily in RVS? I would suggest four reasons: cost, complexity, reliability, and lack of demand. Solutions to the first three are approaching the point where RVS could be commercially available; however, the lack of demand is keeping RVS from becoming a reality in the near future. A new set of applications is needed to focus near-term RVS development. These must be applications which currently do not have solutions. Once developed and working in one application area, the technology is more likely to quickly spread to other areas. DOE has several applications that are looking for technological solutions, such as agile weapons production, weapons disassembly, decontamination and dismantlement of nuclear facilities, and hazardous waste remediation. This paper will examine a few of these areas and suggest directions for application-driven visual servoing research.
Article
Motion is an important and fundamental source of visual information. It is well known that the pattern of image motion contains information useful for the determination of the 3-dimensional structure of the environment and the relative motion between the camera and the objects in the scene. However, the accurate measurement of image motion from a sequence of real images has proven to be difficult. In this thesis, a hierarchical framework for the computation of dense displacement fields from pairs of images, and an integrated system consistent with that framework, are described. Each input intensity image is first decomposed using a set of spatial-frequency tuned channels. The information in the low-frequency channels is used to provide rough displacements over a large range, which are then successively refined by using the information in the higher-frequency channels. Within each channel, a direction-dependent confidence measure is computed for each displacement vector, and a smoothness constraint is used to propagate reliable displacement vectors to their neighboring areas with less reliable vectors. For our integrated system, Burt's Laplacian pyramid transform is used for the spatial-frequency decomposition, and the minimization of the sum of squared differences (SSD) measure is used as the match criterion. The confidence measure is derived from the shape of the SSD surface, and the smoot…
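The coarse-to-fine matching strategy can be sketched with a simple average-pooling pyramid and exhaustive SSD search over integer shifts; this crude pyramid stands in for the Laplacian-pyramid channels and confidence propagation described above.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 average pooling (a crude low-pass pyramid
    standing in for a Laplacian pyramid level)."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def ssd_shift(a, b, search):
    """Exhaustive SSD match over integer shifts in [-search, search]."""
    best, best_cost = (0, 0), np.inf
    for sy in range(-search, search + 1):
        for sx in range(-search, search + 1):
            cost = np.sum((a - np.roll(b, (-sy, -sx), axis=(0, 1))) ** 2)
            if cost < best_cost:
                best, best_cost = (sy, sx), cost
    return best

def coarse_to_fine(a, b, levels=3, search=3):
    """Estimate the integer displacement of b relative to a: match the
    coarsest, low-frequency level first, then refine at finer levels."""
    pyr = [(a, b)]
    for _ in range(levels - 1):
        pa, pb = pyr[-1]
        pyr.append((downsample(pa), downsample(pb)))
    dy, dx = 0, 0
    for pa, pb in reversed(pyr):             # coarsest level first
        dy, dx = 2 * dy, 2 * dx              # scale estimate to this level
        pb_aligned = np.roll(pb, (-dy, -dx), axis=(0, 1))
        rdy, rdx = ssd_shift(pa, pb_aligned, search)
        dy, dx = dy + rdy, dx + rdx
    return dy, dx

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, (6, 4), axis=(0, 1))          # ground-truth displacement (6, 4)
dy, dx = coarse_to_fine(a, b)
```

The displacement of 6 pixels exceeds the ±3 search window at the finest level, yet the pyramid recovers it because the coarse levels see it as a small shift.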
Article
Robot systems are becoming more and more complex, both in terms of available degrees of freedom and in terms of sensors. It is no longer possible to continue to regard robots as peripheral devices of a computer system, and to program them by adapting general-purpose programming languages. This dissertation analyzes the inherent computing characteristics of the robot programming domain, and formally constructs an appropriate model of computation. The programming of a dextrous robot hand is the example domain for the development of the model. This model, called RS, is a model of distributed computation: the basic mode of computation is the interaction of concurrent computing agents. A schema in RS describes a class of computing agents. Schemas are instantiated to produce computing agents, called SIs, which can communicate with each other via input and output ports. A network of SIs can be grouped atomically together in an assemblage, and appears externally identical to a single SI. The sensory and motor interface to RS is a set of primitive, predefined schemas. These can be grouped arbitrarily with built-in knowledge in assemblages to form task-specific object models. A special kind of assemblage called a task-unit is used to structure the way robot programs are built. The formal semantics of RS is automata theoretic; the semantics of an SI…
Article
An abstract is not available.
Article
There are two limitations in today's assembly robot systems. These systems are not able to deal with the uncertainties encountered in typical real-world environments and are thus limited to tasks that are subject only to simple uncertainties. They are not easily programmed and are thus limited to tasks that are to be repeated a large number of times before either being discontinued or modified. A new approach to programming sensor-based assembly tasks is proposed: programming in terms of task-achieving behavioral modules. This is expected to simplify programming and validation of robot programs; to reduce the computational requirements; to reduce overall system complexity; to facilitate the construction of an assembly planner; to provide a principled method of incorporating sensor use; to simplify the uncertainty problem; and to provide early industrially useful spin-off.
Article
This overview of C++ presents the key design, programming, and language-technical concepts using examples to give the reader a feel for the language. C++ is a general-purpose programming language with a bias towards systems programming that supports efficient low-level computation, data abstraction, object-oriented programming, and generic programming. 1
Article
The advantages and limitations of procedural and declarative approaches for product modeling are discussed. Concepts are developed for modeling all levels of product relations with a uniform set of structures and relationships. It is shown that five basic structures, Part-of, Structuring relation, Degrees of freedom, Motion limits, and Fit, can be used to define relationships between assemblies, parts, features, feature volume primitives, and evaluated boundaries. Generic relations which facilitate constraint specification between target and reference entities are also presented. Methods are required for deriving the location of an assembly unit from high-level constraint specifications, such as mating conditions, and for determining the degrees of freedom, motion limits, and assemblability. This can be done by uni-directional parameter derivation in the procedural approach, or by symbolic geometric reasoning or numerical equation solution in the declarative approach. The former is less expensive, easy to implement, and avoids conflicts, but leads to combinatorial explosion. The latter is general and flexible, and decouples constraint specification from validation, but is expensive and may require conflict resolution.
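The five relation structures can be sketched as plain records; the field names, the cylindrical-fit example, and its DOF bookkeeping are illustrative assumptions, not the paper's representation.

```python
from dataclasses import dataclass, field

@dataclass
class Fit:                       # the "Fit" relation between two entities
    target: str
    reference: str
    kind: str                    # e.g. "cylindrical", "planar"

@dataclass
class Part:
    name: str
    children: list = field(default_factory=list)          # "Part-of" hierarchy
    free_dof: set = field(default_factory=lambda: {"tx", "ty", "tz",
                                                   "rx", "ry", "rz"})
    motion_limits: dict = field(default_factory=dict)     # per-DOF ranges
    fits: list = field(default_factory=list)

# A peg constrained by a cylindrical fit against a hole in the assembly.
peg = Part("peg")
assembly = Part("assembly", children=[peg])
peg.fits.append(Fit("peg", "base_hole", "cylindrical"))
peg.free_dof -= {"tx", "ty", "rx", "ry"}   # cylindrical fit removes 4 DOF
peg.motion_limits["tz"] = (0.0, 25.0)      # remaining insertion travel, mm
```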
Conference Paper
A primary source of difficulty in automated assembly is the uncertainty in the relative position of the parts being assembled. This study focuses on a machine learning approach for solving the problem. Force sensor information, responses to recent moves, and results from previous assemblies are used to generate a set of production rules. These rules govern the motion of the robot during the assembly process.
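A hand-written rule table of the kind such a learner might output can be sketched as follows; the force thresholds and move names are invented for illustration, not learned rules from the paper.

```python
# Qualitative force signature -> corrective move, as simple production rules.

def classify(fx, fy, fz):
    """Map a force reading (N) to a qualitative contact condition."""
    if fz > -1.0:
        return "no_contact"          # not yet pressing on anything
    if abs(fx) > 2.0 or abs(fy) > 2.0:
        return "one_sided_contact"   # large lateral force: misaligned
    return "seated"

RULES = {
    "no_contact": "move_down",
    "one_sided_contact": "slide_toward_low_lateral_force",
    "seated": "stop",
}

def next_move(fx, fy, fz):
    return RULES[classify(fx, fy, fz)]
```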
Conference Paper
This paper provides an analytical overview of the dynamics involved in force control. Models are developed which demonstrate, for the one-axis explicit force control case, the effects on system closed-loop bandwidth of a) robot system dynamics that are not usually considered in the controller design; b) drive-train and task nonlinearities; and c) actuator and controller dynamics. The merits and limitations of conventional solutions are weighed, and some new solutions are proposed. Conclusions are drawn which give insights into the relative importance of the effects discussed.
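A minimal one-axis explicit force control loop, simulated against a stiff environment, illustrates the setting the paper analyzes: a proportional force loop plus velocity damping on a unit mass pressing on a spring-like surface. The gains, stiffness, and damping below are illustrative choices, not the paper's models.

```python
def simulate(f_des, k_env=1000.0, kf=0.01, b=6.0, m=1.0, dt=1e-3, steps=5000):
    """Euler simulation of one-axis explicit force control.

    f = k_env * x is the contact force against a stiff surface; the
    controller u = kf*(f_des - f) - b*v drives the force error to zero.
    """
    x, v = 0.0, 0.0
    for _ in range(steps):
        f = k_env * max(x, 0.0)          # contact force (surface cannot pull)
        u = kf * (f_des - f) - b * v     # force error plus velocity damping
        v += (u / m) * dt
        x += v * dt
    return k_env * max(x, 0.0)

f_final = simulate(f_des=10.0)
```

Note how the stiff environment (k_env = 1000 N/m) forces a tiny force gain (kf = 0.01) to keep the effective closed-loop stiffness kf*k_env modest; this is exactly the bandwidth limitation the paper models.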
Conference Paper
Off-line programming of robots will become increasingly more important. This paper describes a robot programming system which is based upon the use of geometrical constraints on the degrees of freedom of a component for specifying robotic assembly actions. Two pieces of software have been developed which allow easy definition of these constraints and the order of execution of the constraints.
Conference Paper
A robot system operating in an environment in which there is uncertainty and change needs to combine the ability to react with the ability to plan ahead. In a previous paper we proposed a solution to the problem of integrating planning and reaction: cast planning as adaptation of a reactive system, the planner-reactor approach. In this paper, we present our first experimental results from this approach. The results indicate that the planner-reactor approach is an attractive option for integrating planning and reaction in a robot system, allowing smooth, online updates to the behavior of a reactive system with little time and behavior penalty accruing from the use of a planner.
Conference Paper
This paper presents an analysis of force data generated by a human demonstration of a complex and asymmetrical insertion task. The human was blindfolded and had only force sensing (i.e., fingers) available. The process is modelled using hybrid dynamics, which highlights the changes in contact as the essential parts of the assembly process. Qualitative reasoning is then applied to the force signals to identify those portions that correspond to either a gain or a loss of contact. The emphasis of the paper is on understanding the force signals generated by the human subject. It is shown that qualitative reasoning is a simple, efficient and effective means of understanding the force signal by identifying the changes in contact state. Qualitative reasoning is also shown to be suitable for automated industrial assembly.
Conference Paper
This paper discusses a new method for teaching a deburring robot based on demonstration of human skilful motion. The robot is programmed to adjust the tool feedrate in accordance with the varying burr characteristics, such as burr size and material properties. This dynamic change of tool feedrate is motivated by the effective human skill in performing a deburring task. The relationship between the tool feedrate and burr characteristics is obtained from human demonstration data and stored in a computer as an associative memory. This associative memory enables the robot to select the tool feedrate that well matches the burr characteristics. Therefore, the robot motion is always effective in removing burrs and generating smooth finish of workpiece surface without severe tool wear. In order to identify burr characteristics, a laser displacement sensor has been used for direct burr height measurement, and a deburring process model has been applied for material property differentiation. The learned associative memory is stored and represented by a neural network, which can be easily incorporated into robot programming. Experimental results show that a robot can perform a deburring task in a manner similar to its human teacher
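The associative-memory recall can be sketched with a nearest-neighbour table standing in for the paper's neural network: given a measured burr characteristic, recall the feedrate the human demonstrated for the most similar burr. The burr-height/feedrate pairs below are invented demonstration data.

```python
# Demonstrated (burr_height_mm, feedrate_mm_per_s) pairs: bigger burrs
# get slower tool feedrates, as in the human demonstration.
DEMO = [(0.2, 12.0), (0.5, 8.0), (1.0, 4.0), (2.0, 1.5)]

def feedrate_for(burr_height):
    """Recall the demonstrated feedrate whose burr height is closest."""
    return min(DEMO, key=lambda p: abs(p[0] - burr_height))[1]
```

A trained neural network would interpolate smoothly between demonstrations rather than snapping to the nearest one, but the recall structure is the same.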
Conference Paper
A new approach to force guided assembly is developed for the assembly of unsurfaced parts having sharp burrs and irregular surfaces. Due to friction at burrs and irregular surfaces, force signals are very noisy and erratic, preventing reliable sensing and monitoring of the assembly process. In this paper, instead of simply measuring contact forces, we take positive actions by actively shaking the end effector and observing the reaction forces to the perturbation in order to obtain rich, reliable information. By taking the correlation between the input perturbation and the resultant reaction forces, we can determine the direction of the part surface and guide the hand-held part correctly despite burrs and poor surface finish. The principle of active force sensing using a correlation technique is described. An algorithm for guiding an assembly part by using the correlation information is developed based on the theory of direct adaptive control. The method is then applied to a practical task: the assembly of sheet metal parts with sharp edges and burrs. Experiments demonstrate the effectiveness and feasibility of the active force sensing method
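The correlation principle can be sketched directly: correlate the known injected perturbation with each axis of the noisy reaction force, and normalize the result to recover the contact direction. The simulated contact model below (reaction proportional to the perturbation projected onto an unknown normal, plus noise) is an assumption for illustration.

```python
import numpy as np

def normal_from_correlation(perturb, reactions):
    """Estimate the contact normal by correlating the injected perturbation
    signal with each measured reaction-force axis."""
    p = perturb - perturb.mean()
    corr = (reactions - reactions.mean(axis=0)).T @ p   # one value per axis
    return corr / np.linalg.norm(corr)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 500)
perturb = np.sin(2 * np.pi * 10 * t)                    # shaking signal
true_n = np.array([0.6, 0.8, 0.0])                      # unknown surface normal
# Noisy reaction forces: stiffness 5 N per unit perturbation, plus 1 N noise.
reactions = np.outer(perturb, true_n) * 5.0 + rng.normal(0.0, 1.0, (500, 3))
n_hat = normal_from_correlation(perturb, reactions)
```

The correlation averages out force noise that would swamp any single reading, which is exactly why the active approach tolerates burrs and poor surface finish.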
Conference Paper
This paper discusses the implementation of complex manipulation tasks with a dextrous hand. The approach used is to build a set of primitive manipulation functions and combine them to form complex tasks. Only fingertip, or precision, manipulations are considered. Each function performs a simple two-dimensional translation or rotation that can be generalized to work with objects of different sizes and using different grasping forces. Complex tasks are sequential combinations of the primitive functions. They are formed by analyzing the workspaces of the individual tasks and controlled by finite state machines. We present a number of examples, including a complex manipulation (removing the top of a child-proof medicine bottle) that incorporates different hybrid position/force specifications of the primitive functions of which it is composed. The work has been implemented with a robot hand system using a Utah-MIT hand.
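The sequencing idea, primitive functions driven by a finite state machine, can be sketched as a transition table. The primitives and the press-and-turn task graph below (in the spirit of the child-proof bottle example) are invented for illustration.

```python
# Primitive manipulation functions: each acts on the object state and
# reports a result that drives the state machine.
def push_down(obj):
    obj["pressed"] = True
    return "ok"

def rotate(obj):
    obj["angle"] = obj.get("angle", 0) + 30
    return "ok" if obj.get("pressed") else "slip"

def lift(obj):
    obj["open"] = obj.get("angle", 0) >= 90
    return "ok"

# FSM: (state, last_result) -> (next_state, primitive to run).
FSM = {
    ("start", "ok"): ("press", push_down),
    ("press", "ok"): ("turn1", rotate),
    ("turn1", "ok"): ("turn2", rotate),
    ("turn2", "ok"): ("turn3", rotate),
    ("turn3", "ok"): ("lift", lift),
    ("lift", "ok"): ("done", None),
}

def run(obj):
    state, result = "start", "ok"
    while (state, result) in FSM:
        state, action = FSM[(state, result)]
        if action is None:
            break
        result = action(obj)
    return state, obj

final_state, cap = run({})
```

Because each primitive reports success or failure, unexpected results (like "slip") simply fall off the table and halt the task in a recognizable state.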
Conference Paper
The problem of how human skill can be represented as a parametric model using a hidden Markov model (HMM), and how an HMM-based skill model can be used to learn human skill, is discussed. The HMM is feasible for characterizing two stochastic processes, measurable action and immeasurable mental states, that are involved in skill learning. Based on the most-likely-performance criterion, the best action sequence can be selected from previously measured action data by modeling the skill as an HMM. This selection process can be updated in real time by feeding in new action data and modifying the HMM parameters. The implementation of the proposed method in a teleoperation-controlled space robot is discussed. The results demonstrate the feasibility of the method.
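The most-likely-performance criterion can be sketched with the scaled forward algorithm: score each candidate action sequence under the HMM and keep the best. The toy two-state "sticky" model below is an assumption for illustration, not the paper's learned skill model.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete observation
    sequence under an HMM (pi: initial, A: transition, B: emission)."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    alpha /= c
    ll = np.log(c)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        alpha /= c
        ll += np.log(c)
    return ll

# Toy skill model: two sticky hidden states with matching observations.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])

smooth = [0, 0, 0, 0, 0]            # steady demonstrated action
jerky = [0, 1, 0, 1, 0]             # erratic demonstrated action
best = max([smooth, jerky], key=lambda s: forward_loglik(s, pi, A, B))
```

Under this model the steady sequence scores higher, so it would be selected as the skill to replay; feeding in new demonstrations just adds candidates to the `max`.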
Article
In this paper we propose a new forthcoming research topic, the Intelligent Assisting System (IAS). Using this system, we are approaching the identification and analysis of human manipulation skills to be used for intelligent human operator assistance. A manipulation skill database enables the IAS to perform complex manipulations at the motion control level. Through repeated interaction with the operator for unknown environment states, the manipulation skills in the database can be increased on-line. A model for manipulation skill based on the grip transformation matrix is proposed, which describes the transformation between the object trajectory and the contact conditions. The dynamic behaviour of the grip transform is regarded as the essence of the performed manipulation skill. We describe the experimental system set-up of a skill acquisition and transfer system as a first approach to the IAS. A simple example of manipulation shows the feasibility of the proposed manipulation skill model. Furthermore, this paper derives a control algorithm that realizes object task trajectories, and its feasibility is shown by simulation.
Article
This article proposes a method for automatically designing sensors from the specification of a robot's task, its actions, and its uncertainty in control. The sensors provide the information required by the robot to perform its task, despite uncertainty in sensing and control. The key idea is to generate a strategy for a robot task by using a backchaining planner that assumes perfect sensing while taking careful account of control uncertainty. The resulting plan indirectly specifies a sensor that tells the robot when to execute which action. Although the planner assumes perfect sensing information, the sensor need not actually provide perfect information. Instead, the sensor provides only the information required for the plan to function correctly.
Article
Robots must plan and execute tasks in the presence of uncertainty. Uncertainty arises from sensing errors, control errors, and the geometry of the environment. By employing a combined strategy of force and position control, a robot programmer can often guarantee reaching the desired final configuration from all the likely initial configurations. Such motion strategies permit robots to carry out tasks in the presence of significant uncertainty. However, compliant motion strategies are very difficult for humans to specify. For this reason we have been working on the automatic synthesis of motion strategies for robots. In previous work (Donald 1988b; 1989), we presented a framework for computing one-step motion strategies that are guaranteed to succeed in the presence of all three kinds of uncertainty. The motion strategies comprise sensor-based gross motions, compliant motions, and simple pushing motions. However, it is not always possible to find plans that are guaranteed to succeed. For example, if tolerancing errors render an assembly infeasible, the plan executor should stop and signal failure. In such cases the insistence on guaranteed success is too restrictive. For this reason we investigate error detection and recovery (EDR) strategies. EDR plans will succeed or fail recognizably: in these more general strategies, there is no possibility that the plan will fail without the executor realizing it. The EDR framework fills a gap when guaranteed plans cannot be found or do not exist; it provides a technology for constructing plans that might work, but fail in a "reasonable" way when they cannot. We describe techniques for planning multi-step EDR strategies in the presence of uncertainty. Multi-step strategies are considerably more difficult to generate, and we introduce three approaches for their synthesis: the Push-forward Algorithm, Failure Mode Analysis, and the Weak EDR Theory. We have implemented the theory in the form of a planner, called LIMITED, in the domain of planar assemblies.
Book
From the Publisher: Vision based mobile robot guidance has proven difficult for classical machine vision methods because of the diversity and real time constraints inherent in the task. This book describes a connectionist system called ALVINN (Autonomous Land Vehicle In a Neural Network) that overcomes these difficulties. ALVINN learns to guide mobile robots using the back-propagation training algorithm. Because of its ability to learn from example, ALVINN can adapt to new situations and therefore cope with the diversity of the autonomous navigation task. But real world problems like vision based mobile robot guidance present a different set of challenges for the connectionist paradigm. Among them are: how to develop a general representation from a limited amount of real training data, how to understand the internal representations developed by artificial neural networks, how to estimate the reliability of individual networks, how to combine multiple networks trained for different situations into a single system, and how to combine connectionist perception with symbolic reasoning. Neural Network Perception for Mobile Robot Guidance presents novel solutions to each of these problems. Using these techniques, the ALVINN system can learn to control an autonomous van in under 5 minutes by watching a person drive. Once trained, individual ALVINN networks can drive in a variety of circumstances, including single-lane paved and unpaved roads, and multi-lane lined and unlined roads, at speeds of up to 55 miles per hour. The techniques also are shown to generalize to the task of controlling the precise foot placement of a walking robot.
Article
The use of active compliance enables robots to carry out tasks in the presence of significant sensing and control errors. Compliant motions are quite difficult for humans to specify, however. Furthermore, robot programs are quite sensitive to details of geometry and to error characteristics and must, therefore, be constructed anew for each task. These factors motivate the need for automatic synthesis tools for robot programming, especially for compliant motion. This paper describes a formal approach to the synthesis of compliant motion strategies from geometric descriptions of assembly operations and explicit estimates of errors in sensing and control. A key aspect of the approach is that it provides correctness criteria for compliant motion strategies.
Conference Paper
Sensory systems, such as computer vision, can be used to measure relative robot end-effector positions to derive feedback signals for control of end-effector positioning. The role of vision as the feedback transducer affects closed-loop dynamics, and a visual feedback control strategy is required. Vision-based robot control research has focused on vision processing issues, while control system design has been limited to ad-hoc strategies. We formalize an analytical approach to dynamic robot visual servo control systems by first casting position-based and image-based strategies into classical feedback control structures. The image-based structure represents a new approach to visual servo control, which uses image features (e.g., image areas, and centroids) as feedback control signals, thus eliminating a complex interpretation step (i.e., interpretation of image features to derive world-space coordinates). Image-based control presents formidable engineering problems for controller design, including coupled and nonlinear dynamics, kinematics, and feedback gains, unknown parameters, and measurement noise and delays. A model reference adaptive controller (MRAC) is designed to satisfy these requirements.
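The image-based structure can be sketched with the standard point-feature interaction matrix and a simple proportional law v = -lambda * pinv(L) @ e; the paper itself goes further, to a model reference adaptive controller, and the known point depths Z assumed here are one of the engineering problems it addresses.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction (image Jacobian) matrix for a normalized image
    point at (x, y) with depth Z; columns are (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1/Z,    0, x/Z,     x*y, -(1 + x*x),  y],
        [   0, -1/Z, y/Z, 1 + y*y,      -x*y,  -x],
    ])

def ibvs_velocity(points, desired, Z, lam=0.5):
    """Proportional image-based law: camera velocity from feature error."""
    e = (points - desired).ravel()
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in points])
    return -lam * np.linalg.pinv(L) @ e, L, e

points = np.array([[0.1, 0.0], [-0.1, 0.1], [0.0, -0.1]])
desired = np.array([[0.0, 0.0], [-0.2, 0.1], [0.1, -0.1]])
v, L, e = ibvs_velocity(points, desired, Z=1.0)
# One Euler step of the feature dynamics s_dot = L v shrinks the error:
e_next = e + 0.1 * L @ v
```

The control signal is computed entirely from image features, with no intermediate reconstruction of world-space coordinates, which is the point of the image-based structure.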
Conference Paper
Integrating sensors into robot systems is an important step towards increasing the flexibility of robotic manufacturing systems. Current sensor integration is largely task-specific, which hinders flexibility. The authors are developing a sensorimotor command layer that encapsulates useful combinations of sensing and action which can be applied to many tasks within a domain. The sensorimotor commands provide a higher level in which to terminate task-strategy plans, which eases the development of sensor-driven robot programs. This paper reports on the development of both force- and vision-driven commands, which are successfully applied to two different connector insertion experiments.
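A guarded move is one natural example of such a sensorimotor command, coupling motion with a force-sensing termination condition so that higher-level plans can end in a sensed event rather than a blind position. The toy environment and threshold below are assumptions for illustration, not the paper's command set.

```python
def guarded_move(read_force, step_axis, f_contact=2.0, max_steps=1000):
    """Advance along an axis until the sensed force exceeds f_contact.

    Returns ("contact", steps_taken) on success, or ("no_contact",
    max_steps) if the motion budget runs out first.
    """
    for steps in range(max_steps):
        if abs(read_force()) >= f_contact:
            return "contact", steps
        step_axis()
    return "no_contact", max_steps

# Toy world: a surface sits 25 steps away; force rises after penetration.
world = {"z": 0, "surface": 25}
read = lambda: max(0, world["z"] - world["surface"]) * 1.0   # 1 N per step
step = lambda: world.update(z=world["z"] + 1)
result, steps = guarded_move(read, step)
```

Because the command carries its own sensing, the calling task strategy only says "move down until contact" and never handles raw force samples itself.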