
MoveIt! Task Constructor for Task-Level Motion Planning

Authors:
Michael Görner, Robert Haschke, Helge Ritter, Jianwei Zhang
Abstract— A lot of motion planning research in robotics focuses on efficient means to find trajectories between individual start and goal regions, but it remains challenging to specify and plan robotic manipulation actions which consist of multiple interdependent subtasks. The Task Constructor framework we present in this work provides a flexible and transparent way to define and plan such actions, enhancing the capabilities of the popular robotic manipulation framework MoveIt!.¹ Subproblems are solved in isolation in black-box planning stages, and a common interface is used to pass solution hypotheses between stages. The framework enables the hierarchical organization of basic stages using containers, allowing for sequential as well as parallel compositions. The flexibility of the framework is illustrated in multiple scenarios performed on various robot platforms, including bimanual ones.
I. INTRODUCTION
Motion planning for robot control traditionally considers the problem of finding a feasible trajectory between a start and a goal pose, where both are specified in either joint or Cartesian space. Standard robotic applications, however, are usually composed of multiple, interdependent sub-stages with varying characteristics and sub-goals. To find trajectories that satisfy all constraints, all steps need to be planned in advance to yield feasible, collision-free, and possibly cost-optimized paths.
Typical examples are pick-and-place tasks, which require (i) finding a set of feasible grasp and place poses, and (ii) planning a feasible path connecting the initial robot pose to a compatible candidate pose. The latter, in turn, involves approaching, lifting, and retracting – performing well-defined Cartesian motions during these critical phases. As there typically exist several grasp and place poses, any combination of them might be valid and should be considered for planning.
Such problems present various challenges: Individual planning stages are often strongly interrelated and cannot be considered independently from each other. For example, turning an object upside-down in a pick-and-place task renders a top grasp infeasible. Likewise, an initial joint configuration that is adequate for the first part of a task could interfere with a second part due to inconvenient joint limits.
The present work proposes a framework to describe and
plan composite tasks, where the high-level sequence of
actions is fixed and known in advance, but the concrete
realization needs to adapt to the environmental context.
These authors contributed equally to this work.
This work was supported by the DFG Center of Excellence EXC 277,
the DFG Transregional Research Centre CML, TRR-169, and has received
funding from EU project SaraFun (grant 644938).
1The Task Constructor framework is publicly available at
https://github.com/ros-planning/moveit_task_constructor
Fig. 1. Example task: a UR5 robot executes a task composed of (a) picking
up a bottle from the table, (b) pouring liquid into a nearby glass, and (c)
placing the bottle in a different location. Markers (arrows and frames) show
key aspects of the task, including approach and lift directions during (a),
bottle poses for (a) and (c), and the tip of the bottle during (b). Fig. 5
illustrates the associated task structure.
With this, we aim to fill a gap between high-level symbolic task planning and low-level manipulation planning, thus contributing to the field of Task and Motion Planning. Within the framework, tasks are described as hierarchical tree structures providing both sequential and parallel combinations of subtasks. The leaves of a task tree represent primitive planning stages, which are solved by arbitrary motion planners integrated within MoveIt!, thus providing the full power and flexibility of MoveIt! to model the characteristics of specific subproblems. To account for interdependencies, stages propagate the world state of their sub-solutions within the task tree. Efficient schedulers are proposed to focus the search on critical parts and cheap-to-compute stages of the task first and thus retrieve cost-economical solutions as early as possible. Continued planning can improve the quality of discovered solutions over time, taking into consideration all generated sub-solutions.
Additionally, the explicit factorization into distinct stages
and world states facilitates error analysis: individual parts of
the task can be investigated in isolation, and key aspects of
individual stages can be visualized easily. Fig. 1 illustrates
an example task with supporting visualizations.
II. RELATED WORK
The scope of this work lies between two fields of research.
On the one side, manipulation planning emphasizes the
problem of trajectory planning with multiple kinematic and
dynamic constraints [1], [2]. These approaches can cope with
multiple constraints for a single task, but usually do not
factorize efficiently into comprehensive subproblems.
On the other side, the symbolic task planning community has long realized that reasoning about robotic actions has to consider geometric constraints at the planning level, forming the field of task and motion planning [3]–[6]. While these approaches demonstrate impressive showcases, solving complex, puzzle-like scenarios, they are often too generic and very complicated to configure for concrete use cases. Because their task planning approach involves both symbolic- and geometry-level planning simultaneously, these systems are vulnerable to small changes in parameterization, strongly depend on an accurate and consistent domain representation, and can exhibit behavior that is formally adequate, but surprising to humans. Interrelations between various subtasks either need to be modeled explicitly in the symbolic domain, or many potential sub-solutions have to be rejected afterwards. As a consequence, the underlying backtracking-based search is inefficient, and it is difficult to exploit the local structure of a specific problem.
Instead of solving such generic task problems, we focus
on the common subproblem of finding feasible sequences
of trajectories given that the high-level action sequence
is already known in advance (with the notable exception
of well-defined alternative pathways). The assumed action
sequence could be compared to the plan skeletons defined
in [7]. Whereas the authors propose methods to convert
action sequences into discrete-space constraint satisfaction
problems, we utilize traditional motion planning algorithms
to find solutions in continuous space.
Another closely related work was presented in [8]: the roadmap-based planning algorithm Multi-Modal-PRM generates trajectories across multiple different configuration manifolds. Whereas that work emphasizes the integrated planning procedure, relying on existing manifold specifications, our work focuses on these specifications instead. We explicitly treat the planning processes of primitive stages as black boxes to enhance modularity and developer insight through the explicit exchange of generated world states.
In MoveIt!, the composition of multiple planning steps
is partially supported by the manipulation stack. However,
this API is limited to sequential pick-and-place requests that
are planned for in a greedy fashion, often resulting in poor-
quality or no solutions at all.
The Descartes planning library [9] follows a strategy similar to this work, searching all possible paths in a graph formed by sets of consecutive goal states. In contrast to Descartes, which restricts itself to Cartesian paths through fixed waypoints, this framework allows for arbitrary motion planning stages as well as arbitrarily complex hierarchies.
For shared robot control, affordance templates [10] provide the structure and visualization to implement single object-centered tasks, like turning a valve, through multiple Cartesian end-effector waypoints. While their implementation employs greedy forward-planning to solve for Cartesian trajectories, the constructed task specifications could be used as Cartesian planning stages within this work.
Fig. 2. Stage types distinguished by their interface: a) generators, b), c)
forward and backward propagating stages, d) connecting stages. Black dots
indicate input states, red stars indicate newly spawned states.
III. TASK CONSTRUCTOR FRAMEWORK
A. Task Description
Within this framework, tasks are composed in a hierarchical fashion from primitive planning stages that describe atomic motion planning problems which can be solved by existing motion planning frameworks like OpenRAVE [11], Klamp't [12], or MoveIt! [13]. These frameworks typically allow for motion planning from a single start to a goal configuration, both of which are usually fully specified in configuration space. Often they also permit specifying goal regions, both in configuration and Cartesian space, and appropriate state samplers are employed to yield discrete configuration-space states for planning.
Individual planning stages communicate their results via
a common interface using a MoveIt! planning scene to
describe the whole state of the environment relevant for
motion planning. This comprises the shapes and poses of
all objects, the collision environment, all robot joint states,
and information about objects attached to the robot. This
geometric/kinematic state description can be augmented by
additional semantic information in terms of typed, named
properties, forming the final state representation. Each stage
then attempts to connect states from its start and end interfaces via one or more planned trajectories.
Container stages allow for hierarchical grouping of stages.
Depending on the type of the container, solutions found by its
children are converted to compound solutions and propagated
up the task hierarchy (for more details, refer to section III-D).
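
To make this concrete, the following sketch is modeled on the publicly available C++ API of the framework (exact class and method names may differ between versions; group and pose names are placeholders). It builds a minimal task from a current-state generator and a single move stage:

```cpp
#include <moveit/task_constructor/task.h>
#include <moveit/task_constructor/stages/current_state.h>
#include <moveit/task_constructor/stages/move_to.h>
#include <moveit/task_constructor/solvers/pipeline_planner.h>

using namespace moveit::task_constructor;

Task task("minimal_demo");
task.loadRobotModel();

// Generator stage: seeds the pipeline with the current planning scene.
task.add(std::make_unique<stages::CurrentState>("current state"));

// Propagating stage: plan from the seeded state to a named goal posture
// using MoveIt!'s sampling-based planning pipeline.
auto planner = std::make_shared<solvers::PipelinePlanner>();
auto move = std::make_unique<stages::MoveTo>("move to ready", planner);
move->setGroup("arm");    // planning group name is robot-specific
move->setGoal("ready");   // named posture from the robot's SRDF (placeholder)
task.add(std::move(move));

task.plan();  // enumerate solutions connecting the stages
```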
B. Stage Interface Types
We distinguish stages based on their interface type (see
Fig. 2). The traditional planning stage is the connecting
stage, which takes a pair of start/end states and tries to
connect them with a feasible solution trajectory. This type of
planning stage often corresponds to transit motions that move
the robot between different regions of interest. In this case,
any combination of states from the start and end interfaces
is considered for planning, realizing an exhaustive search.
As such a planning stage usually affects only a small set of active joints, a pair of start and end states needs to match w.r.t. all other aspects of the state representation. In particular, all other joints as well as the number, poses, and attachment status of collision objects need to match.
The second type, generator stages, populate their start
and end interfaces from scratch, without any explicit input
from adjacent stages. Usually, they define key aspects of
an action, for example defining the initial robot state or a
fixed goal state, which subsequently can serve as input for
adjacent stages. Another example is a grasp generator, which
provides pairs of pre- and final grasp poses, computing their
corresponding robot poses based on inverse kinematics. In
this case, generated start and end states usually differ and
are connected by a non-trivial joint trajectory (provided by
the grasp planner) to accomplish actual grasping.
The most common type of stage is the propagator, which reads an input state from either its start or its end interface, plans to fulfill a predefined goal or action, and finally spawns one (or more) new state(s) at the opposite interface, together with a trajectory connecting both states.
Note that propagation can act in both directions, from
start to end as well as from end to start. For this reason,
it is important to distinguish the temporal from the logical
flow. The temporal flow is always from a start to an end
interface and defines the temporal evolution of a solution
trajectory. However, the logical (program) flow defines the
state information flow during planning and is determined
by the propagation direction of individual stages. Backward
propagation allows for planning a relative motion to reach
a given end state from a yet unknown start state. A typical
example is the Cartesian approach phase before grasping:
Here the final grasp pose is given, and a linear approach
motion to the pre-grasp pose needs to be found, whose
extent is only coarsely specified within a range of several
centimeters. Corresponding solutions are planned in reverse
direction, from the end towards the start state. Finally, the
solution is reversed to yield a trajectory properly evolving in
time from start to end.
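
Such a backward-planned approach motion might be configured as follows; this is a sketch based on the framework's public API (the stage label and frame id are placeholders), assuming a relative-motion stage restricted to backward propagation:

```cpp
#include <moveit/task_constructor/stages/move_relative.h>
#include <moveit/task_constructor/solvers/cartesian_path.h>
#include <geometry_msgs/Vector3Stamped.h>

using namespace moveit::task_constructor;

// Cartesian straight-line approach, planned backwards from the grasp pose.
auto cartesian = std::make_shared<solvers::CartesianPath>();
auto approach = std::make_unique<stages::MoveRelative>("approach object", cartesian);
approach->restrictDirection(stages::MoveRelative::BACKWARD);
approach->setGroup("arm");
approach->setMinMaxDistance(0.05, 0.15);  // extent only coarsely specified

geometry_msgs::Vector3Stamped direction;
direction.header.frame_id = "tool_frame";  // placeholder end-effector frame
direction.vector.z = 1.0;                  // approach along the tool z-axis
approach->setDirection(direction);
```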
Obviously, the interface types of stages constrain how they
can be sequenced: A stage spawning new states along one
direction (forward / backward) should be followed/preceded
by a stage that reads from the shared interface and vice
versa. Otherwise, the logical information flow would be
broken. The framework provides mechanisms for automatic
derivation and validation of the connectivity of a specified
Task prior to any planning and thus can reject wrongly
configured task trees already at configuration time.
C. Available Primitive Stages
The Task Constructor library provides a connecting stage and two basic propagating stages, all of which are driven by individual planner instances. We decided to decouple the planning from the stage implementation to increase modularity and facilitate code reuse. While stages specify a subtask, i.e., which robot states to connect, planners perform the actual work of finding a feasible trajectory between these two states. Hence, planners can be reused in different stages. Two basic planner instances are provided: (i) MoveIt!'s planning pipeline offering wrappers for OMPL [14], CHOMP [15], and STOMP [16]; and (ii) a Cartesian path generator based on straight-line Cartesian interpolation and validation.
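
A sketch of this decoupling, again following the public API (constructor signatures may vary between versions; pose names are hypothetical): a single planner instance can drive several stages.

```cpp
#include <moveit/task_constructor/solvers/pipeline_planner.h>
#include <moveit/task_constructor/stages/move_to.h>

using namespace moveit::task_constructor;

// A single sampling-based planner instance, shared between stages.
auto sampling = std::make_shared<solvers::PipelinePlanner>();  // OMPL by default

auto to_pregrasp = std::make_unique<stages::MoveTo>("move to pre-grasp", sampling);
to_pregrasp->setGroup("arm");
to_pregrasp->setGoal("pre_grasp");  // hypothetical named SRDF pose

auto to_home = std::make_unique<stages::MoveTo>("move home", sampling);
to_home->setGroup("arm");
to_home->setGoal("home");           // hypothetical named SRDF pose
```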
The two propagating stages allow for (i) absolute and (ii)
relative goal pose specification, either in joint or Cartesian
space. While in the former case, the goal pose is specified in
an absolute fashion w.r.t. a known reference frame, the latter
case permits specifying relative motions of a specific end-
effector link. In the general case, a twist motion (translation
direction and rotation axis) is specified w.r.t. an arbitrary
known reference frame and finally applied to the given
end-effector link. This representation makes it possible, for
example, to specify a linear approach or lifting motion
relative to the object or a global reference frame.
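
As a sketch of such a relative twist motion (frame names are placeholders; the setters follow the public API), a lifting motion specified in the world frame but applied to the end-effector link could read:

```cpp
#include <moveit/task_constructor/stages/move_relative.h>
#include <moveit/task_constructor/solvers/cartesian_path.h>
#include <geometry_msgs/TwistStamped.h>

using namespace moveit::task_constructor;

auto cartesian = std::make_shared<solvers::CartesianPath>();
auto lift = std::make_unique<stages::MoveRelative>("lift object", cartesian);
lift->setGroup("arm");
lift->setIKFrame("tool_frame");           // placeholder end-effector link
lift->setMinMaxDistance(0.03, 0.10);

geometry_msgs::TwistStamped twist;
twist.header.frame_id = "world";          // direction given in a global frame
twist.twist.linear.z = 1.0;               // translate upwards, no rotation
lift->setDirection(twist);
```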
Two generator stages are provided: (i) the current state stage, fetching the current planning scene state from MoveIt!'s move_group node, and (ii) the fixed state stage, which allows specifying an arbitrary, predefined goal state.
In some cases, the sequential information within the task pipeline is too restrictive to specify a task: Particularly, generator stages might depend on the outcome of another, non-neighboring stage, thus necessitating a short-cut connection within the task pipeline. For example, to place an object after usage at the original pick location, the corresponding place-pose generator needs access to the original pick pose. To allow for such short-cuts, generators can subscribe to solutions found by another stage.
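
Such a subscription can be sketched with the monitoring-generator mechanism of the public API (the wiring below is a hypothetical fragment; stage names, frames, and coordinates are placeholders):

```cpp
#include <moveit/task_constructor/stages/generate_pose.h>
#include <geometry_msgs/PoseStamped.h>

using namespace moveit::task_constructor;

// Keep a raw pointer to the grasp stage when adding it to the task.
Stage* grasp_stage = grasp.get();
task.add(std::move(grasp));

// The generator monitors grasp solutions and spawns place poses
// that are consistent with the chosen grasp.
auto place_pose = std::make_unique<stages::GeneratePose>("place pose");
place_pose->setMonitoredStage(grasp_stage);

geometry_msgs::PoseStamped target;
target.header.frame_id = "table";  // placeholder frame
target.pose.position.x = 0.4;
target.pose.orientation.w = 1.0;
place_pose->setPose(target);
```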
D. Available Containers
As mentioned before, container stages are used to hierarchically compose stages into a tree. Each container encapsulates and groups a set of children stages performing some semantically coherent subtask, e.g., grasping or placing. Children stages can easily inherit properties from their parent, thus reducing the configuration overhead. Two main types are distinguished: serial and parallel containers.
Serial containers organize their children into a linear sequence of subtasks which need to be performed in the specified order to accomplish the overall task of the container. Accordingly, a solution of a serial container connects a state from the start interface of the first child stage to the end interface of the last child via a fully-connected, multi-stage trajectory path.
In a sequential pipeline, generator-type stages play a particularly important role: They generate seed states, which subsequently are extended (in both directions) via propagating stages to form longer partial solution paths. Finally, connecting stages are responsible for linking individual partial solution paths to yield a fully-connected solution ranging from the very beginning to the very end of the pipeline.
Note that in general there can be multiple paths connecting
a single pair of start-end states and there can be multiple
solutions corresponding to different pairs of start-end states.
Hence, it becomes essential to rank all found solutions
according to a task-specific cost function (see Sec. III-E).
Parallel containers allow for planning several alternative solutions, e.g., grasping with the left or the right arm. Each solution found by its children directly contributes to the shared pool of solutions of the container. Different types of parallel containers are distinguished, depending on the planning strategy for children (a code sketch follows this list):
(i) Alternatives: Consider all children in parallel. All generated solutions become solutions of the container.
(ii) Fallbacks: Consider children sequentially, only proceeding to the next child if the previous one has exhausted its solution space without success. Only solutions found by the first successful child constitute the solutions of the container.
(iii) Independent Components: Consider all children in parallel. In contrast to (i), children generate solutions for disjoint sets of robot joints (e.g., arm and hand), which are subsequently merged into a single coherent trajectory performing all sub-solutions in parallel. Obviously, such a merge might fail, and explicit constraint checks (including collision checking) are required for final validation. This divide-and-conquer approach is particularly useful if the planning spaces of the individual children are truly independent, as for example when approaching an object for bimanual grasping. In this case, the motions of both arms can be planned independently in lower-dimensional configuration spaces. To enforce independence, one may introduce additional constraints, e.g., a plane separating the Cartesian workspaces of the two arms. This task-specific knowledge needs to be provided with the task specification.
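
A minimal sketch of the Fallbacks container type, based on the public API (pose names are hypothetical): try cheap straight-line interpolation first, and only fall back to a sampling-based planner if it fails.

```cpp
#include <moveit/task_constructor/container.h>
#include <moveit/task_constructor/stages/move_to.h>
#include <moveit/task_constructor/solvers/cartesian_path.h>
#include <moveit/task_constructor/solvers/pipeline_planner.h>

using namespace moveit::task_constructor;

// Fallbacks: children are consulted in order; the second stage is only
// considered if the first one cannot solve the motion.
auto fallbacks = std::make_unique<Fallbacks>("move to goal");

auto direct = std::make_unique<stages::MoveTo>(
    "straight line", std::make_shared<solvers::CartesianPath>());
direct->setGroup("arm");
direct->setGoal("goal");  // hypothetical named pose
fallbacks->insert(std::move(direct));

auto sampled = std::make_unique<stages::MoveTo>(
    "sampling-based", std::make_shared<solvers::PipelinePlanner>());
sampled->setGroup("arm");
sampled->setGoal("goal");
fallbacks->insert(std::move(sampled));

task.add(std::move(fallbacks));
```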
E. Scheduling
The proposed framework exhaustively enumerates all possible solution paths connecting individual interface states, which obviously suffers from combinatorial explosion. Thus, scheduling heuristics are applied to focus the search on promising solution paths first.
To this end, solutions have an associated cost that is computed in a task-specific fashion by user-defined cost functions. Potential cost functions include, among others, trajectory length, the amount of Cartesian or joint motion, and minimum or average clearance. Serial container stages accumulate the costs of all sub-solutions of a full path and only report the minimal-cost path for any pair of start-end states. In a similar fashion, parallel containers only report the minimal-cost solutions of their children. Each stage, and particularly the root stage of the task tree, can then rank its solutions according to this cost and stop planning when an acceptable overall solution is found.
Each stage ranks all its incoming interface states according
to (i) the length and (ii) cost of the associated partial
solution. The former criterion biases the search to depth-
first (in contrast to breadth-first), which ensures finding full
solutions as soon as possible. If a partial solution fails to
extend at either end, this failure propagates to the other end,
and the corresponding interface states are removed from the
interfaces of the associated stages as there is no benefit in
continuing work on that particular solution.
Additionally, containers handle the scheduling of their
children stages. Again the serial container plays the most
crucial role for this. Generators need to be scheduled first
in order to generate seed states, which subsequently are
extended via propagating stages, and finally connected to
full solution paths. Obviously, scheduling of connecting
stages should be postponed as long as possible, because
their pair-wise combination of start-end states leads to a
combinatorial explosion of the search space and thus their
planning attempts should be limited to the most promising
candidate pairs only.
On top of these heuristics, there is room for further optimization. For example, one could try to balance the expected computation time vs. the expected connection success (or reduction in overall trajectory cost) by ranking stages according to the ratio of these values. To yield estimates for them, one could consider heuristic measures (e.g., joint or Cartesian-space distance of states), or maintain statistics over previous stage executions. To yield higher diversity and randomization, the actual ranking can be based on the Boltzmann distribution of the computed performance rank.
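
For concreteness, one common instantiation of such a Boltzmann-based selection (a sketch; the temperature parameter T and symbol names are our own, not taken from the paper) chooses stage i with probability

```latex
p_i = \frac{\exp(-r_i / T)}{\sum_j \exp(-r_j / T)}
```

where r_i is the computed performance rank of stage i and T controls greediness: T → 0 recovers strict rank order, while a large T approaches uniform random selection.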
F. Execution
The main contribution of this work lies in modeling and planning manipulation tasks. Nonetheless, a solution should eventually be executed on the actual robot. Traditionally, planning research simply forwards the final solution trajectory to a low-level controller. To this end, the proposed framework provides utilities to access planned task solutions, such that the user can decide whether to execute, for example, (i) the first valid solution, (ii) the first solution below some cost threshold, or (iii) the best trajectory found within a given amount of time or after exhaustively searching the full solution space. Modifications to the world state performed as part of the task, e.g., attaching or releasing an object, are performed in the same fashion as trajectories are executed, thus ensuring a consistent world representation throughout task execution.
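
A sketch of such a selection policy against the public API (the cost threshold is a user choice, and the execution call is an assumption; exact signatures differ between framework versions):

```cpp
#include <moveit/task_constructor/task.h>

using namespace moveit::task_constructor;

// Plan until at most 10 full solutions are found, then execute the first
// one whose cost falls below a user-chosen threshold.
bool planAndExecute(Task& task, double cost_threshold) {
  if (!task.plan(/*max_solutions=*/10))
    return false;

  // Solutions are maintained in ascending cost order.
  for (const auto& solution : task.solutions()) {
    if (solution->cost() < cost_threshold) {
      task.execute(*solution);  // assumed execution utility
      return true;
    }
  }
  return false;
}
```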
Given the modularity of the task pipeline, several improvements are possible. Assuming feasible trajectories for the whole task will be found eventually, initial stages (or groups of stages) could commit early to a particular partial solution and forward it for execution before a full solution trajectory is found. As a consequence, this strategy can noticeably reduce the perceived planning time as the robot can start to move early. This is particularly useful when initial stages only yield a single canonical solution, but it can also be used to significantly prune the search space, assuming full solutions will be available for most early sub-solutions.
To handle failures during task execution (e.g., due to dynamic changes in the environment, or because an early-executed partial solution eventually turns out to be incompatible with later planning stages), a recovery strategy is essential. Again, the modular structure of the task pipeline can be exploited for intelligent recovery, dependent on the failed sub-stage. Potential strategies might replan from the reached stage, or partially revert sub-solutions to continue planning from a well-defined state.
In the future, it should also be possible to specify different
execution controllers (or parameterizations) for individual
stages (or groups of stages) to account for different control
needs. For example, an approach stage might employ visual
servoing to account for perception inaccuracies, and a grasp
stage should use a compliant motion strategy until contact is
established and subsequently switch to force-controlled grasp
stabilization. As long as the motion of these reactive, sensor-driven controllers remains within specified bounds around the planned trajectory, subsequent stages can connect seamlessly.
Finally, solution segments found by individual planning
stages can be post-processed to yield a globally smooth
solution trajectory. This requires local modifications at the
transition between consecutive segments as they might have
discontinuous velocity or acceleration profiles. To this end,
acceleration-aware trajectory generation [17] can be applied
to splice sub-trajectories smoothly within position bounds.
The resulting trajectory segments might only replace the
original solutions if they satisfy collision checks and other
constraints.
G. Introspection
As pointed out in [13], a key element for the success and
acceptance of a software package is its transparency and ease
of use. Although MoveIt! comes with its own implementation of a manipulation pipeline, its major drawback is its opacity: the provided pick and place stages are black boxes that do not allow for inspection of their inner workings.
Hence, essential elements of the presented software package are pipeline validation, error reporting, and introspection. Stages can publish both successful and failed solution attempts and augment them with arbitrary visual markers or comments, thus providing useful hints for failure analysis. This information, together with the status of the overall planning progress of the pipeline (number of successful and failed solution attempts per stage), is regularly published.
In rviz, the user can monitor the status of the task pipeline
and interactively navigate individual solutions of all stages,
inspecting their associated markers and comments. In the
future, it is also planned to provide an interactive GUI
to configure, validate, execute, and finally save a planning
pipeline directly in rviz.
IV. APPLICATIONS
In the following, we describe two typical manipulation tasks and showcase the involved planning stages. The first task considers picking up an object. The second demonstrates a pouring task, which involves picking up a bottle, approaching a glass, performing the pouring, and placing the bottle back on the table. The accompanying video shows that the very same task pipeline can be employed on various robots.
A. Bi-Modal Object Picking
As picking up an object is a common subtask of many manipulation tasks, a dedicated stage is provided for this. To apply this stage to a specific robot, only a few key properties need to be configured, namely the end-effector to use, the name of the object to grasp, and the intended approach and retract directions. The actual grasping is planned by another generic stage, the grasp stage, which is provided as a configurable child stage to the pick template.
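
Configuring the pick template might look as follows; this sketch follows the publicly available stage API (object, frame, and pose names are placeholders, exact setter names may vary, and further configuration such as the IK frame is elided):

```cpp
#include <moveit/task_constructor/stages/pick.h>
#include <moveit/task_constructor/stages/simple_grasp.h>
#include <moveit/task_constructor/stages/generate_grasp_pose.h>
#include <geometry_msgs/TwistStamped.h>

using namespace moveit::task_constructor;

// Grasp-pose generator, wrapped by a generic grasp stage.
auto gen = std::make_unique<stages::GenerateGraspPose>("generate grasp pose");
gen->setObject("bottle");            // placeholder object name
gen->setAngleDelta(0.2);             // sample grasp frames every 0.2 rad
gen->setPreGraspPose("open");        // named gripper posture

auto grasp = std::make_unique<stages::SimpleGrasp>(std::move(gen));
auto pick = std::make_unique<stages::Pick>(std::move(grasp), "right pick");
pick->setProperty("eef", std::string("right_gripper"));  // placeholder

geometry_msgs::TwistStamped approach, lift;
approach.header.frame_id = "right_tool";  // placeholder tool frame
approach.twist.linear.z = 1.0;
pick->setApproachMotion(approach, 0.05, 0.15);

lift.header.frame_id = "world";
lift.twist.linear.z = 1.0;
pick->setLiftMotion(lift, 0.03, 0.10);
```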
In the example shown in Fig. 3, we consider dual-handed robots, which can use either their left or right hand for grasping.
bimodal pick task
  current state
  alternatives
    right pick
      open gripper
      move to object
      approach object
      grasp
        compute ik
          generate grasp pose
        permit object collision
        close gripper
        attach object
      lift object
    left pick
      ...

Fig. 3. Bi-modal Pick Task: Left / right arm is chosen by the parallel container alternatives. The propagation direction of planning states and solution monitoring are indicated by arrows in the original figure (not reproduced here). Hierarchy is indicated by indentation.
Consequently, the pipeline comprises two alternative pick stages (left and right), configured to use the respective end-effector. The alternatives parallel container follows the current state generator, which fetches the current planning scene state from MoveIt!.
Planning for the pick stages starts with the grasp generator stage and proceeds in both directions: The approach stage realizes a Cartesian, straight-line approach motion; starting from the pre-grasp posture, it is planned backwards to find a safe starting pose for grasping (see Fig. 4). On the opposite side, the lift stage starts from the grasped-object state and realizes the Cartesian lifting motion in a forward fashion.
The grasp stage, in our simple scenarios, samples collision-free pre-grasp poses relative to the object at hand, computes the inverse kinematics to yield a joint-state pose suitable for use in the interface state, and finally performs grasping by closing the gripper. To this end, collision detection between the end-effector and the object is first turned off to allow the end-effector to contact or penetrate the object when actuating the grasp pose. For real-world execution, the close gripper stage obviously requires a force-controlled or compliant controller to avoid squashing the object. Finally, the object is attached to the end-effector link, such that further planning knows about the grasped-object state. These helper subtasks, which only modify the planning scene state, but do not actually perform any planning, are realized by utility stages, which permit changing allowed collisions as well as attaching and detaching collision objects.
The sampling of pre-grasp poses, in our examples, considers a pre-defined open-gripper posture for the end-effector and proposes Cartesian poses of the end-effector relative to object-specific grasp frames. We sample grasp frames T^g_w by rotating the object frame about its z-axis in steps of 0.2 rad, resulting in 32 grasp frame samples.
Fig. 4. Temporally ordered sequence of planning scene states of the pick task shown in Fig. 3: initial, gripper opened, pre-approach, pre-grasp, post-grasp, lifted.
pouring task
  current state
  pick bottle
    ...
    grasp
      ...
  move to pouring start
  compute ik
    bottle above glass
  pouring
  place bottle
    move to place
    set down bottle
      compute ik
        bottle place location
      release bottle
        open gripper
        detach object
        forbid object collision
      retreat gripper

Fig. 5. Pouring Task: The manipulator picks a bottle, performs constrained pouring motions, and places it back on the table. All stages (except pouring) are provided by the framework. Hierarchy is indicated by indentation.
The end-effector is placed relative to these grasp frames by applying the inverse of a fixed tool-to-grasp transform T^g_t: T^t_w = T^g_w · T^t_g. The resulting transform T^t_w is used as the target for inverse kinematics. Before applying inverse kinematics sampling, the IK stage validates the feasibility of the targeted pose, i.e., whether placing the end-effector at the target is collision-free. If not, IK sampling is skipped and a failure is reported immediately. While the first solution on all studied robots is found within a fraction of a second, the planning time for an exhaustive search varies clearly between the studied robots and is dominated by the number of sampling-based planning attempts (in stage move to object), which in turn is determined by the number of solutions found by the grasp stage. While Pepper, having only a 5-DoF arm, finds only a single feasible grasp pose (< 1 s), the Baxter robot finds more than 60 solutions (45 s).
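
A sketch of this sampling in Eigen (hypothetical variable and function names; the framework performs the equivalent computation internally):

```cpp
#include <Eigen/Geometry>
#include <cmath>
#include <vector>

// Sample IK targets T^t_w for the tool frame: rotate the object frame about
// its z-axis and apply the inverse of the fixed tool-to-grasp transform T^g_t.
std::vector<Eigen::Isometry3d> sampleIKTargets(
    const Eigen::Isometry3d& object_in_world,   // T^o_w
    const Eigen::Isometry3d& grasp_in_tool) {   // T^g_t
  std::vector<Eigen::Isometry3d> targets;
  for (double angle = 0.0; angle < 2 * M_PI; angle += 0.2) {
    // Grasp frame T^g_w: object frame rotated about its z-axis.
    Eigen::Isometry3d grasp_in_world =
        object_in_world * Eigen::AngleAxisd(angle, Eigen::Vector3d::UnitZ());
    // IK target T^t_w = T^g_w * (T^g_t)^{-1} = T^g_w * T^t_g.
    targets.push_back(grasp_in_world * grasp_in_tool.inverse());
  }
  return targets;  // 32 samples for a step of 0.2 rad
}
```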
B. Pouring Task
The second application demonstrates the use of the task pipeline with a custom stage, using the example of pouring into a glass. Its specification is shown in Fig. 5; its execution on a UR5 robot is illustrated in Fig. 1 and the accompanying video. While the scenario requires a custom pouring stage, all other stages are realized with suitably parameterized standard stages provided by the planning framework. The task reuses the previously described pick container to pick up the bottle. A similar container, place, provides the generic subtask to compute place motion sequences, given a generator for feasible place poses.
The pouring stage is implemented as tilting the tip of an
attached object (the bottle) in a pouring motion over another
object (the glass) for a specific period of time. A Cartesian
planner solves the path along object-centric waypoints.
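
A custom stage like this plugs into the framework by deriving from one of the propagator base classes; the following skeleton (a sketch, not the actual pouring implementation) shows the essential structure, assuming the public PropagatingForward interface:

```cpp
#include <moveit/task_constructor/stage.h>
#include <moveit/planning_scene/planning_scene.h>
#include <moveit/robot_trajectory/robot_trajectory.h>

namespace mtc = moveit::task_constructor;

// Skeleton of a forward-propagating custom stage.
class PourInto : public mtc::PropagatingForward {
public:
  PourInto(const std::string& name) : mtc::PropagatingForward(name) {}

  void computeForward(const mtc::InterfaceState& from) override {
    // Copy the incoming planning scene and plan the tilting motion
    // along object-centric waypoints (planning code elided).
    planning_scene::PlanningScenePtr to = from.scene()->diff();
    robot_trajectory::RobotTrajectoryPtr traj;  // filled by the planner

    mtc::SubTrajectory solution(traj);
    solution.setCost(traj ? traj->getDuration() : 0.0);  // task-specific cost
    sendForward(from, mtc::InterfaceState(to), std::move(solution));
  }
};
```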
The four generator stages involved in this task are interrelated: the last two, bottle above glass and bottle place location, depend on the grasp pose chosen in the pick stage. To this end, they monitor the solutions generated by the grasp stage and produce matching solutions.
Lastly, moving the bottle over the glass and moving it towards its place location are transit motions that have to account for an additional path constraint, namely keeping the bottle upright to avoid spilling the liquid. This constraint is specified as part of the stage description and is passed on to the underlying trajectory planner. To accelerate planning with the constraint, we make use of configuration space approximations [18] implemented for OMPL-based solvers. In our experiments, using sequential planning, the task produces its first full solution after 15.6 s on average.
V. SUMMARY
We presented a modular and flexible planning system to fill the gap between high-level, symbolic task planning and low-level motion planning for robotic manipulation. Given a concrete task plan composed of individually characterized sub-stages, our system can yield combined trajectories that achieve the whole task. Failures can be readily analyzed by visualizing and isolating problematic stages. A number of generic planning stages are already in place and were employed to demonstrate the potential of the framework for use on multiple robotic platforms. The Task Constructor is meant to enhance the functionality of the MoveIt! framework and replace its previous, severely limited pick-and-place pipeline. The open-source software library is under continuous development, and various extensions were outlined directly within the corresponding sections.
REFERENCES
[1] M. Stilman, “Global manipulation planning in robot joint space with
task constraints,” IEEE Transactions on Robotics, vol. 26, no. 3, pp.
576–584, 2010.
[2] D. Berenson, S. S. Srinivasa, D. Ferguson, and J. J. Kuffner, “Manip-
ulation planning on constraint manifolds,” in 2009 IEEE International
Conference on Robotics and Automation, May 2009, pp. 625–632.
[3] J. Bidot, L. Karlsson, F. Lagriffoul, and A. Saffiotti, "Geometric backtracking for combined task and motion planning in robotic systems," Artificial Intelligence, vol. 247, pp. 229–265, 2017, Special Issue on AI and Robotics. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S000437021500051X
[4] S. Srivastava, E. Fang, L. Riano, R. Chitnis, S. Russell, and P. Abbeel,
“Combined task and motion planning through an extensible planner-
independent interface layer,” in Robotics and Automation (ICRA), 2014
IEEE International Conference on. IEEE, 2014, pp. 639–646.
[5] J. Ferrer-Mestres, G. Francès, and H. Geffner, “Combined task
and motion planning as classical ai planning,” arXiv preprint
arXiv:1706.06927, 2017.
[6] F. Gravot, S. Cambon, and R. Alami, “aSyMov: a planner that
deals with intricate symbolic and geometric problems,” in Robotics
Research. The Eleventh International Symposium. Springer, 2005,
pp. 100–110.
[7] T. Lozano-Pérez and L. P. Kaelbling, “A constraint-based method
for solving sequential manipulation planning problems,” in 2014
IEEE/RSJ International Conference on Intelligent Robots and Systems,
Sept 2014, pp. 3684–3691.
[8] K. Hauser and J.-C. Latombe, "Multi-modal motion planning in non-expansive spaces," The International Journal of Robotics Research, vol. 29, no. 7, pp. 897–915, 2010.
[9] S. Edwards, R. Madaan, and J. Meyer, “Descartes planning library
for semi-constrained cartesian trajectories,” ROSCon, 2015. [Online].
Available: http://wiki.ros.org/descartes
[10] S. Hart, P. Dinh, and K. Hambuchen, “The affordance template ROS
package for robot task programming,” in Robotics and Automation
(ICRA), 2015 IEEE International Conference on. IEEE, 2015, pp.
6227–6234.
[11] R. Diankov, "Automated construction of robotic manipulation programs," Ph.D. dissertation, Carnegie Mellon University, Robotics Institute, August 2010. [Online]. Available: http://www.programmingvision.com/rosen_diankov_thesis.pdf
[12] K. Hauser, “Robust contact generation for robot simulation with
unstructured meshes,” in Robotics Research. Springer, 2016, pp. 357–
373.
[13] D. Coleman, I. A. Şucan, S. Chitta, and N. Correll, "Reducing the barrier to entry of complex robotic software: a MoveIt! case study," Journal of Software Engineering for Robotics, vol. 5, no. 1, pp. 3–16, May 2014. [Online]. Available: http://moveit.ros.org
[14] I. A. Şucan, M. Moll, and L. E. Kavraki, "The Open Motion Planning Library," IEEE Robotics & Automation Magazine, vol. 19, no. 4, pp. 72–82, December 2012, http://ompl.kavrakilab.org.
[15] N. Ratliff, M. Zucker, J. A. Bagnell, and S. Srinivasa, “CHOMP:
Gradient optimization techniques for efficient motion planning,” in
Robotics and Automation, 2009. ICRA’09. IEEE International Con-
ference on. IEEE, 2009, pp. 489–494.
[16] M. Kalakrishnan, S. Chitta, E. Theodorou, P. Pastor, and S. Schaal,
“STOMP: Stochastic trajectory optimization for motion planning,” in
Robotics and Automation (ICRA), 2011 IEEE International Conference
on. IEEE, 2011, pp. 4569–4574.
[17] T. Kröger, On-Line Trajectory Generation in Robotic Systems: Basic
Concepts for Instantaneous Reactions to Unforeseen (Sensor) Events.
Springer, 2010, vol. 58.
[18] I. A. Şucan and S. Chitta, "Motion planning with constraints using configuration space approximations," in Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE, 2012, pp. 1904–1910.
... The "Manipulator arm" component is presented as a simulated manipulator arm using the MoveIt software and the ROS (Robot Operating System) real-time operating system. It has five degrees of freedom and a gripper at the end [132][133][134][135]. ...
... MoveIt ( Figure 13) is an open-source manipulation software developed by Willow Garageby, Ioan A. Sucan, and Sachin Chitta [133][134][135]. The software offers solutions for mobile manipulation problems, such as kinematics, planning and motion control, 3D perception, and navigation. ...
... The MoveIt library is part of the ROS package and is widely used in robotic systems. MoveIt is great for developers because it can be easily customized for any job [133][134][135]. For designing a control system using MoveIt software, it is necessary to acquire the URDF (Universal Robot Description Format) file of the device (in our case, it is the URDF of the manipulator's arm). ...
Article
Full-text available
Real-time systems are widely used in industry, including technological process control systems, industrial automation systems, SCADA systems, testing, and measuring equipment, and robotics. The efficiency of executing an intelligent robot’s mission in many cases depends on the properties of the robot’s sensor and control systems in providing the trajectory planning, recognition of the manipulated objects, adaptation of the desired clamping force of the gripper, obstacle avoidance, and so on. This paper provides an analysis of the approaches and methods for real-time sensor and control information processing with the application of machine learning, as well as successful cases of machine learning application in the synthesis of a robot’s sensor and control systems. Among the robotic systems under investigation are (a) adaptive robots with slip displacement sensors and fuzzy logic implementation for sensor data processing, (b) magnetically controlled mobile robots for moving on inclined and ceiling surfaces with neuro-fuzzy observers and neuro controllers, and (c) robots that are functioning in unknown environments with the prediction of the control system state using statistical learning theory. All obtained results concern the main elements of the two-component robotic system with the mobile robot and adaptive manipulation robot on a fixed base for executing complex missions in non-stationary or uncertain conditions. The design and software implementation stage involves the creation of a structural diagram and description of the selected technologies, training a neural network for recognition and classification of geometric objects, and software implementation of control system components. The Swift programming language is used for the control system design and the CreateML framework is used for creating a neural network. Among the main results are: (a) expanding the capabilities of the intelligent control system by increasing the number of classes for recognition from three (cube, cylinder, and sphere) to five (cube, cylinder, sphere, pyramid, and cone); (b) increasing the validation accuracy (to 100%) for recognition of five different classes using CreateML (YOLOv2 architecture); (c) increasing the training accuracy (to 98.02%) and testing accuracy (to 98.0%) for recognition of five different classes using Torch library (ResNet34 architecture) in less time and number of epochs compared with Create ML (YOLOv2 architecture); (d) increasing the training accuracy (to 99.75%) and testing accuracy (to 99.2%) for recognition of five different classes using Torch library (ResNet34 architecture) and fine-tuning technology; and (e) analyzing the effect of dataset size impact on recognition accuracy with ResNet34 architecture and fine-tuning technology. The results can help to choose efficient (a) design approaches for control robotic devices, (b) machine-learning methods for performing pattern recognition and classification, and (c) computer technologies for designing control systems and simulating robotic devices.
... MoveIt! [13] is a common software stack for robotic trajectory planning, which integrates OMPL [14] for planning within the Robot Operating System (ROS) [15]. On top of MoveIt! the work in [16] creates solutions to many robotic tasks with interdependent sub-tasks. Due to the deep integration in MoveIt! the task description of [16] is not portable. ...
... On top of MoveIt! the work in [16] creates solutions to many robotic tasks with interdependent sub-tasks. Due to the deep integration in MoveIt! the task description of [16] is not portable. ...
Preprint
Selecting an optimal robot and configuring it for a given task is currently mostly done by human expertise or trial and error. To evaluate automatic selection and adaptation of robots to specific tasks, we introduce a benchmark suite encompassing a common format for robots, environments, and task descriptions. Our benchmark suite is especially useful for modular robots, where the creation of the robots themselves creates a host of additional parameters to optimize. The benchmark defines this optimization and facilitates the comparison of solution algorithms. All benchmarks are accessible through a website to conveniently share, reference, and compare solutions.
... We use an SD Mask R-CNN [18] trained with simulated depth images to generate segmentation masks and use MoveIt! [36] to plan collision free trajectories for the robot arm. ...
Preprint
Stacking increases storage efficiency in shelves, but the lack of visibility and accessibility makes the mechanical search problem of revealing and extracting target objects difficult for robots. In this paper, we extend the lateral-access mechanical search problem to shelves with stacked items and introduce two novel policies -- Distribution Area Reduction for Stacked Scenes (DARSS) and Monte Carlo Tree Search for Stacked Scenes (MCTSSS) -- that use destacking and restacking actions. MCTSSS improves on prior lookahead policies by considering future states after each potential action. Experiments in 1200 simulated and 18 physical trials with a Fetch robot equipped with a blade and suction cup suggest that destacking and restacking actions can reveal the target object with 82--100% success in simulation and 66--100% in physical experiments, and are critical for searching densely packed shelves. In the simulation experiments, both policies outperform a baseline and achieve similar success rates but take more steps compared with an oracle policy that has full state information. In simulation and physical experiments, DARSS outperforms MCTSSS in median number of steps to reveal the target, but MCTSSS has a higher success rate in physical experiments, suggesting robustness to perception noise. See https://sites.google.com/berkeley.edu/stax-ray for supplementary material.
... Therefore, we provide real-time alarms to restrict human movements during painting if necessary. According to the D-H parameters of the Denso VS060 robot, as listed in Table 1, we calculate the Inverse Kinematics (IK) with the aid of ROS MoveIt motion planning library [35]. When an IK solution fails, we alarm the painter to interrupt the current stroke and make a compensating motion. ...
Article
Full-text available
In this paper, we present an active learning enabled robotic painting system, called ShadowPainter, which acquires artist-specific painting information from the artwork creating process and achieves robotic reproduction of the artwork. The artist’s painting process information, including interactive trajectories of paintbrushes with the environment and states of the canvas, is collected by a novel Visual Measurement System (VMS). A Robotic Painting System (RPS), accompanied by the VSM, is developed to reproduce human paintings by actively imitating the measured painting process. The critical factors that influence the final painting performance of the robot are revealed. At the end of this paper, the reproduced artworks and the painting ability of the RPS are evaluated by local and global criteria and metrics. The experimental results show that our ShadowPainter can reproduce human-level brush strokes, painting techniques, and overall paintings. Compared with the existing work, our system produces natural strokes and painting details that are closer to human artworks.
... In both software calls, the third-party libraries (MoveIt ROS package or Kortex ROS driver package) both perform an inverse kinematic calculation utilizing their own internal kinematic solvers. A MoveIt motionplanning framework [23,24] is used to generate obstacle-free paths. The path is first generated by solving for an inverse kinematic solution using the Trac IK kinematic plugin. ...
Article
Full-text available
Assistive robotic manipulators (ARMs) provide a potential solution to mitigating the difficulties and lost independence associated with manipulation deficits in individuals with upper-limb impairments. However, achieving efficient control of an ARM can be a challenge due to the multiple degrees of freedom (DoFs) of an ARM that need to be controlled. This study describes the development of a vision-guided shared-control (VGS) system and how it is applied to a multi-step drinking task. The VGS control allows the user to control the gross motion of the ARM via teleoperation and commands the ARM to autonomously perform fine manipulation. A bench-top test of the autonomous actions showed that success rates for different subtasks ranged from 80% to 100%. An evaluation with three test pilots showed that the overall task performance, in terms of success rate, task completion time, and joystick mode-switch frequency, was better with VGS than with teleoperation. Similar trends were observed with a case participant with a spinal cord injury. While his performance was better and he perceived a smaller workload with VGS, his perceived usability for VGS and teleoperation was similar. More work is needed to further improve and test VGS on participants with disabilities.
... In this extended abstract, we focus specifically on the planning aspects of this problem. Sample-based global planners revealed to be very efficient over the last decades, and they can easily be combined with perception algorithms avoiding any simplification of the environment [3]. In the case of dynamic environment, one possibility would be to continuously re-plan the global trajectory every time a difference in the environment is perceived. ...
Preprint
Full-text available
The deployment of robots within realistic environments requires the capability to plan and refine the loco-manipulation trajectories on the fly to avoid unexpected interactions with a dynamic environment. This extended abstract provides a pipeline to offline plan a configuration space global trajectory based on a randomized strategy, and to online locally refine it depending on any change of the dynamic environment and the robot state. The offline planner directly plans in the contact space, and additionally seeks for whole-body feasible configurations compliant with the sampled contact states. The planned trajectory, made by a discrete set of contacts and configurations, can be seen as a graph and it can be online refined during the execution of the global trajectory. The online refinement is carried out by a graph optimization planner exploiting visual information. It locally acts on the global initial plan to account for possible changes in the environment. While the offline planner is a concluded work, tested on the humanoid COMAN+, the online local planner is still a work-in-progress which has been tested on a reduced model of the CENTAURO robot to avoid dynamic and static obstacles interfering with a wheeled motion task. Both the COMAN+ and the CENTAURO robots have been designed at the Italian Institute of Technology (IIT).
... For toolpath planning, an online software was developed by [16] for control, path planning, kinematics, collision checking that can also be involved in this robotic assisted additive manufacturing which leads to higher accuracy. Also software named moveit was developed by Gorner [17]. ...
Article
Full-text available
Additive Manufacturing is a green and novel technology which adds material instead of wasting material in form of chips. Additive manufacturing is a manufacturing technique which develops a product layer by layer. To overcome limitations of axes to be used in additive manufacturing a technique had been developed i.e. robot assisted additive manufacturing. Due to this type of technological advancements a lot of improvement in accuracy, increase in flexibility and reduction in production time had been achieved. Even the toolpath had been developed which had been successfully run for robotic assisted additive manufacturing. The present review paper deals with toolpath strategies, robot trajectory, robotic assisted hybrid manufacturing and various applications developed with robotic aided AM. This review paper after describing emerging trends in robotic aided manufacturing describes future scope of the field discussed in paper.
... For Gaka-chu's motion planning, we use the MoveIt Framework [26], with a customized MoveIt module for inverse kinematics. Our custom module extends the standard module by taking into account that the manipulator in different positions has different kinetic energy (i.e., the moment of inertia changes). ...
Preprint
Full-text available
The physical autonomy of robots is well understood both theoretically and practically. By contrast, there is almost no research exploring their potential economic autonomy. In this paper, we present the first economically autonomous robot -- a robot able to produce marketable goods while having full control over the use of its generated income. Gaka-chu ("painter" in Japanese) is a 6-axis robot arm that creates artistic paintings of Japanese kanjis from an autoselected keyword. By using a blockchain-based smart contract, Gaka-chu can autonomously list a painting it made for sale in an online auction. In this transaction, the robot interacts with the human bidders as a peer not as a tool. Using the blockchain-based smart contract, Gaka-chu can then use its income from selling paintings to replenish its resources by autonomously ordering materials from an online art shop, and maintain its activity by remunerating a human assistant for manual tasks such as placing canvases. We built the Gaka-chu prototype with an Ethereum-based smart contract and ran a 6-month long experiment, during which the robot painted and sold four paintings one-by-one, simultaneously using its income to purchase art supplies, human assistance, and paid initial investors. In this work, we present the results of the experiments conducted and discuss the implications of economically autonomous robots.
... We then transform G i into a gripper coordinate system and use points inside the gripper as the input of PointNetGPD, a data-driven grasp evaluation framework. The output grasp will then be sent to the MoveIt Task Constructor [8] to plan a feasible trajectory for a pick and place task. ...
Article
Full-text available
Currently, robotic grasping methods based on sparse partial point clouds have attained excellent grasping performance on various objects. However, they often generate wrong grasping candidates due to the lack of geometric information on the object. In this work, we propose a novel and robust sparse shape completion model (TransSC). This model has a transformer-based encoder to explore more point-wise features and a manifold-based decoder to exploit more object details using a segmented partial point cloud as input. Quantitative experiments verify the effectiveness of the proposed shape completion network and demonstrate that our network outperforms existing methods. Besides, TransSC is integrated into a grasp evaluation network to generate a set of grasp candidates. The simulation experiment shows that TransSC improves the grasping generation result compared to the existing shape completion baselines. Furthermore, our robotic experiment shows that with TransSC, the robot is more successful in grasping objects of unknown numbers randomly placed on a support surface.
Article
Full-text available
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
Article
Full-text available
This paper introduces the Affordance Template ROS package for quickly programming, adjusting, and executing robot applications in the ROS RViz environment. This package extends the capabilities of RViz interactive markers [1] by allowing an operator to specify multiple end-effector waypoint locations and grasp poses in object-centric coordinate frames and to adjust these waypoints in order to meet the run-time demands of the task (specifically, object scale and location). The Affordance Template package stores task specifications in a robot-agnostic JSON description format such that it is trivial to apply a template to a new robot. As such, the Affordance Template package provides a robot-generic ROS tool appropriate for building semi-autonomous, manipulation-based applications. Affordance Templates were developed by the NASA-JSC DARPA Robotics Challenge (DRC) team and have since successfully been deployed on multiple platforms including the NASA Valkyrie and Robonaut 2 humanoids, the University of Texas Dreamer robot and the Willow Garage PR2. In this paper, the specification and implementation of the affordance template package is introduced and demonstrated through examples for wheel (valve) turning, pick-and-place, and drill grasping, evincing its utility and flexibility for a wide variety of robot applications.
Article
Full-text available
Developing robot agnostic software frameworks involves synthesizing the disparate fields of robotic theory and software engineering while simultaneously accounting for a large variability in hardware designs and control paradigms. As the capabilities of robotic software frameworks increase, the setup difficulty and learning curve for new users also increase. If the entry barriers for configuring and using the software on robots is too high, even the most powerful of frameworks are useless. A growing need exists in robotic software engineering to aid users in getting started with, and customizing, the software framework as necessary for particular robotic applications. In this paper a case study is presented for the best practices found for lowering the barrier of entry in the MoveIt! framework, an open-source tool for mobile manipulation in ROS, that allows users to 1) quickly get basic motion planning functionality with minimal initial setup, 2) automate its configuration and optimization, and 3) easily customize its components. A graphical interface that assists the user in configuring MoveIt! is the cornerstone of our approach, coupled with the use of an existing standardized robot model for input, automatically generated robot-specific configuration files, and a plugin-based architecture for extensibility. These best practices are summarized into a set of barrier to entry design principles applicable to other robotic software. The approaches for lowering the entry barrier are evaluated by usage statistics, a user survey, and compared against our design objectives for their effectiveness to users.
Article
The Open Motion Planning Library (OMPL) is a new library for sampling-based motion planning, which contains implementations of many state-of-the-art planning algorithms. The library is designed to let users solve a variety of complex motion planning problems with minimal input. OMPL facilitates the addition of new motion planning algorithms and can be conveniently interfaced with other software components. A simple graphical user interface (GUI) built on top of the library, a number of tutorials, demos, and programming assignments are designed to teach students about sampling-based motion planning. The library is also available for use through the Robot Operating System (ROS).
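A minimal usage sketch with OMPL's Python bindings, closely following the library's introductory demos: plan in a 2D real-vector state space with a trivial validity checker (a real application would check collisions there):

```python
from ompl import base as ob
from ompl import geometric as og

def is_state_valid(state):
    return True  # stand-in; a real checker would test for collisions

# Bounded 2D state space.
space = ob.RealVectorStateSpace(2)
bounds = ob.RealVectorBounds(2)
bounds.setLow(-1.0)
bounds.setHigh(1.0)
space.setBounds(bounds)

ss = og.SimpleSetup(space)
ss.setStateValidityChecker(ob.StateValidityCheckerFn(is_state_valid))

start, goal = ob.State(space), ob.State(space)
start[0], start[1] = -0.9, -0.9
goal[0], goal[1] = 0.9, 0.9
ss.setStartAndGoalStates(start, goal)

if ss.solve(1.0):                 # give the planner one second
    print(ss.getSolutionPath())
```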
Book
We propose an original approach to integrating symbolic task planning with geometric motion and manipulation planning. We focus in particular on one key aspect: the relation between symbolic positions and their geometric counterparts. To this end, we have developed an instantiation process that incrementally propagates task-dependent as well as 3D-environment-dependent constraints and efficiently guides the search until valid geometric configurations are found that satisfy the plan at both levels. The overall process is discussed and illustrated through an implemented example.
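A toy sketch of the instantiation idea (hypothetical, not the authors' system): symbolic positions are bound to sampled geometric candidates, and task- and environment-dependent constraints prune combinations as they propagate, rather than enumerating the full product space blindly:

```python
import itertools
import random

# Each symbolic position gets candidate 2D placements; constraints prune
# candidates incrementally as the instantiation proceeds.
random.seed(0)
candidates = {sym: [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(5)]
              for sym in ("P1", "P2")}

def clear_of_obstacle(p):          # environment-dependent constraint
    return (p[0] - 0.5) ** 2 + (p[1] - 0.5) ** 2 > 0.1

def reachable_pair(p, q):          # task-dependent constraint
    return abs(p[0] - q[0]) + abs(p[1] - q[1]) < 0.8

valid_p1 = [p for p in candidates["P1"] if clear_of_obstacle(p)]  # prune early
for p1, p2 in itertools.product(valid_p1, candidates["P2"]):
    if clear_of_obstacle(p2) and reachable_pair(p1, p2):
        print("instantiation:", p1, p2)
        break
```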
Chapter
This paper presents a numerically stable method for rigid-body simulation of unstructured meshes undergoing forceful contact, such as in robot locomotion and manipulation. The key contribution is a new contact generation method that treats the geometry as having a thin virtual boundary layer around the underlying meshes. Unlike existing methods, it produces contact estimates that are stable with respect to small displacements, which helps avoid jitter or divergence in the simulator caused by oscillatory discontinuities. Its advantages are particularly apparent on non-watertight meshes, and it can easily simulate interaction with partially sensed and noisy objects, such as those that emerge from low-cost 3D scanners. The simulator is tested on a variety of robot locomotion and manipulation examples, and results closely match theoretical predictions and experimental data.
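The boundary-layer idea can be sketched for the simplest possible geometry, two spheres; the real method operates on triangle meshes, so the snippet below is purely illustrative. Contact is generated whenever the surfaces come within a virtual layer of thickness `layer`, so small displacements change the estimate smoothly instead of toggling contact on and off:

```python
import numpy as np

def boundary_layer_contact(c1, r1, c2, r2, layer=0.01):
    """Return (point, normal, depth) if the spheres' boundary layers touch."""
    d = np.asarray(c2, float) - np.asarray(c1, float)
    dist = np.linalg.norm(d) - (r1 + r2)   # signed surface separation
    if dist > layer:
        return None                        # outside the virtual layer
    normal = d / np.linalg.norm(d)
    depth = layer - dist                   # penetration into the layer
    point = np.asarray(c1, float) + normal * (r1 + dist / 2.0)
    return point, normal, depth

# Surfaces 5 mm apart: still in contact through the 10 mm virtual layer.
print(boundary_layer_contact([0, 0, 0], 0.5, [1.005, 0, 0], 0.5))
```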
Article
In this paper, we describe a strategy for integrated task and motion planning based on performing a symbolic search for a sequence of high-level operations, such as pick, move, and place, while postponing geometric decisions. Partial plans (skeletons) in this search thus pose a geometric constraint-satisfaction problem (CSP), involving sequences of placements and paths for the robot, and grasps and locations of objects. We propose a formulation for these problems in a discretized configuration space for the robot. The resulting problems can be solved using existing methods for discrete CSPs.
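A toy version of the resulting discrete CSP (invented domains and constraints): a fixed skeleton leaves grasp and placement variables over finite candidate sets, with a feasibility predicate standing in for the motion-planner check:

```python
# Variables for a fixed skeleton [pick(A), place(A)]: the grasp used and the
# placement pose, each over a finite, discretized candidate set.
grasps = ["top", "side"]
placements = ["shelf", "table"]

def feasible(grasp, placement):
    # Stand-in for a geometric check; here "top" grasps cannot reach into
    # the shelf, so that combination is pruned.
    return not (grasp == "top" and placement == "shelf")

def solve(assignment=()):
    domains = [grasps, placements]
    if len(assignment) == len(domains):
        return assignment
    for value in domains[len(assignment)]:
        candidate = assignment + (value,)
        if len(candidate) < 2 or feasible(*candidate):
            result = solve(candidate)
            if result:
                return result
    return None

print(solve())   # -> ('top', 'table')
```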
Conference Paper
The need for combined task and motion planning in robotics is well understood. Solutions to this problem have typically relied on special-purpose, integrated implementations of task planning and motion planning algorithms. We propose a new approach that uses off-the-shelf task planners and motion planners and makes no assumptions about their implementation. Doing so enables our approach to directly build on, and benefit from, the vast literature and latest advances in task planning and motion planning. It uses a novel representational abstraction and requires only that failures in computing a motion plan for a high-level action be identifiable and expressible in the form of logical predicates at the task level. We evaluate the approach and illustrate its robustness through a number of experiments using a state-of-the-art robotics simulator and a PR2 robot. These experiments show the system accomplishing a diverse set of challenging tasks, such as taking advantage of a tray when laying out a table for dinner and picking objects from cluttered environments where other objects need to be rearranged before the target object can be reached.
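The interface idea can be sketched as follows (hypothetical code, not the authors' implementation): a failed motion plan is reported back to the task planner as a logical predicate, here an invented `obstructs` fact, so that symbolic replanning can account for the geometric failure:

```python
# Hypothetical interface layer between an off-the-shelf task planner and an
# off-the-shelf motion planner.
def plan_motion(action, world):
    """Return (trajectory, failure_predicate); exactly one is None."""
    blocker = world.get(("blocked", action))
    if blocker:
        return None, f"(obstructs {blocker} {action})"
    return "trajectory", None

def execute_skeleton(skeleton, world, task_state):
    for action in skeleton:
        traj, failure = plan_motion(action, world)
        if failure:
            task_state.add(failure)   # expose the geometric failure symbolically
            return False
    return True

world = {("blocked", "pick(cup)"): "plate"}
facts = set()
if not execute_skeleton(["pick(cup)"], world, facts):
    print("replan with:", facts)      # {'(obstructs plate pick(cup))'}
```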
Article
Planners for real robotic systems should reason not only about abstract actions, but also about aspects related to physical execution, such as kinematics and geometry. We present an approach to hybrid task and motion planning, in which state-based forward-chaining task planning is tightly coupled with motion planning and other forms of geometric reasoning. Our approach is centered around the problem of geometric backtracking that arises in hybrid task and motion planning: in order to satisfy the geometric preconditions of the current action, a planner may need to reconsider geometric choices, such as grasps and poses, that were made for previous actions. Geometric backtracking is a necessary condition for completeness, but it may lead to a dramatic computational explosion due to the large size of the space of geometric states. We explore two avenues to deal with this issue: the use of heuristics based on different geometric conditions to guide the search, and the use of geometric constraints to prune the search space. We empirically evaluate these different approaches and demonstrate that they improve the performance of hybrid task and motion planning. We demonstrate our hybrid planning approach in two domains: a real humanoid robotic platform, the DLR Justin robot, performing object manipulation tasks; and a simulated autonomous forklift operating in a warehouse.
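A toy illustration of geometric backtracking (invented constraints): when no choice for a later action is consistent, the search revises a geometric choice made for an earlier action, and constraint checks prune branches before descending:

```python
# Three-action skeleton with finite geometric choices per action; the
# consistency checks stand in for the reachability and collision reasoning
# performed by the real system.
options = {"pick": ["top", "side"], "carry": ["high", "low"], "place": ["shelf"]}
order = ["pick", "carry", "place"]

def consistent(partial):
    if partial.get("pick") == "side" and partial.get("carry") == "low":
        return False                       # e.g., arm collides with the table
    if partial.get("carry") == "high" and partial.get("place") == "shelf":
        return False                       # e.g., cannot lower into the shelf
    return True

def backtrack(i=0, partial=None):
    partial = dict(partial or {})
    if i == len(order):
        return partial
    for choice in options[order[i]]:
        partial[order[i]] = choice
        if consistent(partial):            # prune before descending
            result = backtrack(i + 1, partial)
            if result:
                return result
    return None                            # forces revision of an earlier choice

print(backtrack())   # -> {'pick': 'top', 'carry': 'low', 'place': 'shelf'}
```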
Conference Paper
Robots executing practical tasks in real environments are often subject to multiple constraints. These include orientation constraints (e.g., keeping a glass of water upright), torque constraints (e.g., not exceeding the torque limits of an arm lifting heavy objects), and visibility constraints (e.g., keeping an object in view while moving a robot arm). Rejection sampling, Jacobian projection techniques, and optimization-based approaches are just some of the methods that have been used to address such constraints while computing motion plans for robots performing manipulation tasks. In this work, we present an approach to handling certain types of constraints in a manner that significantly increases the efficiency of existing methods. Our approach focuses on the sampling step of a motion planner: we implement this step by drawing samples from a set computed in advance, instead of sampling the constraints directly. We show how our approach can be applied to different constraints: orientation constraints on the end-effector of an arm, visibility constraints, and dual-arm constraints. We present simulated results to validate our method, comparing it to approaches that use direct sampling of constraints.
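A sketch of the precomputation idea under stated assumptions: `fk()` is a hypothetical stand-in for the robot's forward kinematics, configurations satisfying an upright-orientation constraint are cached offline by rejection sampling, and the planner's sampling step then draws from the cache instead of rejecting samples at planning time:

```python
import random

random.seed(0)

def fk(q):
    """Hypothetical forward kinematics: end-effector (roll, pitch) from joints."""
    return (q[0] * 0.1, q[1] * 0.1)

def upright(q, tol=0.05):
    roll, pitch = fk(q)
    return abs(roll) < tol and abs(pitch) < tol

# Offline: build the constraint-satisfying sample database once.
database = []
while len(database) < 100:
    q = [random.uniform(-3.14, 3.14) for _ in range(6)]
    if upright(q):
        database.append(q)

# Online: the planner's sampling step draws directly from the database,
# so every sample already satisfies the orientation constraint.
sample = random.choice(database)
print(upright(sample), sample[:2])
```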