VISUAL SERVO BASED TRAJECTORY PLANNING FOR FAST AND ACCURATE
SHEET PICK AND PLACE OPERATIONS
Omey M. Manyar
Center for Advanced Manufacturing
University of Southern California
Los Angeles, CA 90007
Email: manyar@usc.edu
Alec Kanyuck
Center for Advanced Manufacturing
University of Southern California
Los Angeles, CA 90007
Email: kanyuck@usc.edu
Bharat Deshkulkarni
Center for Advanced Manufacturing
University of Southern California
Los Angeles, CA 90007
Email: deshkulk@usc.edu
Satyandra K. Gupta
Center for Advanced Manufacturing
University of Southern California
Los Angeles, CA 90007
Email: guptask@usc.edu
ABSTRACT
In industry, several operations require sheet-like materials to be
transported from a loading station to the desired location. Such
applications are prevalent in the aerospace and textile industry
where composite prepreg sheets or fabrics are placed over a tool
or fed to a machine. Using robots for sheet transport operations
offers a flexible solution for such highly complex tasks. To cre-
ate high-quality parts, sheets need to be accurately placed at the
correct location. This paper presents automated trajectory plan-
ning and control algorithms for a robot to pick up sheets from
the input station using suction grippers and, transport and place
them over the tool surface. Machine vision is used at the pick
location for estimating the sheet pose. Unfortunately, pick-up
accuracy is not sufficiently high due to sheet movement during
suction-based grasping and localization errors. We employ ideas
inspired by visual servo techniques to accurately place the sheet
on the tool. Our method uses an Eye-to-Hand camera config-
uration to align the desired image features with the reference
markings on the tool. We introduce a sampling-based Jacobian
estimation scheme that can reliably achieve the desired accuracy
while minimizing the operation time. Experiments are performed
to validate our methodology and compute the placement accuracy
on an industrial tool.
Address all correspondence to this author.
1 INTRODUCTION
Numerous manufacturing processes use compliant sheets. Appli-
cations include the aerospace, automotive, and textile industries. The flexible
nature of the material has made it difficult to use robotic automa-
tion in such applications. Due to recent advances in the field of
robotics and modeling of deformable sheet-like materials, there is
an increased interest in developing flexible automation solutions
for manufacturing processes that utilize sheets. Sheet pick and
place operations are an integral part of the fabrication process.
Sheet pick and place tasks in the industry require high speed
execution while maintaining the desired accuracy for high quality
processing. The sheet being transported in this case can be placed
on top of a tool or fed to a machine for specific operations. For
example, in composite prepreg layup, a carbon fiber composite
sheet is placed on top of a tool and then draped by applying
local forces onto the tool to manufacture the desired part. The
robot-based automation of sheet transport tasks should take into
account the deformable and compliant nature of the sheets and the
uncertainty in sheet position and orientation at the pick-up location.
Traditional pick and place systems are expensive and precisely designed to serve a specific application. With the growing
need for automation in manufacturing, it is important to devise
general purpose solutions for fast and accurate pick and place
tasks.
Visual servo control has been used by the robotics community
to ensure highly accurate and precise object placement in the
presence of uncertainty. Visual servo utilizes local features in the
image frame to achieve this high accuracy. The majority of visual
servo concepts focus on applications in which the manipulated
object is rigid (refer to Section 2 for further details).
In this work, we have developed a system that employs the
concepts of visual servo to accomplish a highly accurate sheet placement task. We have developed an end-to-end system describ-
ing how the entire sheet transport operation can be accomplished
with high speed and accuracy. Additionally, we introduce an im-
age based Jacobian estimation scheme that can aid in trajectory
planning for this task. Classical visual servo techniques operate the robot under velocity control mode, which slows the robot down towards the end of the operation. We overcome this
by operating the robot in position control mode and executing
trajectories at the highest possible velocities.
2 RELATED WORK
Visual servo based trajectory planning has been an active field of
research for decades. One of the initial pioneering works was pre-
sented by Hutchinson et al. in [1]. Visual servo control is mainly
executed in two modes: Image Based Visual Servo (IBVS) and
Position Based Visual Servo (PBVS). Extensive work has been
performed in both these concepts over the years [2–7]. Although
IBVS and PBVS have a similar control architecture, they have
some fundamental differences. [8] juxtaposes the two and pro-
vides a comparative analysis by implementing the two approaches
on a parallel link robot. Another focus of the visual servo community has been the estimation of the image Jacobian. [9–13] have
showcased several analytical methods to estimate the image Ja-
cobian matrix. More recently, the focus of this field of research
has been on leveraging deep learning and statistical concepts in
the Image Jacobian estimation [14, 15].
Additionally, there is a lot of work on deep learning based
image feature extraction for visual servoing. Researchers have
explored numerous deep learning methods to identify features
overcoming several image feature extraction issues [16–18]. The
majority of previous work has focused mainly on operating the robot under velocity control mode, rendering the process slow to converge. This is the main reason for the manufacturing community's apprehension about implementing the visual servo concept. Our proposed method, on the contrary, operates the robot in position control mode, overcoming the issue of speed while maintaining accuracy.
Automation of the composite layup process has gained an
impetus in the aerospace manufacturing community. The authors
of this work have in the past explored several avenues in automa-
tion of composite prepreg layup [19–24]. However, none of the
previous work focuses on the sheet transport problem.
Hence, in this paper we present a concept of implementing
visual servo in position control mode to achieve high accuracy
while executing the process at rapid velocities. The rest of the
paper is structured as follows. We first give an overview of the
proposed system. We then give further details about the sheet pick and place system and the visual servo architecture. Finally, we discuss the results of our experiments performed
on an industrial flat base tool used for composite sheet layup.
3 SYSTEM OVERVIEW
FIGURE 1: (a) and (b) show how the sheet assumes a different shape based on variability in the picking configuration. As can be seen, the red line feature of the sheet has assumed a different orientation when the pickup position was changed.
In this section, we formulate the sheet transport problem and de-
rive motivation for the optimal system required to accomplish this
complex task. Our work is focused on developing a completely
autonomous system for the sheet transport task independent of
the initial configuration of the sheet at the loading station. To
achieve this, we need a system that can compute the picking loca-
tion of the sheet to enable robotic sheet picking operation. The
proposed system is designed for applications conducive to the use
of suction-based grippers for sheet manipulation. Once the robot
picks up the sheet using suction cups, the suction force acts as
the external constraint on the manipulated sheet. As mentioned in
Section 1, the compliant nature of the sheet leads it to assume a different shape under varying external constraints; refer to Fig. 1.

FIGURE 2: The process flow from pick to place for achieving fast and precise sheet placement. The input to the system is a raw RGB image of the sheet at pick position and the desired feature location for sheet placement.
Due to the variability in the initial loading configuration and uncertainty in the pickup system, the sheet gets picked up in a different orientation, as showcased in Fig. 1a and Fig. 1b. Thus, uncertainty
in sheet pick-up can lead to the sheet assuming a different shape
every time it is picked from the loading station. This makes it
important to have real-time feedback during the placement task
to achieve the desired accuracy.
In order to realize this closed-loop system, we derive motivation from the domain of visual servo. We employ Jacobian-based trajectory planning using visual features on the grasped sheet. This demands a system that can reliably perform image feature
detection at the placing station. These features are consequently
used as visual cues to compute the robot trajectory to accurately
align the sheet.
Traditionally, while performing visual servo, the robot is operated under velocity control mode, rendering the placement action extremely slow as the goal is approached; this happens because the commanded velocity is proportional to the error, which approaches zero. Even with fine-tuned error gains, it
is difficult to achieve high speed convergence. In our work, we
operate the robot in position control mode so that the robot can
execute its trajectory at maximum possible velocity enabling fast
placement actions without compromising the accuracy.
A camera system similar to the one at the sheet pickup station is implemented at the placement station to compute the initial placement location.
Visual servoing is performed once the robot reaches this location.
Additionally, we use standard motion planning modules to com-
pute a feasible collision-free path between the robot's home, pick, and place locations.
When the robot is executing trajectories at a high velocity,
there is a probability of the tool vibrating around a nominal po-
sition. This can cause an error in the image feature detection
affecting the accuracy of Jacobian estimation, as discussed in Section 6. In order to compensate for this vibration, we estimate the
nominal position about which the tool is vibrating to achieve a
better estimate of the image feature vector.
Fig. 2 summarizes the entire process flow of our proposed
system.
FIGURE 3: The Proposed Sheet Transport System
We will now outline the system developed to solve the pro-
posed sheet transport problem. As discussed, the following tasks are required for fast and accurate sheet transport:
1. Sheet Pick Location Determination
2. Suction based Sheet Grasping
3. Motion planning for point to point motion
4. Place Location Determination
5. Image feature recognition for visual servo
6. Jacobian Estimation
7. Visual servo for accurate placement
We propose a system as shown in Fig. 3 to perform the
enumerated tasks. In this work, we have focused on the task
of sheet transport for the composite layup operation. We use a 7-axis KUKA LBR iiwa 7 robot with a custom end-effector. The end-effector used in this work is suction based, as shown in Fig. 4, and is appropriate for transportation of a prepreg sheet. The suction cups used for this application are of a non-marking type.
This helps to avoid the introduction of any defects in the part due
to suction cup marks.
We demonstrate the system performance on an industrial flat
plate aluminum tool as shown in Fig. 3.
FIGURE 4: Suction based Sheet Grasping Tool
In the imaging system, we use an OAK-D camera developed
by Luxonis Corporation which has 4K resolution and provides
RGB streams at 30 fps. This camera is used to determine the pick
location of the sheet at the loading station; refer to Section 4 for further details. At the sheet placement location, we use Intel's RealSense D415 camera. This camera provides RGB streams at 30 fps and a resolution of 720p.¹

¹Note: For a highly accurate and high-speed system, it is recommended to use 4K cameras with higher frame rates, as they are optimal for real-time implementation.
The software system is designed in the ROS (Robot Operating System) middleware. We use OpenCV modules for image processing. Both cameras have open-source support for integration with ROS. Communication with the robot is accomplished over TCP/IP. The overall system architecture is shown in Fig. 5.
FIGURE 5: System Architecture
4 OVERVIEW OF SHEET PICK-UP
The sheet pickup system shown in Fig. 3 uses a 4K OAK-D
camera looking down over a table with a flat 1/4-inch sheet of glass covered with a stick-resistant nylon mesh. Beneath the
glass is a white sheet that serves as a background for the image
detection to more easily filter out the sheet. Pickup is executed
with suction cups placed along an adjustable rail robotic tool. The
suction cups are actuated with a vacuum poppet solenoid valve attached to a relay and an Arduino Uno communicating with ROS.
Suction is generated with a vacuum pump emptying two 15 gallon
vacuum chambers.
4.1 CAMERA EXTRINSICS CALIBRATION
The overhead 4K camera is auto-calibrated on each frame to
account for camera movement over time. This is done by detecting
Aruco markers mounted to the corners of the glass but underneath
the nylon mesh, see Fig. 6. The glass and robot are mounted to
the same table so the glass features stay fixed in the robot frame.
Initial robot calibration entails moving the robot Tool Center Point
(TCP) to the bottom left corner of each Aruco marker and saving
the Cartesian point in the robot frame. These points can then be
tied to the Aruco point in pixel space that is easily recognizable
by the camera. A transformation from the pixel value of the
Aruco marker to robot Cartesian space can then be calculated to yield a Cartesian point for each pixel on the glass plane. This transformation is calculated for every captured frame to yield the calibrated Cartesian point in the robot's frame for each detected pixel of the sheet corners.

FIGURE 6: Pickup Camera View
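As a concrete illustration of this calibration step, the sketch below maps a detected pixel to robot-frame coordinates via a plane homography fit through the four marker corners. The marker IDs, their robot-frame coordinates, and the dictionary choice are illustrative assumptions, not values from the paper, and the ArucoDetector class assumes OpenCV 4.7 or newer (older versions expose cv2.aruco.detectMarkers directly).

```python
import cv2
import numpy as np

# Robot-frame XY (meters) of the bottom-left corner of each Aruco marker,
# recorded once by jogging the TCP to each corner. IDs and coordinates
# here are placeholders for illustration.
ROBOT_POINTS = {0: (0.30, -0.25), 1: (0.30, 0.25),
                2: (0.90, 0.25), 3: (0.90, -0.25)}

def pixel_to_robot_homography(frame):
    """Detect the corner markers and fit a pixel -> robot-XY homography."""
    detector = cv2.aruco.ArucoDetector(
        cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is None or len(ids) < 4:
        return None  # not all markers visible in this frame
    pix, rob = [], []
    for quad, marker_id in zip(corners, ids.flatten()):
        if marker_id in ROBOT_POINTS:
            pix.append(quad[0][3])            # bottom-left corner in pixels
            rob.append(ROBOT_POINTS[marker_id])
    H, _ = cv2.findHomography(np.array(pix), np.array(rob))
    return H

def pixel_to_robot(H, u, v):
    """Map one pixel on the glass plane to robot-frame XY."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]
```

Recomputing H on every frame, as described above, keeps the mapping valid even if the camera drifts slightly over time.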
4.2 IMAGE BASED SHEET LOCALIZATION
The camera raw frame image, as shown in the first image of Fig.
7a requires multiple filters and algorithms to reliably detect the
corners of the sheet. First, the input image is cropped based on the
given ID of the top left and bottom right Aruco marker to remove
any part of the image that is not showing the white table and sheet.
The detected Aruco markers are also hidden with a white circle
for the next steps of sheet detection. This image is then converted
to gray-scale, and a bilateral filter is used to reduce noise. Next, a threshold filter is used to convert all pixels to either black or white. Three iterations of erosion followed by three iterations of dilation remove white spots inside the boundary of the sheet, often caused by lighting reflections. In our experimental setup, we controlled the indoor ambient lighting. In a manufacturing setting where flood lighting might be used, additional filtering may be necessary to remove the effects of glare.
A Canny edge detection algorithm is then used to detect
contours in the filtered image, and the contour with the maximum pixel length is determined to be the outline of the sheet to pick up. This output
is an ordered list of pixel values that creates a loop in the image.
To find the corners of this contour the Ramer–Douglas–Peucker
algorithm is used to decimate the curve to line segments of fewer
corner points. To find the corners accurately, a low epsilon value
is chosen which often yields multiple points that are not true
corners. These output points are then iterated through to filter
out false corners. False corners can be identified by looking to
see if the angle between the previous and next corner point in the
loop is greater than a given angle value. The Detected Corners
image in Fig. 7b shows false corners in dark red, followed by
the filtered true corners in yellow. The center of the discovered
contour shape is shown with a blue circle and can be used to help
with identification of the sheet for specific cases.
Finally, the key points are detected for this sheet given the input sheet geometry. We handle anomalies and residual false corners by validating the outputs of this corner detection procedure against the known sheet geometry. This makes the process much more robust to the anomalies that occur in rare cases.
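The sketch below strings the described steps together with OpenCV. The filter parameters (bilateral diameter, threshold level, RDP epsilon, and the angle cutoff) are illustrative assumptions; the paper does not report its exact values.

```python
import cv2
import numpy as np

def detect_sheet_corners(cropped_bgr, angle_cutoff_deg=150.0):
    """Sheet corner detection sketch: filter, take the longest contour,
    decimate with Ramer-Douglas-Peucker, then drop near-straight points."""
    gray = cv2.cvtColor(cropped_bgr, cv2.COLOR_BGR2GRAY)
    smooth = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)
    _, binary = cv2.threshold(smooth, 127, 255, cv2.THRESH_BINARY)
    kernel = np.ones((5, 5), np.uint8)
    binary = cv2.erode(binary, kernel, iterations=3)    # remove bright specks
    binary = cv2.dilate(binary, kernel, iterations=3)
    edges = cv2.Canny(binary, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)
    outline = max(contours, key=lambda c: cv2.arcLength(c, closed=True))
    # A low epsilon keeps true corners at the cost of some false ones.
    poly = cv2.approxPolyDP(outline, epsilon=3.0, closed=True).reshape(-1, 2)
    corners = []
    for i in range(len(poly)):
        prev_pt, pt, next_pt = poly[i - 1], poly[i], poly[(i + 1) % len(poly)]
        v1 = (prev_pt - pt).astype(float)
        v2 = (next_pt - pt).astype(float)
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if angle < angle_cutoff_deg:   # near-180 deg points are false corners
            corners.append(pt)
    return np.array(corners)
```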
4.3 SHEET PICK-UP EXPERIMENTS
Pickup experiments were conducted using the output from the
corner detection system and transforming this to the robot’s frame
of reference. Specific corner points can be identified given a
known sheet geometry to localize the ideal sheet origin point.
This generalizes our method for different sheet geometries. In
this experiment the two corners along the longer notch edge are
determined to be the sheet key points. These are used to find the
sheet pose with respect to the robot frame. To ensure our method is independent of sheet geometry, we do not use the notch for any sort of feature extraction or pose estimation. The notch is a
common feature used in the industry to ensure manual alignment
during sheet layup operations. Such notches are specific to the
part being manufactured. After experimentation, the ideal offset
from the tool TCP to the detected sheet frame was found to pickup
the sheet reliably. Experiments were run to pick up the sheet in different locations and orientations and then drop it off at the placement location. The input location variation is discussed later and shown in Fig. 15. All tests placed the two key sheet points at the placement location with a precision within 10 mm.
5 VISUAL SERVO ARCHITECTURE
In this section, we present the visual servo methodology used in
sheet placement. As discussed in Section 4, the average error in sheet placement using an open-loop system is in the range of 10 mm. In high-performance applications such as aerospace manufacturing, the maximum allowable error is expected to be less than ±2.54 mm. To achieve such high accuracy, we have to employ visual servo techniques to compensate for the error in picking up the sheet.
5.1 CAMERA CONFIGURATION
In this work, we have employed an eye-to-hand configuration, i.e., the camera is mounted above the tool region of interest where
the sheet will be placed. As shown in Fig. 8a, the RealSense is mounted on top of the tool. In general, the eye-to-hand configuration can lead to lower precision due to occlusions and camera proximity factors. In our case, the RealSense is placed at a distance of 50 cm from the tool, and occlusions are avoided, as shown in Fig. 8b. This makes it suitable for detecting the visual features that we use for performing visual servo; we describe these features in the next section.

FIGURE 7: (a) Filters applied to the input pick-up image. (b) Algorithms run on the filtered image.

FIGURE 8: (a) The RealSense mounted on top of the tool. (b) POV of the RealSense camera frame.
5.2 Visual Servo Control Law
In this work, since we are using an eye-to-hand camera configuration, our visual servo control law can be represented as follows:

$$\Delta T = \lambda \, \hat{J} \, \Delta s \qquad (1)$$

where $\Delta T$ represents the change in position in the robot control frame, $\hat{J}$ represents the pseudo-inverse of the image Jacobian matrix, and $\Delta s$ is the change in the value of the image feature; in our case this is the pixel values of the two points representing the edge of the sheet (refer to Fig. 11). $\lambda$ is the gain, which can be tuned to accelerate convergence. In some cases a dynamic gain might be more efficient, but through experimentation we found that a fixed gain works well enough for our application.
Now, let us define the terms $\Delta T$ and $\Delta s$ in detail. $\Delta T$ is the change in pose of the tool center point (TCP) frame in the robot control frame of reference. Let the TCP's pose be defined by a vector $T = [X, Y, Z, \alpha, \beta, \gamma] \in \mathbb{R}^6$, where $[X, Y, Z]$ are the Cartesian coordinates and $[\alpha, \beta, \gamma]$ are the Z-Y-X Euler angles of the TCP.
Although $T \in \mathbb{R}^6$, the visual servo control scheme can be performed in a reduced-dimensional space. Hence, $\Delta T$ can be of a dimension lower than 6. In our case, we can assume that $[Z, \gamma, \beta]$ are redundant dimensions, as they are independent of the variations due to pick location determination. Hence, $\Delta T = [\Delta X, \Delta Y, \Delta \alpha] \in \mathbb{R}^3$.
$\Delta s$ in our case embodies the change in the sheet's alignment feature edge. We define $\Delta s$ by the change in pixel values of the endpoints of this line segment. Let $P_1 = [x_1, y_1]$ and $P_2 = [x_2, y_2]$ be the pixel values of the end points of the line segment. Hence, $s$ is defined by $[x_1, y_1, x_2, y_2]$.
Thus, Equation (1) can be written as follows:

$$\begin{bmatrix} \Delta X \\ \Delta Y \\ \Delta \alpha \end{bmatrix} = \lambda \, \hat{J} \begin{bmatrix} \Delta x_1 \\ \Delta y_1 \\ \Delta x_2 \\ \Delta y_2 \end{bmatrix} \qquad (2)$$
Based on this control law, we need to estimate the image Jacobian $J$ that will help us evaluate the best possible value of $\Delta T$ and compute an optimal trajectory for the robot to reach the goal location. Additionally, we need to reliably track the edge of the sheet defining the line segment $s$. The image Jacobian matrix $J$ can be analytically determined as mentioned in [1]. However, due to camera intrinsic calibration errors and several other factors, it is advisable to estimate this matrix online, as the Jacobian is an entity dependent on local deltas in position and visual features.
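As a minimal sketch of applying Equation (2), the update below maps the current feature error through the pseudo-inverse of an estimated Jacobian; the gain value here is an assumption, not the one used in the paper.

```python
import numpy as np

def control_update(J, s, s_goal, lam=0.5):
    """Equation (2) sketch: pose increment from the feature error.
    J: (4, 3) image Jacobian mapping [dX, dY, dAlpha] to pixel deltas.
    lam: fixed gain (0.5 is an assumed value)."""
    ds = np.asarray(s_goal, float) - np.asarray(s, float)  # [dx1, dy1, dx2, dy2]
    return lam * np.linalg.pinv(J) @ ds                    # [dX, dY, dAlpha]
```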
Fig. 9 showcases the non-redundant degrees of freedom that we optimize for in the visual servo control in the robot's frame.
FIGURE 9: The Non-redundant degrees of freedom in the Robot’s
control frame
5.3 IMAGE FEATURE EXTRACTION
The image features that we use in visual servo have to be carefully
selected in order to achieve high accuracy. As mentioned previ-
ously, when dealing with a deformable object such as a sheet, it is
important to identify the visual features that showcase minimal
variation based on the changes in pick-up location. Additionally,
we need features that are not easily occluded to achieve fast and
accurate placement. In our experiments, we use the edge shown in Fig. 11 as the alignment image feature. This edge is the feature of interest and is used to define the error metric for the visual servo control law.
Now we describe in detail the image processing pipeline to
track this visual feature accurately and reliably. The same sheet
detection methodology described in Section 4 was used to find
the output corner locations. An open contour had to be found
for the edge of the sheet in this scenario because the image is
cropped across the middle of the sheet to avoid the need to filter
out the robot and suction gripper tool obstructing the view of the
sheet. The filters and algorithms were executed on this cropped
input image to output the corners as shown in Fig. 10a. The
Aruco markers were placed in the top left and bottom right of the
desired region to crop the image prior to processing. In the output
drop-off detection image shown in Fig. 10b these markers are
highlighted in green. These markers also serve as the reference for
a reliable placement location, as they are fixed to the placement
mold. An offset was set from these pixels to the ideal visual servo corner pixels. These goal pixels are used to line up the identified sheet edge corners.
FIGURE 10: (a) Filters applied to the input placement image. (b) Placement image detection output.
It is important to note that using a line fitting algorithm would
not be ideal in this application because the sheet edges sag when
the suction cups are a few inches away from the edge. For a
different application with stiffer sheets, corner value accuracy
could be further improved by fitting a line along the pixel edge
points between detected corners. Lines on either side of each corner would have an intersection yielding a corner location with sub-pixel accuracy. This can also improve accuracy by more than a full pixel in some cases where the filtering of the input image caused the true corner to be removed from the detected sheet edge contour.
FIGURE 11: The Image Feature used in Visual Servo
6 REAL-TIME JACOBIAN ESTIMATION
The control law discussed in Section 5.2 entails estimation of the image Jacobian matrix $J$. In this work, we are operating the robot under position control mode by executing trajectories at maximum velocity. This enables high-speed execution of the process without compromising on accuracy. There are several techniques to estimate the image Jacobian matrix, as discussed in Section 2. The image Jacobian matrix is generally computed by performing minute perturbations about the current position of the robot. This gives us the values of $\Delta T$ and $\Delta s$ that can then be used to estimate the Jacobian at a certain position.
Performing perturbations can lead to robot motion that is not
smooth. Additionally, such perturbations will increase the overall
process time. In order to achieve continuous and smooth real-time
motion, we propose a methodology in which we collect enough
points in the neighbourhood comprising the robot's nominal current pose and the desired goal location. The sheet pickup system reliably gets the robot within a vicinity of $\pm 10$ mm and $\pm 5^{\circ}$ of the desired goal location. Hence, we collect position and image feature data within this region to generate enough samples for Jacobian estimation. Fig. 12 showcases the data collection phase,
where we generate time series data of the visual features and the corresponding Cartesian robot pose. In the proposed system, the RealSense D415 operated at 30 fps, i.e., approximately one frame every 30 ms. The robot's position data can be polled every 1 ms, making the RealSense the bottleneck for time series data generation. We performed Unix timestamp matching for the two systems to
minimize any sort of phase difference. The robot was commanded to perform a spiral motion such that the x-y variation can be parameterized as follows: $x = a\cos(t)$ and $y = a\sin(t)$. We varied the Euler angle $\alpha$ sinusoidally with time. In this manner, based on the discretization factor for $t$, we can generate numerous samples. On experimentation, we chose $a = 10$ mm and $t = [0, \pi/10, \ldots, 4\pi]$.
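A sketch of generating these sampling waypoints is shown below. The amplitude of the sinusoidal $\alpha$ variation is an assumption, since the paper only specifies $a$ and the discretization of $t$.

```python
import numpy as np

# Waypoints for the Jacobian-estimation sampling motion: circular x-y
# variation with a sinusoidal Euler-angle variation, expressed as offsets
# from the nominal pose. a and t follow the paper; the alpha amplitude
# is an assumed value.
a = 10.0                                      # mm
t = np.arange(0.0, 4.0 * np.pi + 1e-9, np.pi / 10.0)
x = a * np.cos(t)                             # mm
y = a * np.sin(t)                             # mm
alpha = np.deg2rad(5.0) * np.sin(t)           # rad, assumed +/-5 deg amplitude
waypoints = np.stack([x, y, alpha], axis=1)   # one row per commanded sample
```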
Before the trajectory execution begins, we start the time series data collection and further process the RealSense's image feature data and the robot's position data to generate a one-to-one correspondence between the two and clean up any missing RealSense data. In this manner, a time series is generated which can then be used to compute the Jacobian. The sample generation phase is quick and takes about 4-5 seconds on average at the maximum velocity of the robot.²

²Note: This time can be further reduced with a high-speed camera. With the aforementioned camera, this was the best time that could be achieved without introducing any blur in the captured frame.
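The one-to-one correspondence step can be sketched as a nearest-timestamp match, shown below under the assumption that both streams carry Unix timestamps in seconds and that the robot stream is sorted.

```python
import numpy as np

def align_series(robot_t, robot_poses, cam_t, cam_features):
    """Pair each camera frame (~30 ms apart) with the nearest robot
    sample (~1 ms apart) by timestamp. robot_t must be sorted ascending."""
    idx = np.searchsorted(robot_t, cam_t)
    idx = np.clip(idx, 1, len(robot_t) - 1)
    take_prev = (cam_t - robot_t[idx - 1]) < (robot_t[idx] - cam_t)
    idx = idx - take_prev.astype(int)          # pick the closer neighbour
    return robot_poses[idx], cam_features
```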
FIGURE 12: Sampling the neighborhood for Jacobian Estimation
Once we generate the samples, we compute the Jacobian in real time based on the current position of the robot. We search for the $k = 8$ nearest neighbours at a certain location of the robot and compute the deltas to estimate the Jacobian. This Jacobian is then used for computing the robot trajectory (refer to Section 7 for more details). The Jacobian estimation idea is given in Algorithm 1.
The NearestNeighbors() function mentioned in Algorithm 1 finds the $k$ nearest neighbors of $x$ in the dataset $Y$. These neighbors are then used to evaluate the $\Delta T$ and $\Delta S$ values that are used to compute the Jacobian $J$. In Algorithm 1, the term $\widehat{\Delta T}$ is the pseudoinverse of $\Delta T$. Using the procedure mentioned in Algorithm 1, we can compute the Jacobian matrix at any discrete time instance $t$.
During the data collection phase, and when the robot is executing the computed trajectories at a high speed, the tool carrying the sheet might have a tendency to vibrate about a nominal position; refer to Fig. 13 for further details. This is predominant in cases
when the component being transported is considerably large. The
Jacobian is sensitive to the accuracy of the image feature pixel
values. These vibrations might lead to errors while recording the
pixel information of the image feature. The Jacobian estimation
under such circumstances might not be robust, hence causing the
system to be unstable and not converge. In order to overcome this
issue, we introduce a vibration compensation scheme by account-
ing for the tool vibrations. As the tool vibrates about a nominal
point, we can compensate for the error in pixel values of the vi-
brating features by averaging the recorded pixels over a discrete
period of time. This reduces the overall error in pixel values and
ensures accuracy in computation of a stable Jacobian matrix.
FIGURE 13: The green segment shows the nominal position about which the visual features are vibrating. The red segment is how the camera might observe these features over a small period of time.
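A minimal sketch of this compensation is a sliding-window average of the feature vector; the window length is an assumed value.

```python
from collections import deque
import numpy as np

class FeatureStabilizer:
    """Estimate the nominal feature position by averaging the pixel
    vector [x1, y1, x2, y2] over a short sliding window."""
    def __init__(self, window=10):            # window length is assumed
        self.buffer = deque(maxlen=window)

    def update(self, s):
        self.buffer.append(np.asarray(s, dtype=float))
        return np.mean(self.buffer, axis=0)   # stabilized feature vector
```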
Once the Jacobian is estimated, we can compute the trajectory of the robot to reach the desired goal location (refer to Section 7 for further details).
7 TRAJECTORY PLANNING
Once we have a Jacobian estimation scheme, we can use the control law in Equation (2) to find the optimal incremental position for the robot to reach the desired goal location. We developed an
Algorithm 1: Realtime Jacobian Estimation
Input: P ∈ ℝ³: current Cartesian position of the robot
       s ∈ ℝ⁴: corresponding image feature vector at P
       k: number of desired nearest neighbours
Output: J ∈ ℝ⁴ˣ³
Data: T ∈ ℝ³ˣⁿ: time series Cartesian position data
      S ∈ ℝ⁴ˣⁿ: time series image feature data
      n: number of samples generated
1: Y = [[T(1,:), S(1,:)], [T(2,:), S(2,:)], ..., [T(n,:), S(n,:)]]
2: x = [P, s]
3: nbrs = NearestNeighbors(n_neighbors = k, Y, x)
4: for i ← 1 to k do
5:     ∆T(i,:) = Y(nbrs(i), 1) − P
6:     ∆s(i,:) = Y(nbrs(i), 2) − s
7: J = ∆S · ∆T̂
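A NumPy sketch of Algorithm 1 is given below. The least-squares step solves $\Delta s \approx J \, \Delta T$ over the $k$ neighbours in one shot via the pseudoinverse. Note that a practical implementation may need to scale pose (mm, rad) and pixel coordinates before the nearest-neighbour search, a detail the paper does not specify.

```python
import numpy as np

def estimate_jacobian(P, s, T_data, S_data, k=8):
    """Algorithm 1 sketch.
    P: (3,) current pose [X, Y, alpha];  s: (4,) features [x1, y1, x2, y2]
    T_data: (n, 3) pose samples;  S_data: (n, 4) feature samples."""
    Y = np.hstack([T_data, S_data])                      # joint samples
    x = np.concatenate([P, s])
    idx = np.argsort(np.linalg.norm(Y - x, axis=1))[:k]  # k nearest neighbours
    dT = T_data[idx] - P                                 # (k, 3) pose deltas
    dS = S_data[idx] - s                                 # (k, 4) feature deltas
    # dS ~ dT @ J.T  =>  J = (pinv(dT) @ dS).T, a (4, 3) image Jacobian
    return (np.linalg.pinv(dT) @ dS).T
```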
optimization routine to find the optimal value of $\Delta T$ by defining an error metric as described in Equation (3):

$$\varepsilon = \kappa (D_1 + D_2) \qquad (3)$$

where $\varepsilon$ is the image feature error and $\kappa$ is the error gain constant. Refer to Fig. 14 for the definitions of $D_1$ and $D_2$.
FIGURE 14: $D_1$ is the Euclidean distance between the position of the end point $P_1$ in pixel coordinates and the final desired location $P_1^*$. $D_2$ is defined similarly for the other end point $P_2$ and $P_2^*$.
We can now perform trajectory planning as described in Algorithm 2. The error threshold $\varepsilon_{threshold}$ in our experiments was set to 2 and the value of $\kappa$ was set to 1. In Algorithm 2, we have
certain methods to acquire the latest values of the robot pose
(getCurrentRobotPose()) and the image feature values (getCur-
rentImageFeatures()). The computeRealtimeJacobian() function
is an implementation of Algorithm 1. The minimize() function in Algorithm 2 is a quasi-Newton gradient-based method (specifically BFGS [25]) that helps in solving constrained optimization problems. In our case, this method returns an optimal value for the change in robot pose $\Delta T$, which can then be used to compute a feasible trajectory. Once we have an optimal value of $\Delta T$, we command the robot to move to this position with the maximum possible velocity. The updateRobotPositionAsync() function executes the motion commands in an asynchronous manner, enabling real-time updates to the trajectory. Hence, while the robot is in motion, its position is constantly updated in real time until the error value converges.
The proposed position control method overcomes the limitations of operating the robot in velocity control while performing visual servo. Whereas under velocity control the robot's velocity approaches zero as it nears the goal, with our method the robot can approach the goal at higher velocities while achieving the appropriate placement accuracy. We discuss this in detail in the results section.
Algorithm 2: Trajectory Planning
Initialize:
    ε = ∞
    ε_threshold = 2
    k = 8
    ∆T_init = [X_init, Y_init, α_init]
    bounds = [[−0.1, 0.1], [−0.1, 0.1], [−0.087, 0.087]]
    s* = getGoalImageFeatures()
1: while ε ≥ ε_threshold do
2:     P = getCurrentRobotPose()
3:     s = getCurrentImageFeatures()
4:     J = computeRealtimeJacobian(P, s, k)
5:     ∆T = minimize(computeError(s, s*), ∆T_init, bounds)
6:     updateRobotPositionAsync(∆T)
7:     s = getCurrentImageFeatures()
8:     ε = computeError(s, s*)
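The inner optimization of Algorithm 2 can be sketched with SciPy as below. Two caveats: the paper does not spell out how $\Delta T$ enters computeError(), so here we assume the estimated Jacobian predicts the feature motion $s + J \Delta T$; and SciPy's plain BFGS does not accept bounds, so the bounded variant L-BFGS-B stands in for the BFGS method cited in the paper.

```python
import numpy as np
from scipy.optimize import minimize

KAPPA = 1.0  # error gain from the paper

def compute_error(s, s_goal):
    """Equation (3): kappa * (D1 + D2), in pixels."""
    d1 = np.linalg.norm(s[0:2] - s_goal[0:2])
    d2 = np.linalg.norm(s[2:4] - s_goal[2:4])
    return KAPPA * (d1 + d2)

def servo_step(J, s, s_goal, dT_init, bounds):
    """One Algorithm 2 iteration: find the pose increment that minimizes
    the predicted feature error, assuming features move as s + J @ dT."""
    cost = lambda dT: compute_error(s + J @ dT, s_goal)
    res = minimize(cost, dT_init, method="L-BFGS-B", bounds=bounds)
    return res.x    # [dX, dY, dAlpha]

# Example bounds matching Algorithm 2 (meters, meters, radians):
# bounds = [(-0.1, 0.1), (-0.1, 0.1), (-0.087, 0.087)]
```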
8 EXPERIMENTAL RESULTS
In this section, we present the results of the proposed sheet trans-
port system. In order to test the repeatability and reliability of
the system, we conducted 50 trials by loading the sheet in varying initial configurations (refer to Fig. 15). We varied the loading position within a bounding region of $\pm 10$ cm and the orientation, i.e., $\alpha$, by $\pm 10^{\circ}$. This captured the variability in sheet loading commonly experienced in the industry. The bounds for variation in position and orientation were selected based on a reachability study of the robot. With a larger robot, we can increase the magnitude of these variations.
FIGURE 15: Different positions and orientations of the sheet while loading
To showcase the rate of convergence of our position control
based visual servo we also recorded the number of times the
Jacobian was computed. We have characterized the placement
accuracy in terms of final position and orientation error; refer to Table 1. As can be seen from the table, our average position error is 0.8 mm and the maximum error recorded was 1.6 mm. Additionally, the average orientation error was $0.2^{\circ}$, which is well within the desired tolerances for composite layup processes in the industry. The results also characterize the efficiency and rate of convergence of our process: as can be seen from Table 1, on average only two Jacobian computations were required for the control loop to converge. This supports our claim that the proposed process is faster than traditional velocity-control-based visual servo techniques. Fig. 16 depicts how our system plans the
path from an initial misaligned position to the desired placement
location.
TABLE 1: Experimental Results

No. of Trials | Position Error (Avg / Max) | Orientation Error (Avg / Max) | Jacobian Estimation Iterations (Avg / Max)
50            | 0.8 mm / 1.6 mm            | 0.2° / 0.5°                   | 2 / 4
FIGURE 16: (a) Initial misalignment of the sheet from the goal position. (b) Final position of the sheet once visual servo has converged.
9 CONCLUSIONS
We have demonstrated that ideas inspired by visual servo can be used to significantly improve the accuracy of sheet placement in the presence of significant inaccuracies during sheet pickup. We were able to accurately estimate Jacobians with a small amount of carefully planned sheet motion and use them to compute the correct sheet placement locations. This speeds up the sheet transport process compared with classical velocity control. Our framework is also capable of handling tool vibrations.

Our current approach cannot handle tool geometries that cause significant occlusions. We plan to extend our approach to incorporate additional cameras; for example, our setup can be augmented with a camera-in-hand to handle occlusions. In the current work, we estimated the Jacobian matrix numerically at a given configuration. This work can be extended to learn functions defining the elements of the Jacobian matrix, which can further reduce the time needed for accurate placement. Sheets might also undergo significant distortion; we need to utilize 3D imaging to account for this during placement planning.
Acknowledgement: This work is supported in part by National Science Foundation Grant #1925084. Opinions expressed are those of the authors and do not necessarily reflect opinions of the sponsors.
References

[1] Hutchinson, S., Hager, G., and Corke, P., 1996. "A tutorial on visual servo control". IEEE Transactions on Robotics and Automation, 12(5), pp. 651–670.
[2] Allibert, G., Courtial, E., and Chaumette, F., 2010. "Predictive control for constrained image-based visual servoing". IEEE Transactions on Robotics, 26(5), Oct., pp. 933–939.
[3] Bourquardez, O., Mahony, R., Guenard, N., Chaumette, F., Hamel, T., and Eck, L., 2009. "Image-based visual servo control of the translation kinematics of a quadrotor aerial vehicle". IEEE Transactions on Robotics, 25(3), pp. 743–749.
[4] Thuilot, B., Martinet, P., Cordesses, L., and Gallice, J. "Position based visual servoing: keeping the object in the field of vision". In Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No.02CH37292), IEEE.
[5] Corke, P., and Hutchinson, S. "Real-time vision, tracking and control". In Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065), IEEE.
[6] Flandin, G., Chaumette, F., and Marchand, E. "Eye-in-hand/eye-to-hand cooperation for visual servoing". In Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065), IEEE.
[7] Corke, P., and Hutchinson, S., 2001. "A new partitioned approach to image-based visual servo control". IEEE Transactions on Robotics and Automation, 17(4), pp. 507–515.
[8] Palmieri, G., Palpacelli, M., Battistelli, M., and Callegari, M., 2012. "A comparison between position-based and image-based dynamic visual servoings in the control of a translating parallel manipulator". Journal of Robotics, 2012, pp. 1–11.
[9] Mansard, N., Lopes, M., Santos-Victor, J., and Chaumette, F., 2006. "Jacobian learning methods for tasks sequencing in visual servoing". In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE.
[10] Qing-sheng, K., Ting, H., Zheng-da, M., and Xian-zhong, D., 2006. "Pseudo-inverse estimation of image jacobian matrix in uncalibrated visual servoing". In 2006 International Conference on Mechatronics and Automation, IEEE.
[11] Qing, L. S., and Yue, L. S., 2015. "Online-estimation of image jacobian based on adaptive kalman filter". In 2015 34th Chinese Control Conference (CCC), IEEE.
[12] Pari, L., Sebastián, J. M., Traslosheros, A., and Angel, L. "Image based visual servoing: Estimated image jacobian by using fundamental matrix VS analytic jacobian". In Lecture Notes in Computer Science. Springer Berlin Heidelberg, pp. 706–717.
[13] Nevarez, V., and Lumia, R., 2014. "Image jacobian estimation using structure from motion on a centralized point". In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE.
[14] Mao, S., Huang, X., and Wang, M., 2012. "Image jacobian matrix estimation based on online support vector regression". International Journal of Advanced Robotic Systems, 9(4), Oct., p. 111.
[15] Liu, J., and Li, Y., 2019. "An image based visual servo approach with deep learning for robotic manipulation". ArXiv, abs/1909.07727.
[16] Bateux, Q., Marchand, E., Leitner, J., Chaumette, F., and Corke, P., 2018. "Training deep neural networks for visual servoing". In 2018 IEEE International Conference on Robotics and Automation (ICRA), IEEE.
[17] Ribeiro, E. G., de Queiroz Mendes, R., and Grassi, V., 2021. "Real-time deep learning approach to visual servo control and grasp detection for autonomous robotic manipulation". Robotics and Autonomous Systems, 139, May, p. 103757.
[18] Lee, A. X., Levine, S., and Abbeel, P., 2017. "Learning visual servoing with deep features and fitted q-iteration". arXiv preprint arXiv:1703.11000.
[19] Malhan, R. K., Kabir, A. M., Shah, B., Centea, T., and Gupta, S. K., 2018. "Hybrid cells for multi-layer prepreg composite sheet layup". IEEE International Conference on Automation Science and Engineering (CASE), Munich, Germany.
[20] Malhan, R. K., Kabir, A. M., Shah, B. C., and Gupta, S. K., 2019. "Identifying feasible workpiece placement with respect to redundant manipulator for complex manufacturing tasks". IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada.
[21] Malhan, R. K., Kabir, A. M., Shah, B., Centea, T., and Gupta, S. K., 2019. "Determining feasible robot placements in robotic cells for composite prepreg sheet layup". ASME's 14th Manufacturing Science and Engineering Conference, Erie, PA, USA.
[22] Malhan, R. K., Shembekar, A. V., Kabir, A. M., Bhatt, P. M., Shah, B., Zanio, S., Nutt, S., and Gupta, S. K. "Automated planning for robotic layup of composite prepreg". Robotics and Computer-Integrated Manufacturing, 67, p. 102020.
[23] Chen, Y., Joseph, R. J., Kanyuck, A., Khan, S., Malhan, R. K., Manyar, O. M., McNulty, Z., Wang, B., Barbic, J., and Gupta, S. K., 2021. "A digital twin for automated layup of prepreg composite sheets". ASME's 16th Manufacturing Science and Engineering Conference, Virtual, Online.
[24] Manyar, O. M., Desai, J., Deogaonkar, N., Joesph, R. J., Malhan, R., McNulty, Z., Wang, B., Barbic, J., and Gupta, S. K., 2021. "A simulation-based grasp planner for enabling robotic grasping during composite sheet layup". In 2021 IEEE International Conference on Robotics and Automation (ICRA), IEEE.
[25] Nocedal, J., 2006. Numerical Optimization. Springer New York, New York, NY.