Reconfigurable Micro-Assembly System
for Photonics Applications
Dan Popa, Byoung Hun Kang, Jeongsik Sin, Jie Zou
Center for Automation Technologies (CAT)
Rensselaer Polytechnic Institute
Troy, New York 12180, USA
Abstract— The assembly of parts with dimensions of several
hundred microns or less is a challenging problem, and
has received increasing attention for applications in
areas such as telecommunications, automotive, and
bio-technology. Current state-of-the-art micro-assembly
systems often rely on specialized devices and software. In this
paper we present a reconfigurable assembly system
designed to handle micro-parts in such a way that high
precision actuation and sensing is used only in the sub-
systems where it is actually necessary. Aspects related to
part gripping, fixturing, sensing, motion and bonding are
discussed. Analysis and experiments are presented to
show that this architecture can lead to a relatively low
cost and flexible assembly solution.
1 Introduction
Automated assembly of MEMS has become a
necessary technology to reduce manufacturing
costs and increase production volume. Today, as more
Micro-Mechanical, Opto-Electronic, and, more recently,
Micro-Fluidic devices are commercially available, the
cost and complexity of equipment, as well as the level of
human skill required to assemble such devices, have also
increased. Significant progress has already been reported
in MEMS assembly research, including gripping,
handling, positioning and bonding of parts with
dimensions between a few and several hundred µm
[14,15,17]. Due to the small size of these parts, fairly
specialized micro-grippers, fixtures, and positioning
systems have been developed [3,4,8,9].
Most of these assembly solutions employ fairly
conventional assembly concepts scaled down to the
micro-scale, but their application becomes increasingly
difficult as part dimensions shrink below 100 µm.
For such parts, commercially available assembly platforms
are increasingly expensive and require the intervention of
human operators for the assembly of parts with tight
tolerances.
More recently, due to the increased demand for
MOEMS in telecommunications, automated assembly
platforms for such devices are receiving special attention
[1,2,5,6]. Applications include fiber-optics and micro-
optical component handling, such as micro-optical
benches, 1xN and NxN fiber arrays (Figs. 1, 2).
Fig. 1: 1×N V-groove array    Fig. 2: N×N V-groove array
2 System Architecture
The Center for Automation Technologies at RPI has
designed and integrated a reconfigurable micro-assembly
system which can be used to assemble MOEMS and
Micro-Fluidic devices. The system includes relatively
low-cost custom and commercial sub-systems. The
strength of our design lies in the system architecture,
which keeps the final assembly objective in mind rather
than basing performance entirely on precision positioning
(“task-driven” assembly). Another premise of our design
is the ability to reconfigure assembly tasks and to
replace system hardware, such as by adding sensors and
fixtures. The micro-assembly system is composed of five
sub-systems, coordinated by a supervisory control
program on a WinNT PC. The sub-systems include:
• A coarse positioning system.
• A fine positioning system.
• A gripper and fixturing system.
• A high-resolution vision system.
• An adhesive dispensing and curing system.
The control application interfaces with the different
sub-systems independently, using multi-threaded routines
coordinated via a separate assembly planning module.
0-7803-7272-7/02/$17.00 © 2002 IEEE. Proceedings of the
2002 IEEE International Conference on Robotics &
Automation, Washington, DC, May 2002.
Fig 3: High-level assembly system user interface
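The supervisory structure described above can be sketched as follows; the subsystem names and the queue-based hand-off are illustrative stand-ins, not the actual WinNT control application:

```python
import threading
import queue

# Hypothetical sketch of the supervisory control loop: each sub-system
# (coarse stage, fine stage, gripper, vision, dispenser) is served by its
# own worker thread, and the planner posts commands through a queue.
class Subsystem(threading.Thread):
    def __init__(self, name):
        super().__init__(daemon=True)
        self.name_ = name
        self.commands = queue.Queue()
        self.done = queue.Queue()

    def run(self):
        while True:
            cmd = self.commands.get()
            if cmd is None:          # shutdown sentinel
                break
            # a real system would talk to the DSP board / ORC here
            self.done.put((self.name_, cmd, "ok"))

def supervise(plan):
    """Execute a list of (subsystem, command) steps sequentially."""
    subs = {n: Subsystem(n) for n in
            ("coarse", "fine", "gripper", "vision", "dispenser")}
    for s in subs.values():
        s.start()
    results = []
    for name, cmd in plan:
        subs[name].commands.put(cmd)
        results.append(subs[name].done.get())   # wait for completion
    for s in subs.values():
        s.commands.put(None)
    return results
```

A plan such as `supervise([("vision", "locate groove"), ("coarse", "move to groove")])` then executes the steps in order while each device runs in its own thread.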
A general purpose, large space positioning robot with
4 DOFs (x,y,z, and θ), RobotWorld by Yaskawa, is used
as a coarse motion manipulator. Several “pucks” are used
to hold various end-effectors. One of them is a micro
gripper for handling optical fibers. Another end-effector
includes a syringe tip and UV curing lamp used by the
adhesive dispensing and curing system. For achieving
micron level accuracy, the system is equipped with a fine
motion manipulator which also has 4DOFs (x,y,z,θ), and
is controlled by a PC-based DSP motion control card.
Fig 4: Mechanical system diagram
Using the same PC-based DSP board we control the
micro-gripper, the UV adhesive dispensing and curing
system, as well as the part fixturing system.
To determine the relative position between fixture
and assembled parts we used a vision system, including a
½” CCD camera, optics, and the Intel Open Source
Computer Vision (OpenCV) libraries for image processing.
In this paper, gripping, fixturing, and vision
algorithms have been customized for the assembly of 1xN
(N=32) fiber arrays.
To assemble the 32-fiber connector, it was necessary
to fabricate a V-groove chip using a standard KOH wet-
etching micro-machining process.
3 System Components
3.1 Coarse Positioning System
The role of the coarse positioning system is to cover
a large workspace while providing enough positional
accuracy to bring parts into the range of the vision
and fine positioning systems. The Yaskawa Robot we
used is equipped with several 4-DOF end-effectors, using
open-loop control along the x and y axes and closed-loop
control for the z and θ axes.
Fig 5: Coarse Manipulator and its end-effector.
Figure 5 shows a 4-DOF puck equipped with two end-
effectors: a micro gripper with two arms and a dispensing
syringe with a UV curing gun.
The accuracy of the manipulator is 12 microns along
x, y, with a resolution of 0.5 microns. Each manipulator is
controlled by an ORC (Open Robot Controller),
consisting of I/O modules, a motor amplifier board, and a
motion controller on a 32-bit VME bus. The ORC
communicates with the supervisory PC via TCP/IP.
After position data from the vision system is transferred
to the ORC, the robot picks up the fibers located in the
fiber holding fixture and moves them in front of the
V-groove situated on the fine motion manipulator.
Due to the limited repeatability of the coarse
manipulator, fibers cannot be placed inside the grooves
without a fine positioning system. While the coarse
manipulator is holding the fiber in front of the V-groove,
the final alignment is performed by the fine motion
manipulator with higher accuracy.
3.2 Fine Positioning System
An additional positioning system with 4 DOFs is
used to compensate for translation and rotation
misalignments between fiber and V-groove chip. The
resolution of each linear axis is 0.02µm and that of rotary
axis is 3.16 arc-seconds. In addition, a fiber fixturing
mechanism is used to hold the fibers already placed in
the grooves until the 32-groove chip is completely
assembled. The fixturing mechanism is composed of the
so-called “indexed” and “push” fingers. The assembly
sequence of the fingers is achieved via two stepper motors
for moving each finger up and down and via one micro-
dc motor for indexing, as shown in Figure 6.
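The exact motor commands are not given in the paper; as a loose sketch, the alternating hold-and-index sequence of the two fingers might be generated like this (all action names are hypothetical):

```python
# Hypothetical sketch of the fixturing sequence: after a fiber is placed
# in groove i, the "push" finger holds it down while the "indexed" finger
# steps to the next groove. Motor interfaces are stand-ins.
def fixture_sequence(n_fibers):
    """Returns the ordered list of finger/motor actions for n_fibers fibers."""
    actions = []
    for i in range(n_fibers):
        actions += [
            ("push_finger", "down"),         # hold fiber i in its groove
            ("index_finger", "up"),          # release the indexing finger
            ("index_motor", f"slot {i+1}"),  # step to the next groove
            ("index_finger", "down"),        # re-clamp, now covering fiber i
            ("push_finger", "up"),           # free the push finger for i+1
        ]
    return actions
```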
Fig 6: Fine Stages and Fixturing Mechanism
The fingers themselves were designed as cantilever
titanium structures coated with silicone in order to
generate a compliant line contact across all fibers. The
compliance is necessary in order to offset the outer-
diameter tolerances of the fiber cladding (up to 2 µm).
3.3 Micro Gripper
To pick and hold the fiber, we used a commercially
available fiber gripper from Piezojena (Figure 7). The
gripper is piezo-actuated, with an EDM-machined groove
that closes through the levered transmission of a solid-
state hinge joint. The maximum opening of the gripper gap
is 385 µm
and the opening gap distance is controlled by proportional
voltage signals from our DSP-based controller board.
(a) Gripper design (b) Groove machined by EDM
Fig 7: Micro Gripper
The micro gripper can hold optical fibers either with
plastic jackets (250 µm diameter) or with bare cladding
(125 µm diameter).
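The proportional voltage control of the gap can be sketched as a linear map; only the 385 µm maximum opening comes from the text, and the 0–10 V command range is an assumption:

```python
# Hedged sketch of the proportional gap control described above. The
# 0-10 V command range is an assumption; only the 385 um maximum
# opening of the gripper is given in the text.
V_MAX = 10.0          # assumed full-scale DAC output, volts
GAP_MAX_UM = 385.0    # maximum gripper opening from the text

def gap_to_voltage(gap_um):
    """Voltage command that opens the gripper to gap_um microns."""
    gap_um = min(max(gap_um, 0.0), GAP_MAX_UM)   # clamp to valid range
    return V_MAX * gap_um / GAP_MAX_UM
```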
3.4 Vision System
In order to determine the position and orientation of
the fiber and V-Grooves, we used several image
processing algorithms. The information we need to align
the fiber to the V-groove chip consists of xf, yf, zf, and θf
of the optical fiber and xg, yg, zg, and θg of the V-groove
with respect to the world coordinates (RobotWorld
coordinates). We used one camera positioned above the
V-groove to determine the 6 unknown values (xf, yf, θf
and xg, yg, θg).
For our specific assembly task we used a Pulnix ½
inch CCD camera and a VZM 450 lens, achieving a field
of view of 3.2 mm, a resolution of 5 microns/pixel and a
combined measurement accuracy of 15 microns.
3.5 Bonding System
UV curing adhesives are a very convenient way to
attach optical fibers to the V-groove chip. The
advantages of this bonding method are: 1) dispensing and
curing take little time and are easy to handle, and 2)
relatively high bonding strength and low shrinkage can be
achieved.
Adhesive viscosity is a key parameter that determines
the output quality, measured both in terms of
shrinkage (which causes fiber shifts on the V-groove)
and dispensing time. Low viscosity makes dispensing
easier, but it may result in higher shrinkage.
The finger mechanism is indispensable for
eliminating the shifting effects that occur when the
adhesive cures. These shifts can cause losses of 5 to 10
dB at the interface between fibers. By holding down the
fibers on the V-groove chip, these losses can be reduced
by a factor of 10 or more.
Other important considerations are the ability of the
adhesive to flow under the fingers, and to avoid sticking
to the fingers after curing.
A syringe and a UV curing lamp connected to an
adhesive dispensing machine are mounted on the coarse
positioning end-effector and are used to bond the entire
fiber array after it is assembled.
4 Calibration for Fiber Alignment
Each positioning sub-system of our assembly
platform has its own coordinate system, and axial
displacement measurement sensors. In order to perform
the assembly tasks, the local coordinate systems must be
calibrated with respect to each other.
4.1 Calibration Between Camera, Fine
Manipulator, and V-groove Chip
The camera needs to be calibrated with respect to the
V-groove chip placed on the fine positioning system.
Because the V-groove chips are cut from a wafer using
mechanical saws, their dimensions will vary, and camera
calibration has to be repeated for each one. The
calibration procedure involves several steps:
• Bringing the V-groove chip into focus by searching
along the z-axis of the fine positioning system.
• Registering the location (xvf1, yvf1) and angle θvf1
of the first groove in the field of view.
• Registering the translation and rotation effects
along the local x and y axes in the field of view.
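The focus-search step can be sketched with a simple gradient-energy sharpness metric; the stage and camera interfaces (`z_positions`, `grab_frame`) are stand-ins for the real hardware calls:

```python
import numpy as np

# Sketch of the focus search: step the fine stage along z, score each
# camera frame with a gradient-energy sharpness metric, and keep the
# sharpest position. Stage/camera interfaces are hypothetical stand-ins.
def sharpness(img):
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx**2 + gy**2))

def find_focus(z_positions, grab_frame):
    """Return the z position whose frame maximizes the sharpness metric."""
    scores = [sharpness(grab_frame(z)) for z in z_positions]
    return z_positions[int(np.argmax(scores))]
```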
The first calibration method is based on extracting
the V-groove features from the acquired camera image.
We used two extraction algorithms in order to determine
the position and orientation of the V-grooves. This
approach assumes that the illumination of the image is
good; however, due to the V-groove depth, good lighting
is not easy to achieve. Once the lighting system is set in
place (in our case we used two gooseneck fiber-optic light
sources), even the slightest rotation of the V-groove chip
can cause the groove edges to appear blurred. Assuming
that the extracted image is reasonable, as shown in Figure
8, the grooves are augmented (or “closed”) in order to
remove false edges, using a gray-level morphology closing
operator. The image is then processed with the Canny edge
detector, and the Radon transform yields the groove
orientations.
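A minimal numpy-only stand-in for the Radon-transform step is shown below: the projection of the edge points is most concentrated when the candidate angle matches the dominant line direction. This is a sketch of the idea, not the paper's implementation:

```python
import numpy as np

# Numpy-only stand-in for the Radon-transform step: project the edge
# points along each candidate direction; the histogram of projections has
# its sharpest peak when the candidate angle matches the dominant line.
def line_orientation(edge_pts, angles_deg):
    """edge_pts: (N, 2) array of (x, y) edge coordinates.
    Returns the candidate angle (degrees) of the dominant line."""
    x, y = edge_pts[:, 0], edge_pts[:, 1]
    r = max(np.hypot(x, y).max(), 1e-9)      # fixed histogram extent
    best_angle, best_peak = angles_deg[0], -1
    for a in angles_deg:
        t = np.deg2rad(a)
        # coordinate of each point along the normal of a line at angle a
        proj = -x * np.sin(t) + y * np.cos(t)
        counts, _ = np.histogram(proj, bins=25, range=(-r, r))
        if counts.max() > best_peak:
            best_peak, best_angle = counts.max(), a
    return best_angle
```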
The second algorithm does not require crisp
definition of the edges of the V-groove. In this case, we
used a moment-based algorithm to detect the groove
position and orientation. We used the first-order moments
to determine the area and the position of the region’s
centroid, and the second-order moments to extract the
orientation of the groove from their eigenvectors.
The results, including segmentation using global
thresholding, connected-component extraction via the
Freeman chain code, and moment analysis, are shown in
Figure 10.
Fig 8: Extraction of V-groove location and
orientation: original image, closed image, edge-detection,
and Radon transform.
Fig 10: Extraction of V-groove location and
orientation: closed-component image and the resulting
moment analysis.
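The moment computation can be sketched as follows, assuming a segmented binary mask of the groove region:

```python
import numpy as np

# Sketch of the moment-based extraction: centroid from first-order
# moments, orientation from the second-order central moments
# (equivalently, the principal eigenvector of the covariance).
def region_pose(mask):
    """mask: 2-D boolean array of the segmented groove region.
    Returns (cx, cy, theta), theta in radians from the x axis."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return cx, cy, theta
```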
In order to register the effect of translations and
rotations of the fine positioning stage in the field of view,
we have made the following assumptions:
• The x and y axes of the fine positioning system
are not necessarily orthogonal. The fine
positioning system consists of several
commercial precision stages assembled together
in our lab.
• The plane of rotation corresponding to the θ
DOF is perpendicular to the optical axis of the
camera. This assumption is reasonable due to the
fact that the V-groove chip is in focus within
2 mm along the z-axis, and also because the z
coordinate is not critical for our assembly task
due to the finger mechanism.
• The x and y axes of the fine positioning system
are not aligned to the x and y coordinates of the
camera image.
The translation registration is necessary to ensure that
the fine manipulator is moving along known directions in
the field of view of the camera. If dx,dy, and dθ are the
relative displacements along the x, y, and θ axes of the
fine manipulator, we can determine the transformation
matrix from the fine manipulator coordinate system to the
camera system, by interpolating the experimental data in
Fig 11: Translation and rotation of V-grooves.
If p = (xc, yc) is the position of the end of the
V-groove in the field of view, in camera coordinates, we
relate the effect of rotations and translations of the fine
positioning stage to the camera coordinates as follows:
• For translation: Δp = A (dx, dy)ᵀ, where A is a
linear transformation matrix.
• For rotation: p′ = R(dθ)(p − p₀) + p₀, where R is
the rotation matrix and p₀ is the center of rotation.
Because the rotation DOF is mounted on the translation
stages, measuring the coordinates of several V-groove
origins after rotations will only provide confirmation of
the decoupling between the DOFs, as well as the position
of the center of rotation p₀. For translation, the four
entries of the transformation matrix A are determined from
at least four measurements. After A has been determined,
we can find the fine positioning displacements necessary to
translate the V-groove along its own direction in local
coordinates: (dx, dy)ᵀ = α A⁻¹ v, in which α is a scalar and
v is the V-groove direction in the camera coordinate
system.
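The translation calibration can be sketched as a least-squares fit for the four entries of A from the commanded stage moves and the observed image displacements; the function names are our own:

```python
import numpy as np

# Sketch of the translation-calibration step: stack the commanded
# fine-stage displacements (dx, dy) and the observed image displacements,
# then solve for the 2x2 matrix A in a least-squares sense from at least
# four measurements.
def fit_transform(stage_moves, image_moves):
    """stage_moves, image_moves: (N, 2) arrays of paired displacements.
    Returns A such that image_move ~= A @ stage_move."""
    S = np.asarray(stage_moves, float)      # (N, 2)
    P = np.asarray(image_moves, float)      # (N, 2)
    At, *_ = np.linalg.lstsq(S, P, rcond=None)   # solves S @ A.T = P
    return At.T

def stage_move_for(A, v, alpha=1.0):
    """Fine-stage displacement producing image motion alpha * v."""
    return alpha * np.linalg.solve(A, v)
```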
4.2 Calibration Between Coarse Manipulator and
Vision System
In order to determine the transformation between the
coarse manipulation system and the vision system, we
have chosen the following approach:
• Experimentally determine the translation and
rotation in the RobotWorld coordinate system
necessary to bring the fiber into focus and into
the field of view of the camera.
• Detect the position and orientation of the fiber
using the image processing algorithms.
• Using the gripped fiber, determine the
transformation matrix from the RobotWorld
coordinate system to the camera system.
• Detect the position and orientation of the fiber
relative to the V-groove, using this
transformation.
The relative position and orientation of the fiber with
respect to the V-grooves is then used in a “look and move”
closed-loop feedback scheme.
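A minimal sketch of such a look-and-move loop, with the camera measurement and stage motion abstracted as callables:

```python
import numpy as np

# Sketch of the "look and move" loop: measure the fiber pose error
# relative to the groove with the camera, command a correcting stage
# move through the calibrated matrix A, and repeat until the residual
# is small. measure_error/move_stage stand in for real hardware calls.
def look_and_move(measure_error, move_stage, A, tol=0.5, max_iter=20):
    """measure_error(): current (ex, ey) image error;
    move_stage(d): apply stage displacement d; A: calibration matrix."""
    for _ in range(max_iter):
        e = np.asarray(measure_error(), float)
        if np.hypot(*e) < tol:
            return True
        move_stage(np.linalg.solve(A, -e))   # move to cancel image error
    return False
```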
5 Dynamic Parameter ID and Control
In order to control the fine positioning manipulator,
we performed a system identification to measure the
dynamic parameters. Because of the high gear ratio of the
servos, we assumed that the fine positioning manipulator’s
axes are de-coupled.
On each axis, we used the energy method to
determine coefficients for the effective mass, effective
damping, effective spring constant and friction constant,
corresponding to a second-order model with Coulomb
friction:
m ẍ + b ẋ + k x + f sign(ẋ) = u(t),
where u(t) is the applied input.
Trajectory following: a = 1 mm/s².
We collected experimental data using a sufficiently
exciting input signal obtained by superposing three
sinusoids, 6sin(4πt), 4sin(6πt+0.3), and 2sin(8πt+0.6),
on each of the four axes:
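Since the model is linear in (m, b, k, f), the identification can be sketched as a least-squares fit over recorded samples; this illustrates the idea rather than reproducing the exact energy-method computation used in the paper:

```python
import numpy as np

# Sketch of the identification step: with the axes decoupled, each axis
# obeys u = m*xdd + b*xd + k*x + f*sign(xd), which is linear in the
# parameters (m, b, k, f), so they follow from a least-squares fit.
def identify_axis(x, xd, xdd, u):
    """x, xd, xdd: position, velocity, acceleration samples; u: input."""
    Phi = np.column_stack([xdd, xd, x, np.sign(xd)])
    theta, *_ = np.linalg.lstsq(Phi, u, rcond=None)
    m, b, k, f = theta
    return m, b, k, f
```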
Number of data samples: 200

x-axis estimates:    71.3315   71.5524   71.6485   71.7710
  norm. std. dev.:    0.0054    0.0057    0.0063    0.0077
y-axis estimates:    71.7190   71.9321   71.9594   72.0117
  norm. std. dev.:    0.0037    0.0040    0.0040    0.0043
z-axis estimates:    80.3130   80.5184   80.5687   80.6490
  norm. std. dev.:    0.0050    0.0055    0.0063    0.0070
θ-axis estimates:   178.609   180.558   181.153   182.195
  norm. std. dev.:    0.0056    0.0062    0.0073    0.0086
The maximum normalized standard deviation is low and
the results can be considered acceptable. The model of the
system was then used to design a closed-loop controller.
Figure 12 shows the simulated step response using
PD control with friction feed-forward; the same controller
was then used experimentally to track a trapezoidal
velocity profile (Figure 13).
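The closed loop can be sketched as a simple Euler simulation of the identified model under PD control with friction feed-forward; the parameter and gain values in the usage below are illustrative, not the identified ones:

```python
import numpy as np

# Sketch of the simulated step response: PD control with friction
# feed-forward applied to the model m*xdd + b*xd + k*x + f*sign(xd) = u.
# The feed-forward term +f*sign(xd) cancels the Coulomb friction exactly
# in this idealized simulation.
def step_response(m, b, k, f, kp, kd, x_ref=1.0, dt=1e-3, T=2.0):
    x, xd = 0.0, 0.0
    xs = []
    for _ in range(int(T / dt)):
        e = x_ref - x
        u = kp * e - kd * xd + f * np.sign(xd)   # PD + friction feed-forward
        xdd = (u - b * xd - k * x - f * np.sign(xd)) / m
        xd += xdd * dt
        x += xd * dt
        xs.append(x)
    return np.array(xs)
```

With a nonzero spring constant k, pure PD control settles at kp·x_ref/(kp + k) rather than at x_ref, which is one reason an integral term (PID) appears in the figure captions.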
Fig 12: Simulated step response with PID gains.
Fig 13: Tracking a trapezoidal velocity profile.