A Hybrid Visual Servoing Approach for Robotic Laser Tattoo Removal
D. Salerno¹,³, V. Penza¹, C. Fantacci², A. Acemoglu¹, G. Muscato³, L. S. Mattos¹
¹Biomedical Robotics Lab, Advanced Robotics, Istituto Italiano di Tecnologia, Genova
²Humanoid Sensing and Perception, Istituto Italiano di Tecnologia, Genova
³DIEEI, Università degli Studi di Catania, Catania
damiano.salerno@iit.it
INTRODUCTION
According to a survey conducted in 2015 by The Harris Poll, 29% of US adults have at least one tattoo, an increase of 8% compared to four years earlier. Tattoos are especially popular among young people, with nearly half of millennials reporting having at least one [1]. However, people often change their minds: one-fourth of those with tattoos regret getting them, and the increasing demand for tattoo removal creates the need for non-invasive techniques. Recent advances have led to the use of more efficacious laser technologies, which present fewer side effects than surgical excision, dermabrasion, and chemical destruction, many of which cause damage to surrounding tissue [2]. However, the current practice has some drawbacks: (i) dependence on the manual skills of the medical doctor, (ii) pain during the procedure, and (iii) the need for more than one treatment to achieve complete tattoo removal, which affects the patient psychologically, physically, and socially.
This paper introduces DeTattoo, an innovative system to autonomously perform laser tattoo removal by automating and optimizing the treatment. The system combines robotics and image processing to (i) identify the tattoo, (ii) plan the optimal laser path to avoid thermal tissue damage, and (iii) control the robotic arm and laser scanning system [3] to accurately perform the laser tattoo removal. Image processing is used to identify salient characteristics of the unknown tissue surface and to define the trajectory to follow. The final aim of this system is to achieve the following benefits compared to the current procedure: a reduced number of treatments due to an improved removal strategy, and reduced tissue damage due to precise and automatic laser positioning, with an error on the order of one millimeter.
This work focuses on the control of a robotic arm to accurately position the laser scanning system based on visual feedback. To jointly maximize laser efficiency and minimize thermal tissue damage, the control strategy has to guarantee the correct pose of the laser with respect to the tattooed tissue while compensating for body motion. To this end, preliminary results of a hybrid visual servoing approach, validated in a simulated environment, are presented.
MATERIALS AND METHODS
The system setup is shown in Fig. 1. A robotic arm is
equipped with an RGB-D camera and a magnetically
actuated laser scanning system [3]. An eye-in-hand
configuration [4] is used in order to avoid target
occlusion during the execution of the trajectory.
A vision-based control strategy is necessary to ensure:
• A constant laser-tissue working distance;
• Laser perpendicularity to the surface;
• Compensation of body motion during the operations.
The aim of vision-based control schemes is to define a
relationship between the end-effector velocity and
changes that occur in the observed object, in order to
minimize an error [4], defined as:

$$\mathbf{e}(t) = \mathbf{s}(t) - \mathbf{s}^{*}$$

where $\mathbf{s}$ is the vector of visual and/or geometrical features identified in the object and $\mathbf{s}^{*}$ contains their desired values. In image-based visual servoing (IBVS) [4], $\mathbf{s}$ consists of a set of visual features extracted from the image, and consequently its error trajectory is defined in the image plane. IBVS relies on the online calculation of the image (or feature) Jacobian and on the distance between the target object and the camera, which may be difficult to estimate [5]. In position-based visual servoing (PBVS) [4, 5], $\mathbf{s}$ consists of a set of 3D parameters estimated from image measurements, and its error trajectory is defined in the Cartesian frame. PBVS depends on the optical parameters of the visual system and is very sensitive to calibration errors [5]. To overcome these drawbacks, a hybrid visual servoing approach was introduced in [6], combining a set of 2D and 3D features in $\mathbf{s}$. This approach combines object pose estimation and image information, improving the robustness of the control.
Taking into account the aforementioned considerations, a hybrid approach has been chosen to control the robotic arm based on the information retrieved from the RGB-D camera. In the context of our application, the object shape (i.e., the tattooed surface to scan) is unknown, and the 2D and 3D features must be detected online.
Figure 1 DeTattoo system.
2D feature extraction: RGB images are used to extract 2D features. At the current stage of development, a flat chessboard with black circles is used in place of the tattooed surface, with the circles serving as features (see Fig. 2). The initial position of the features in the image is set manually; the features are then automatically segmented and tracked during the process, as sketched below.
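The paper does not detail the segmentation method; as an illustration, here is a minimal OpenCV sketch that could segment such dark circular features with a blob detector (all parameter values are assumptions, not the paper's settings):

```python
import cv2

# Blob detector tuned for dark, circular features
# (parameter values are illustrative assumptions, not the paper's settings).
params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 0            # dark blobs on a light background
params.filterByCircularity = True
params.minCircularity = 0.8     # keep near-circular shapes only
params.filterByArea = True
params.minArea = 50
detector = cv2.SimpleBlobDetector_create(params)

def detect_circle_features(bgr_image):
    """Return the (x, y) pixel coordinates of dark circular features."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    return [kp.pt for kp in detector.detect(gray)]
```

The detected coordinates could then seed a frame-to-frame tracker to follow the features during the motion.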
3D feature extraction: depth images provided by the
RGB-D camera are used to obtain a 3D reconstruction of
the object and to extract the following 3D features:
• Orientation ($\theta\mathbf{u}$), expressed as the axis/angle parametrization of the rotation that the camera has to perform to reach the desired pose. It is extracted from the rotation part of

$${}^{c^{*}}\mathbf{T}_{c} = {}^{c^{*}}\mathbf{T}_{o}\,\left({}^{c}\mathbf{T}_{o}\right)^{-1}$$

where ${}^{c^{*}}\mathbf{T}_{o}$ and ${}^{c}\mathbf{T}_{o}$ are respectively the desired and current transformations between the camera and the object, as shown in Fig. 2. The object pose, thus, has to be computed: the origin of its reference frame is placed on the selected feature; the z axis is taken as the normal of the plane estimated from a subset of neighboring 3D points using RANSAC; the x and y axes are taken as the projections of the camera axes, ensuring orthonormality with the z axis (a minimal sketch of these steps follows the list).
• Depth ($d$), which relates the current depth ($Z$) to the desired depth ($Z^{*}$) as $d = Z - Z^{*}$.
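The paper estimates the plane with PCL and derives $\theta\mathbf{u}$ from the computed pose; the following Python sketch illustrates both steps (function names, iteration count, and inlier threshold are our assumptions, with `R_cur` and `R_des` standing for the rotation parts of the current and desired camera-object transformations):

```python
import cv2
import numpy as np

def ransac_plane_normal(points, n_iters=200, threshold=0.002, rng=None):
    """Estimate the unit normal of the dominant plane in an (N, 3) point cloud."""
    rng = rng or np.random.default_rng()
    best_inliers, best_normal = 0, None
    for _ in range(n_iters):
        # Fit a candidate plane through 3 random points
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:  # skip degenerate (collinear) samples
            continue
        n /= norm
        # Count points within `threshold` meters of the candidate plane
        inliers = np.count_nonzero(np.abs((points - p0) @ n) < threshold)
        if inliers > best_inliers:
            best_inliers, best_normal = inliers, n
    return best_normal

def theta_u(R_cur, R_des):
    """Axis/angle vector (theta * u) of the rotation from current to desired pose."""
    # cv2.Rodrigues converts the relative rotation matrix into a rotation
    # vector whose direction is u and whose magnitude is theta.
    rvec, _ = cv2.Rodrigues(R_des @ R_cur.T)
    return rvec.ravel()
```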
The resulting vector of features is expressed as:

$$\mathbf{s} = \left[\, x, \; y, \; d, \; \theta\mathbf{u} \,\right]^{T}$$
Based on this set of features, the laser positioning relies on the 2D feature coordinates and the depth, while the laser orientation with respect to the object is ensured by the $\theta\mathbf{u}$ estimation. A velocity controller is used to control the robot trajectory. The camera spatial velocity is denoted by $\mathbf{v}_{c}$. The relationship between $\dot{\mathbf{s}}$ and $\mathbf{v}_{c}$ is:

$$\dot{\mathbf{s}} = \mathbf{L}_{s}\,\mathbf{v}_{c}$$

where $\mathbf{L}_{s}$ represents the interaction matrix. Considering $\mathbf{v}_{c}$ as the input to the robot controller, and to ensure an exponential decrease of the error (i.e., $\dot{\mathbf{e}} = -\lambda\mathbf{e}$), we obtain:

$$\mathbf{v}_{c} = -\lambda\,\widehat{\mathbf{L}}_{s}^{+}\,\mathbf{e}$$

Using the robot Jacobian ($\mathbf{J}$) and the velocity twist transformation (${}^{W}\mathbf{V}_{C}$) between the camera frame {C} and the robot frame {W}, the joint velocities are computed as:

$$\dot{\mathbf{q}} = \mathbf{J}^{+}\,{}^{W}\mathbf{V}_{C}\,\mathbf{v}_{c}$$
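As a toy illustration of this control law (all array names and the gain value are our assumptions; a real implementation would use ViSP, as in the paper):

```python
import numpy as np

def visual_servo_step(e, L_hat, J, V_wc, lam=0.5):
    """One velocity-control step: feature error -> joint velocities.

    e     : (k,)   feature error s - s*
    L_hat : (k, 6) estimate of the interaction matrix L_s
    J     : (6, n) robot Jacobian (joint velocities -> camera twist)
    V_wc  : (6, 6) velocity twist transform from camera {C} to robot {W}
    lam   : control gain (0.5 is an arbitrary illustrative value)
    """
    # Camera twist enforcing exponential error decay: v_c = -lam * pinv(L_hat) @ e
    v_c = -lam * np.linalg.pinv(L_hat) @ e
    # Map the camera twist into joint velocities
    return np.linalg.pinv(J) @ (V_wc @ v_c)
```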
RESULTS
The system was simulated in Gazebo with ROS Kinetic on Ubuntu 16.04. It is composed of a Universal Robots UR5 arm and an Intel RealSense D435 depth camera. The ViSP library (https://visp.inria.fr/) is used for the visual servoing control, while the image processing is managed with the OpenCV and PCL (http://pointclouds.org/) libraries. The performance of the object pose estimation and of the control system was evaluated by varying the orientation of the chessboard about the x or y axis (30°, 45°, 60°) and measuring three errors: (i) the object pose error in terms of the estimated normal, (ii) the error between the current and desired 2D feature, and (iii) the depth error. $Z^{*}$ is set to 0.35 m. The execution of the system is stopped when the 2D feature error, the orientation error, and the depth error fall below predefined thresholds. The errors averaged over 12 trials are reported in Table 1.
CONCLUSION AND DISCUSSION
This paper presented a preliminary evaluation of a hybrid visual servoing approach to control the motion of a robotic arm for laser tattoo removal in a simulated environment. The proposed implementation uses a combination of online-estimated 2D and 3D features and demonstrates the potential to fulfil the DeTattoo requirements of a constant laser-tissue working distance, laser perpendicularity to the surface, and compensation of body motion. The global positioning error, on the order of sub-millimeters, and the orientation error, below 0.1°, satisfy the application requirements. Future work will focus on improving and validating this algorithm in a realistic synthetic scenario, integrating more advanced vision processing for automatic tattoo segmentation and tracking.
REFERENCES
[1] Naga, L. I., and Alster, T. S. "Laser tattoo removal: an update." American Journal of Clinical Dermatology, 18(1), 59-65, 2017.
[2] Bernstein, E. F. "Laser tattoo removal." Seminars in Plastic Surgery, 21(3). Thieme Medical Publishers, 2007.
[3] Acemoglu, A., Deshpande, N., and Mattos, L. S. "Towards a Magnetically-Actuated Laser Scanner for Endoscopic Microsurgeries." Journal of Medical Robotics Research, 3(2), 1840004, 2018.
[4] Chaumette, F., and Hutchinson, S. "Visual servo control, Part I: Basic approaches." IEEE Robotics & Automation Magazine, 13(4), 82-90, 2006.
[5] Chaumette, F., and Hutchinson, S. "Visual servo control, Part II: Advanced approaches." IEEE Robotics & Automation Magazine, 14(1), 109-118, 2007.
[6] Malis, E., Chaumette, F., and Boudet, S. "2 1/2 D visual servoing." IEEE Transactions on Robotics and Automation, 15(2), 238-250, 1999.
Table 1 Averaged errors involved in the object pose estimation (Normal) and control strategy (2D Point, Depth).

                       x axis rotation [°]          y axis rotation [°]
Errors                 30       45       60         30       45       60
normal [°]             -0.37    0.001    0.27       -1.68    -1.62    -1.73
2D feature x [mm]      -0.969   -0.192   -0.041     –        –        –
2D feature y [mm]      -0.980   –        –          –        –        –
depth [mm]             –        –        –          –        –        –
Figure 2 Reference frames and transformations involved in the computation of the object pose {O} with respect to the camera frame {C}.