End-effector precise hand-guiding for
collaborative robots
Mohammad Safeea1, Richard Béarée2, and Pedro Neto1
1University of Coimbra, 3000-033 Coimbra, Portugal,
ms@uc.pt,
2Arts et Métiers ParisTech, LSIS Lille, 8 Boulevard Louis XIV,
59046 LILLE Cedex, France
Abstract. Hand-guiding is a main functionality of collaborative robots, allowing users to rapidly and intuitively interact with and program a robot. Many applications require precise end-effector positioning during the teaching process. This paper presents a novel method for precision hand-guiding at the end-effector level. From the end-effector force/torque measurements, the hand-guiding force/torque (HGFT) is obtained by compensating for the tool's weight/inertia. Inspired by the motion properties of a passive mechanical system, a mass subjected to Coulomb/viscous friction, a control scheme was implemented to govern the linear/angular motion of the decoupled end-effector. Experimental tests were conducted on a KUKA iiwa robot in an assembly operation.
Keywords: hand-guiding, collaborative robot, end-effector
1 Introduction
Human-robot interaction has been studied over the last decades, from text-based programming to off-line programming [1] to the more recent intuitive techniques in which humans interact with robots as they interact with each other, using natural means such as speech, gestures and touch [2,3]. In the touch interaction mode the robot can be hand-guided to teach the required paths.
Hand-guiding is a representative functionality of collaborative robots, allowing unskilled users to interact with and program robots in a more intuitive way than using the teach pendant. Existing collaborative robots include hand-guiding functionality, but with limitations in terms of the accuracy required for many tasks such as assembly. For precision positioning (position and orientation of the end-effector) the teach pendant is still used (even for sensitive robots). The use of the teach pendant limits the intuitiveness of the teaching process provided by hand-guiding and it is time consuming (an important parameter on the factory floor). When using the teach pendant, the operator has to mentally visualize the different reference axes of the robot. In some scenarios, teach pendant based robot positioning can lead to undesirable collisions, which can damage sensitive equipment.
This paper presents a novel method for end-effector precision hand-guiding to be applied in collaborative robots equipped with joint torque sensors or with a force/torque sensor attached to the end-effector. A control scheme was implemented that utilizes force feedback to compensate for the end-effector weight, so that minimal effort is required from the operator to perform the hand-guiding motion. Inspired by the motion properties of a passive mechanical system, a mass subjected to Coulomb/viscous friction, the control scheme governs the linear/angular motion of the decoupled end-effector. Experimental tests were conducted on a KUKA iiwa 7 R800 robotic manipulator in an assembly operation requiring precision and fine tuning. It is demonstrated that the proposed method allows precise hand-guiding with smooth robot motion in terms of position and orientation of the end-effector.
1.1 State of the art
The importance of collaborative robots and hand-guiding teaching is well known
in the robotics community. It is still a challenge to have robots and humans shar-
ing the same space performing collaborative tasks with the required safety levels
to minimize the risk of injuries [4]. Human subjects can interact physically with a robot arm, moving and guiding the robot into the desired grasping poses, while the robot's configuration is recorded [5]. Different control schemes for human-robot cooperation have been proposed, for example physical interaction in 6 DOF space using a controller that allows asymmetric cooperation [6]. Robot assistance through hand-guiding is also used in the transport of heavy parts. In [7], the application of a human-industrial robot cooperative system to a production line is studied. Safety, operability and the assistance of human skills were studied as they relate to hand-guiding. Hand-guiding teaching for conventional industrial robots is normally sensor based. In [8], a sensorless hand-guiding method based on torque control is proposed. The dynamic model of the robot, along with the motor currents and a friction model, is used to determine the user's intention to move the end-effector instead of directly sensing the external force applied by the user.
The problem of Cartesian impedance control of a redundant robot executing a cooperative task with a human has been addressed in [9]. Redundancy was used to keep the robot's natural behavior as close as possible to the desired impedance behavior by decoupling the end-effector equivalent inertia. The authors claim that this allows easily finding a region in the impedance parameter space where stability is preserved. A method using impedance control of multiple robots to aid a human moving an object was presented in [10], where an experiment with motion along one degree of freedom (DOF) was conducted.
In [11], an approach for collision detection and reaction for an industrial manipulator with a closed control architecture and without external sensors is proposed. The only robot requirements are the joint velocities, motor currents and joint positions. Robot-assisted surgery demands high hand-guiding accuracy. An approach that enables an operator to guide a robot along a predefined geometric path, while taking the required joint constraints into account and thus implicitly considering the actuator dynamics, is proposed in [12].
2 Working principle
Assuming force feedback at the robot end-effector (the robot may have joint torque sensors or a force/torque sensor attached to the end-effector), three groups of robot hand-guiding motion are considered, Fig. 1:
1. The first motion group represents the positioning (with linear displacements) of the end-effector along the X, Y and Z axes of the robot base frame;
2. The second motion group is used for orienting the end-effector in the Cartesian space;
3. The third motion group is used to rotate the end-effector around its own axis.
These motion groups are introduced because they are the most intuitive for humans when performing a motion that requires precision.
Fig. 1. The proposed robot motion groups for precision hand-guiding
Let us define the hand-guiding force as the force applied by the operator at the end-effector for linear positioning, and the hand-guiding moment as the moment applied by the operator at the end-effector for angular positioning. The force/moment measurements represent the forces/moments due to (1) the end-effector weight, (2) the inertial forces/moments due to the acceleration of the end-effector, and (3) the external hand-guiding force/moment (applied by the operator to achieve hand-guiding). To simplify the calculations we omit the inertial forces/moments due to end-effector acceleration, because in precise hand-guiding applications these inertial forces are relatively small compared to the end-effector weight and to the external forces/moments. The measured forces and torques can therefore be approximated as being due to the hand-guiding forces and the weight of the end-effector. The components of the hand-guiding force described relative to the robot base frame are used as inputs for the proposed linear motion controller, i.e., for positioning the end-effector according to the robot base frame.
2.1 Hand-guiding force
The components of the hand-guiding force described in the robot base frame, $f_b = (f_x, f_y, f_z)$, serve as input to move the robot end-effector along the $X$, $Y$ or $Z$ directions of the robot base (1st motion group). The maximum of the components $(f_x, f_y, f_z)$ is used to calculate the control command, so that the end-effector is controlled along one axis at a time. The hand-guiding forces $(f^e_x, f^e_y, f^e_z)$ described in the end-effector reference frame are calculated by subtracting the weight of the end-effector from the measured forces:

$$\begin{bmatrix} f^e_x \\ f^e_y \\ f^e_z \end{bmatrix} = \begin{bmatrix} u^e_x \\ u^e_y \\ u^e_z \end{bmatrix} - R^e_b \begin{bmatrix} 0 \\ 0 \\ w \end{bmatrix} \tag{1}$$
where $(u^e_x, u^e_y, u^e_z)$ are the components of the end-effector force/torque measurements, described in the sensor frame (in this study the frame of the force/torque measurements is the same as the frame of the end-effector; otherwise a constant transform between the two frames can be introduced and the described methodology is still valid). Here, $w$ is the weight of the end-effector and $R^e_b$ is the rotation matrix from the base frame to the end-effector frame. This matrix is obtained from the transpose of the rotation matrix from the end-effector to the base frame:

$$R^e_b = \left(R^b_e\right)^T \tag{2}$$

where $R^b_e$ is obtained from the direct kinematics.
The hand-guiding force $f_b$ described in the base frame is calculated from:

$$f_b = R^b_e \begin{bmatrix} f^e_x \\ f^e_y \\ f^e_z \end{bmatrix} \tag{3}$$

The force command $f$ sent to the control algorithm is:

$$f = \begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{bmatrix} f_b \tag{4}$$

where the following conditions are respected:

a  b  c  Condition
1  0  0  $|f_x| > |f_y|$ and $|f_x| > |f_z|$
0  1  0  $|f_y| > |f_x|$ and $|f_y| > |f_z|$
0  0  1  $|f_z| > |f_x|$ and $|f_z| > |f_y|$
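To make the pipeline of Eqs. (1)-(4) concrete, the following minimal Python sketch compensates the measured force for the end-effector weight, rotates it to the base frame and keeps only the dominant component. It is an illustration only; function and variable names are not from the paper, and the sign convention for $w$ follows Eq. (1) as printed.

```python
import numpy as np

def force_command(u_e, R_b_e, w):
    """Sketch of Eqs. (1)-(4): force command from end-effector F/T measurements.

    u_e   : measured force in the sensor/end-effector frame, shape (3,)
    R_b_e : rotation matrix from end-effector frame to base frame (direct kinematics)
    w     : end-effector weight, sign convention as in Eq. (1)
    """
    R_e_b = R_b_e.T                                # Eq. (2)
    f_e = u_e - R_e_b @ np.array([0.0, 0.0, w])    # Eq. (1): weight compensation
    f_b = R_b_e @ f_e                              # Eq. (3): hand-guiding force in base frame
    f = np.zeros(3)
    k = np.argmax(np.abs(f_b))                     # Eq. (4): keep the dominant component,
    f[k] = f_b[k]                                  # so motion is along one axis at a time
    return f
```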
2.2 Hand-guiding moment
For end-effector orientation (2nd and 3rd motion groups) the hand-guiding moment $m^e$ shall be calculated from the end-effector force/torque measurements:
$$m^e = \begin{bmatrix} \tau^e_x \\ \tau^e_y \\ \tau^e_z \end{bmatrix} - \begin{bmatrix} \mu^e_x \\ \mu^e_y \\ \mu^e_z \end{bmatrix} \tag{5}$$
where $\tau^e_x, \tau^e_y, \tau^e_z$ are measurements of the three components of torque from the end-effector force/torque measurements, and $\mu^e_x, \mu^e_y, \mu^e_z$ are the components of the moment due to the weight of the end-effector, described in the end-effector frame:

$$\begin{bmatrix} \mu^e_x \\ \mu^e_y \\ \mu^e_z \end{bmatrix} = \begin{bmatrix} 0 & -z^e_c & y^e_c \\ z^e_c & 0 & -x^e_c \\ -y^e_c & x^e_c & 0 \end{bmatrix} R^e_b \begin{bmatrix} 0 \\ 0 \\ w \end{bmatrix} \tag{6}$$
where $(x^e_c, y^e_c, z^e_c)$ are the coordinates of the center of mass of the end-effector, described in the end-effector reference frame.
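Since the matrix in Eq. (6) is the skew-symmetric cross-product matrix of the center-of-mass coordinates, the compensation of Eqs. (5)-(6) reduces to a cross product. A minimal numpy sketch, with all names hypothetical:

```python
import numpy as np

def hand_guiding_moment(tau_e, c_e, R_b_e, w):
    """Sketch of Eqs. (5)-(6): hand-guiding moment in the end-effector frame.

    tau_e : measured torque in the sensor/end-effector frame, shape (3,)
    c_e   : center of mass of the end-effector in its own frame, (x_c, y_c, z_c)
    R_b_e : rotation matrix from end-effector frame to base frame
    w     : end-effector weight, same sign convention as Eq. (1)
    """
    f_w_e = R_b_e.T @ np.array([0.0, 0.0, w])  # weight force rotated into the EEF frame
    mu_e = np.cross(c_e, f_w_e)                # Eq. (6): S(c_e) f_w_e = c_e x f_w_e
    return tau_e - mu_e                        # Eq. (5): compensated hand-guiding moment
```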
In the 2nd motion group, the end-effector axis $z_{eef}$ is oriented in space, Fig. 1. For this type of motion the input to the controller is calculated from the hand-guiding moment. The input command is the vector $m_{xy}$. To calculate this vector, the vector $m^e_{xy}$ shall be calculated first, where it represents the component of the hand-guiding moment in the XY plane of the end-effector frame, Fig. 2. The components of the vector $m^e_{xy}$ are described in the sensor reference frame. This vector is calculated from:

$$m^e_{xy} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} m^e \tag{7}$$

The input command for the 2nd motion group is calculated from:

$$m_{xy} = R^b_e \, m^e_{xy} \tag{8}$$

where $m_{xy}$ represents the vector $m^e_{xy}$ after being transformed to the base frame.
For the 3rd motion group, the end-effector is allowed to rotate around its axis $z_{eef}$. For this type of motion the input command to the controller is the vector $m_z$. To calculate this vector, the vector $m^e_z$ shall be calculated first, where it represents the component of the hand-guiding moment along the $Z$ direction of the end-effector frame, as shown in Fig. 2 (at right). This vector is calculated from:

$$m^e_z = m^e - m^e_{xy} \tag{9}$$

The controller command for the 3rd motion group is calculated from:

$$m_z = R^b_e \, m^e_z \tag{10}$$

where $m_z$ represents the vector $m^e_z$ after being transformed to the base frame.
Only one of the 2nd and 3rd motion groups is allowed to be performed at a time. The motion command vector $m$ sent to the control algorithm is calculated from:

$$m = \begin{cases} m_{xy} & \text{if } \|m_{xy}\| > \|m_z\| \\ m_z & \text{if } \|m_z\| > \|m_{xy}\| \end{cases} \tag{11}$$
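The selection between the two rotational motion groups, Eqs. (7)-(11), can be sketched as follows (illustrative Python, names not from the paper):

```python
import numpy as np

def moment_command(m_e, R_b_e):
    """Sketch of Eqs. (7)-(11): moment command for the 2nd or 3rd motion group."""
    m_xy_e = np.array([m_e[0], m_e[1], 0.0])   # Eq. (7): XY-plane component in EEF frame
    m_z_e = m_e - m_xy_e                       # Eq. (9): component along z_eef
    m_xy = R_b_e @ m_xy_e                      # Eq. (8): transform to base frame
    m_z = R_b_e @ m_z_e                        # Eq. (10)
    # Eq. (11): only the dominant motion group is active at a time
    return m_xy if np.linalg.norm(m_xy) > np.linalg.norm(m_z) else m_z
```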
Fig. 2. Hand-guiding moment in sensor reference frame for 2nd motion group (at left)
and for the 3rd motion group (at right)
3 Controller
The robot is controlled at the end-effector level, so that we consider the decoupled end-effector as a mass moving under the effect of Coulomb and viscous friction. The equation of linear motion of the center of gravity of the moving mass is:

$$m\ddot{x} + b\dot{x} + f_r = f \tag{12}$$

where $\ddot{x}$ is the linear acceleration of the center of mass, $\dot{x}$ is the linear velocity of the center of mass, $m$ is the mass, $b$ is the damping coefficient, $f_r$ is the Coulomb friction, and $f$ is the external force acting on the mass. The equation of angular rotation around the center of gravity of the mass is:

$$i\ddot{\theta} + \beta\dot{\theta} + \tau_r = \tau \tag{13}$$

where $\ddot{\theta}$ is the angular acceleration around the rotation axis, $\dot{\theta}$ is the angular velocity, $i$ is the moment of inertia around the rotation axis, $\beta$ is the damping coefficient, $\tau_r$ is the torque due to Coulomb friction, and $\tau$ is the external torque.

We consider that the Coulomb and viscous friction effects are much bigger than the effect of the inertia of the mass. In this context, the inertial terms of the previous equations are omitted (Fig. 3), so that equation (12) becomes:

$$b\dot{x} + f_r = f, \quad |f| > |f_r| \tag{14}$$

and:

$$\dot{x} = 0 \quad \text{otherwise} \tag{15}$$
where $|f|$ is the absolute value of the external force. Equation (13) becomes:

$$\beta\dot{\theta} + \tau_r = \tau, \quad |\tau| > |\tau_r| \tag{16}$$

and:

$$\dot{\theta} = 0 \quad \text{otherwise} \tag{17}$$

where $|\tau|$ is the absolute value of the external torque.
Fig. 3. Damper-mass mechanical system
3.1 Robot control
The linear motion of the end-effector is controlled by:

$$v = \begin{cases} \dfrac{f\,(\|f\| - \|f_r\|)}{b\,\|f\|} & \|f\| > \|f_r\| \\ 0 & \text{otherwise} \end{cases} \tag{18}$$

where $v$ is the linear velocity vector of the end-effector, $f$ is the force vector command issued to the controller, $f_r$ is the sensitivity threshold (motion is only executed when the force command reaches a value above this threshold), and $b$ is the motion constant that defines the rate of conversion from force measurement to velocity.

The angular motion of the end-effector is controlled by:

$$\omega = \begin{cases} \dfrac{m\,(\|m\| - \|\tau_r\|)}{\beta\,\|m\|} & \|m\| > \|\tau_r\| \\ 0 & \text{otherwise} \end{cases} \tag{19}$$

where $\omega$ is the angular velocity vector of the end-effector, $m$ is the moment command issued to the controller, $\tau_r$ is the sensitivity threshold (motion is only executed when the moment command reaches a value above this threshold), and $\beta$ is the motion constant that defines the rate of conversion from moment measurement to velocity.

The end-effector velocity $\dot{x}$ is:

$$\dot{x} = \begin{bmatrix} v \\ \omega \end{bmatrix} \tag{20}$$
After calculating the linear and angular velocity of the end-effector, the joint velocities are calculated using the pseudoinverse of the Jacobian $J$:

$$\dot{q} = J^{+}\dot{x} \tag{21}$$

From the joint velocities and by using an iterative solution, the state space vector can be calculated, which is used to control the manipulator, Fig. 4.
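One cycle of the control algorithm of Fig. 4 could be sketched as follows; an illustrative Python fragment assuming the Jacobian at the current configuration is available from the kinematics (all names are hypothetical, and the 8.5 ms loop period of Section 4 would set `dt`):

```python
import numpy as np

def deadband_velocity(cmd, threshold, gain):
    """Eqs. (18)-(19): friction-inspired mapping from a command vector to a velocity."""
    n = np.linalg.norm(cmd)
    if n <= threshold:
        return np.zeros(3)                     # below the sensitivity threshold: no motion
    return cmd * (n - threshold) / (gain * n)

def control_step(q, f, m, J, f_r, b, tau_r, beta, dt):
    """One cycle of the control loop (Fig. 4): from commands to the next joint state.

    q : current joint positions; J : 6xn Jacobian at q;
    f, m : force/moment commands in the base frame (Sections 2.1 and 2.2).
    """
    v = deadband_velocity(f, f_r, b)           # Eq. (18): linear velocity
    omega = deadband_velocity(m, tau_r, beta)  # Eq. (19): angular velocity
    x_dot = np.concatenate([v, omega])         # Eq. (20): end-effector velocity
    q_dot = np.linalg.pinv(J) @ x_dot          # Eq. (21): joint velocities via pseudoinverse
    return q + dt * q_dot                      # iterative update of the joint state vector
```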
Fig. 4. Robot control algorithm
4 Experiments and results
The experiments were performed on a KUKA iiwa 7 R800 robotic manipulator in an assembly operation requiring precision positioning and fine tuning. The off-the-shelf hand-guiding functionality provided by KUKA lacks the precision that is crucial for fine and precise positioning of the end-effector in many applications. The proposed method was implemented in a control loop updated every 8.5 milliseconds. The controller compensates automatically for the piece weight, such that when no force is applied by the operator the piece is held in place by the robot. Different tests demonstrated that in the 1st motion group the robot position along $X$, $Y$ and $Z$ can be precisely adjusted, in the 2nd motion group the end-effector rotation around $X$, $Y$ and $Z$ is achieved, while in the 3rd motion group we can rotate around the end-effector axis with accuracy, Fig. 5. Fig. 6 shows the end-effector position along the $y$ axis according to a given applied force. The end-effector displacement when the force is applied is clearly visible. The accuracy of the proposed hand-guiding method was measured and compared with a nominal path in the $xy$ plane, Fig. 7. The operator moved the end-effector along the $y$ axis (250 mm), achieving a maximum error of 0.09 mm along the $x$ direction. In summary, it was demonstrated that the precision hand-guiding abilities of the proposed method work well and are useful in collaborative robotics.
Fig. 5. Precision hand-guiding for assembly operation
5 Conclusion and future work
A novel precision hand-guiding method at the end-effector level for collaborative robots was proposed. Experimental tests demonstrated that with the proposed method it is possible to hand-guide the robot with accuracy, with no vibration, and in a natural way by taking advantage of the proposed three motion groups. It was also demonstrated that the system is intuitive and compares favorably with the off-the-shelf KUKA hand-guiding. Future work will focus on optimizing the control by utilizing the redundancy of the KUKA iiwa to achieve better stiffness while hand-guiding and to avoid collisions with obstacles during hand-guiding. In addition, the control can be extended to give feedback to the user when approaching the joint limits.
Acknowledgments
This research was partially supported by the Portugal 2020 project DM4Manufacturing POCI-01-0145-FEDER-016418 by UE/FEDER through the program COMPETE 2020, the European Union's Horizon 2020 research and innovation programme under grant agreement No 688807 - ColRobot project, and the Portuguese Foundation for Science and Technology (FCT) SFRH/BD/131091/2017.
Fig. 6. End-effector position along the y axis according to the applied force
Fig. 7. Nominal end-effector path against the real robot end-effector path in plane xy
References
1. Neto, P., Mendes, N.: Direct off-line robot programming via a common CAD
package. Robotics and Autonomous Systems, vol. 61, no. 8, pp. 896-910 (2013).
doi:10.1016/j.robot.2013.02.005
2. Neto, P., Pereira, D., Pires, J.N., Moreira, A.P.: Real-time and continuous hand
gesture spotting: An approach based on artificial neural networks. 2013 IEEE In-
ternational Conference on Robotics and Automation (ICRA), pp. 178-183 (2013).
doi:10.1109/ICRA.2013.6630573
3. Simao, M.A., Neto, P., Gibaru, O.: Unsupervised Gesture Segmentation by Motion
Detection of a Real-Time Data Stream. IEEE Transactions on Industrial Informat-
ics, vol. 13, no. 2, pp. 473-481 (2017). doi:10.1109/TII.2016.2613683
4. Haddadin, S., Albu-Schaffer, A., Hirzinger, G.: Requirements for safe robots: Mea-
surements, analysis and new insights. Int. J. of Robotics Research, vol. 28, no. 11/12,
pp. 1507-1527 (2009). doi:10.1177/0278364909343970
5. Balasubramanian, R., Xu, L., Brook, P.D., Smith, J.R., Matsuoka, Y.: Physical
Human Interactive Guidance: Identifying Grasping Principles From Human-Planned
Grasps. IEEE Transactions on Robotics, vol. 28, no. 4, pp. 899-910 (2012). doi:
10.1109/TRO.2012.2189498
6. Whitsell, B., Artemiadis, P.: Physical Human-Robot Interaction (pHRI) in 6 DOF
with Asymmetric Cooperation. IEEE Access, vol. 5, pp. 10834-10845 (2017). doi:
10.1109/ACCESS.2017.2708658
7. Fujii, M., Murakami, H., Sonehara, M.: Study on Application of a Human-Robot
Collaborative System Using Hand-Guiding in a Production Line. IHI Engineering
Review, vol. 49, no. 1, pp. 24-29 (2016).
8. Lee, S.D., Ahn, K.H., Song, J.B.: Torque control based sensorless hand guiding
for direct robot teaching. 2016 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS), pp. 745-750 (2016). doi:10.1109/IROS.2016.7759135
9. Ficuciello, F., Villani, L., Siciliano, B.: Variable Impedance Control of Redundant
Manipulators for Intuitive Human-Robot Physical Interaction. IEEE Transactions
on Robotics, vol. 31, no. 4, pp. 850-863 (2015). doi:10.1109/TRO.2015.2430053
10. Kosuge, K., Yoshida, H., Fukuda, T.: Dynamic control for robot-human collabora-
tion. Proceedings of the 2nd IEEE International Workshop on Robot and Human
Communication, pp. 398-401 (1993). doi:10.1109/ROMAN.1993.367685
11. Geravand, M., Flacco, F., De Luca, A.: Human-Robot Physical Interaction and
Collaboration using an Industrial Robot with a Closed Control Architecture. 2013
IEEE International Conference on Robotics and Automation, pp. 4000-4007 (2013).
doi:10.1109/ICRA.2013.6631141
12. Hanses, M., Behrens, R., Elkmann, N.: Hand-guiding robots along predefined ge-
ometric paths under hard joint constraints. 2016 IEEE 21st International Confer-
ence on Emerging Technologies and Factory Automation (ETFA), pp. 1-5 (2016).
doi:10.1109/ETFA.2016.7733600