Tactile-based Object Center of Mass Exploration and Discrimination
Kunpeng Yao, Mohsen Kaboli*, and Gordon Cheng
Abstract— In robotic tasks, object recognition and discrimination can be realized according to physical properties such as color, shape, stiffness, and surface texture. However, discrimination based on these external properties fails if they are similar or even identical. In this case, internal properties of the objects can be considered, for example, the center of mass. The center of mass is an important inherent physical property of objects; however, due to the difficulties in its determination, it has never been applied in object discrimination tasks. In this work, we present a tactile-based approach to explore the center of mass of rigid objects and apply it in robotic object discrimination tasks. This work comprises three aspects: (a) continuous estimation of the target object's geometric information, (b) exploration of the center of mass, and (c) object discrimination based on center of mass features. Experimental results show that, following our proposed approach, the center of mass of the experimental objects can be accurately estimated, and objects with identical external properties but different mass distributions can be successfully discriminated. Our approach is also robust against the textural properties and stiffness of the experimental objects.
I. INTRODUCTION AND RELATED WORK
Tactile object recognition is of great significance for robots' interaction with the environment [1]. Objects can usually be distinguished by their physical properties, such as shape, surface texture, and stiffness [2]–[6]. However, if the target objects have identical external physical properties, the above-mentioned features cannot be applied in discrimination tasks. To determine whether the target objects are identical, the robot should also verify their internal properties. The center of mass (CoM) is an important inherent physical property of an object: it reveals the object's mass distribution and, for a rigid object, is a fixed position with respect to the body. However, the CoM has never been used in robotic object recognition and discrimination tasks, due to the complexity and difficulty of its determination.
A. Related Work
Several previous works have addressed the estimation of a target object's center of mass. One approach estimates the CoM through robotic manipulation tasks. Atkeson et al. [7], [8] estimated the CoM of the load carried by a robotic arm during a manipulation task: the CoM position is formulated as a parameter of the robot-load system and estimated by solving the dynamic equations during manipulation. However, this approach requires accurate models and parameters of the robotic system, and the experimental results were inaccurate due to unmodeled dynamics; it also suffers from the influence of the gravitational torque. Another approach estimates the CoM by executing tipping actions on the object. In [9]–[11], the CoM of the target object is obtained by determining the "gravity equi-effect planes" or the "passing-C.M. lines": a robotic arm equipped with force sensors tips the object with its fingertip, and the planes or lines that pass through the CoM are calculated from the finger position and force information recorded during the tipping movement. However, this approach requires the estimation of the fingertip vector and an accurate representation of lines and planes, which is computationally expensive. Moreover, the precise shape, position, and orientation of the target object must be given as prior knowledge, and the target object must maintain stable contact with the table surface, without any slip, while it is being tilted by the fingertip. In this work, we propose a purely tactile-based approach to determine the CoM of a target object; it is model-free and of low computational complexity, and can therefore be applied in online robotic tasks.

Kunpeng Yao, Mohsen Kaboli, and Gordon Cheng are with the Institute for Cognitive Systems, Department of Electrical and Computer Engineering, Technical University of Munich, Germany. *Mohsen Kaboli is the corresponding author. Email: mohsen.kaboli@tum.de. Video to this paper: http://web.ics.ei.tum.de/~mohsen/videos/Humanoids2017.mp4

Fig. 1: System description: the UR10 robotic arm, the Robotiq gripper, and the OptoForce 3-axis force sensors (OMD-20-SE-40N), annotated with the sensor (SCF), object (OCF), and world (WCF) coordinate frames.
B. Contribution
We propose a tactile-based approach to explore the CoM of a rigid object in an unknown workspace and apply the CoM feature in robotic object discrimination tasks.
• We first propose a strategy to continuously estimate the geometric information of regularly shaped target objects in an unknown workspace.
• Then, we present a tactile-based approach to explore the CoM of rigid objects by applying lifting actions in a three-sensing-point configuration.
• Furthermore, we formulate the CoM information as a constant physical feature of the object, which can be applied in object discrimination or identification tasks.
II. SYSTEM DESCRIPTION
The robotic system (see Fig. 1) is composed of a 6-DoF UR10 (Universal Robots) robotic arm, a Robotiq 3-finger industrial gripper, and a set of OptoForce tactile sensors.

The gripper has three fingers, denoted as A, B, and C. The OptoForce OMD-20-SE-40N 3D tactile sensor set has four sensor nodes, each of which can measure 3D forces on its surface. A corresponding Sensor Coordinate Frame (SCF) is defined for each sensor node at the vertex of its external semi-spherical surface¹ (see Fig. 1). Three sensor nodes were installed on the fingertips of the gripper, one per finger. The sensing points of the fingers (i.e., the fingertips equipped with tactile sensor nodes) are denoted as P_A, P_B, and P_C, respectively; P_B and P_C are on the same side and symmetric with respect to P_A.

¹The subscripts "S", "W", and "O" denote the SCF, WCF, and OCF (Object Coordinate Frame), respectively; "+" and "−" denote the positive and negative directions of the corresponding axis.

The World Coordinate Frame (WCF) is a Cartesian coordinate system located at the origin of the workspace. The table surface is set as the reference plane. The workspace is a cuboid volume above the reference plane, [x_W^−, x_W^+] × [y_W^−, y_W^+] × [z_W^−, z_W^+], and a spatial position inside it is denoted as (x_W, y_W, z_W).

We use the normal force f_i^n to denote the amplitude of the force component in the Z_S^+ direction, which is also referred to as the grasping force. The tangential force can be decomposed into two components: the one along the Z_W^+ axis is called the lifting force and is denoted as f_i^l, while the other component is neglected, since it does not influence the analysis.
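To make this decomposition concrete, the following minimal Python sketch (an illustration, not part of the paper; the function name and the rotation R_WS from SCF to WCF are assumptions) splits a raw 3-axis reading into the grasping and lifting components defined above.

import numpy as np

def decompose_force(f_sensor, R_WS):
    # f_sensor: (3,) force measured in the Sensor Coordinate Frame (SCF).
    # R_WS:     (3, 3) rotation from SCF to WCF, assumed available from
    #           the arm's forward kinematics.
    f_sensor = np.asarray(f_sensor, float)
    f_n = f_sensor[2]                          # grasping force: component along Z_S^+
    f_world = R_WS @ f_sensor                  # reading expressed in the WCF
    z_s_world = R_WS[:, 2]                     # Z_S axis expressed in the WCF
    f_tangential = f_world - f_n * z_s_world   # tangential part of the contact force
    f_l = f_tangential[2]                      # lifting force: component along Z_W^+
    return f_n, f_l                            # remaining tangential part is neglected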
III. METHODOLOGY
We first introduce the estimation of the geometric information of target objects in Sec. III-A; the tactile-based criterion for the CoM and the CoM exploration strategy are presented in Sec. III-B and Sec. III-C, followed by the extraction of the CoM feature in Sec. III-D.
A. Tactile-based Object Geometric Information Estimation
We explain how to continuously explore the shape of a
quadrilaterally-faced hexahedron object in order to estimate
its geometric information, which is required for the CoM
exploration.
Tactile information detected by the sensor node is used as feedback to control the movement of the robot. Two kinds of points are of interest during exploration: a contact point, detected when the exploratory sensor touches the object surface, i.e., as soon as the resultant force measured by the exploratory sensor exceeds a predetermined small threshold (|f_A| > f̄_ε); and a separate point, detected when the exploratory sensor detaches from the contacted object, i.e., as soon as the instantaneous resultant force measured by the exploratory sensor drops below a threshold (|f_A| < f_ε).
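As a minimal illustration of this event logic (the threshold values and the force-reading interface are assumptions, not taken from the paper), the two thresholds form a simple hysteresis:

import numpy as np

F_CONTACT = 0.5    # N, upper threshold f̄_ε (assumed value)
F_SEPARATE = 0.2   # N, lower threshold f_ε (assumed value)

def detect_event(force_reading, in_contact):
    # force_reading: (3,) resultant force currently measured on the
    #                exploratory sensor (hypothetical interface).
    # in_contact:    whether the sensor has been touching the object.
    magnitude = np.linalg.norm(force_reading)
    if not in_contact and magnitude > F_CONTACT:
        return "contact"     # contact point: record the current WCF position
    if in_contact and magnitude < F_SEPARATE:
        return "separate"    # separate point: an object edge has been crossed
    return None              # no event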
The proposed approach can be applied to quadrilaterally-faced hexahedra, i.e., objects whose six faces are all quadrilaterals. Each face can be defined by detecting three non-collinear contact points on it. By then exploring the contacted face, moving from the contact points across the plane in different directions, the robot can collect separate points on the edges of the face, where each edge is determined by two separate points. As soon as all edges are known, the vertices of this side face are obtained, and the geometric information of the target object, such as location, orientation, and shape, can be estimated from all of its vertices.

Fig. 2: The continuous exploration of a cuboidal experimental object in an unknown workspace.
Here we take a cuboidal object as an example (see Fig. 2) and explain the continuous exploration process in detail. To increase the available exploration space, only one finger, referred to as the exploratory finger, is stretched out, while the other two fingers are curled up. The sensor node installed on the fingertip of the exploratory finger is referred to as the exploratory sensor. Without loss of generality, define the X_W^− axis as the exploratory direction; the plane perpendicular to it at x = x_W^+ is the starting plane, and the plane at x = x_W^− is the target plane. The gripper first stretches out the exploratory finger (e.g., finger A), moves the exploratory sensor to a starting position P_S1 on the starting plane, and orients the exploratory sensor towards the X_W^− direction. The robot then pushes the exploratory sensor along the exploratory direction and tries to detect contact with the target object. If contact is detected, the current WCF coordinate of the exploratory sensor is recorded as the first contact point P_C1, and the robot immediately stops its movement. If no contact is detected by the time the sensor node reaches the target plane, the robot retreats, selects another starting position, and repeats the exploration until a contact point is detected. Since the side faces of a cuboid are perpendicular to the reference plane, two contact points on the same side face are sufficient to represent that plane under this condition. Starting from P_C1, the robot finger slides² the exploratory sensor node upwards until a separate point P_E1 is detected, indicating that an edge of the object has been found. The robot then retreats the exploratory sensor to the starting position P_S1 and chooses another starting position P_S2, selected by moving a horizontal distance from P_S1 on the starting plane. Starting from P_S2, the robot repeats the same movement towards the X_W^− direction and tries to detect another contact point P_C2. Setting the line P_C1 P_C2 as the trajectory of the exploratory sensor, the robot slides the exploratory sensor node from P_C1 and from P_C2 to obtain two further separate points, P_E2 and P_E3, on the two vertical edges of this side face. Using these collected points (two contact points to determine the plane, and three separate points to determine three edges, respectively), this side face can be fully reconstructed and all four of its vertices obtained. The robot then moves to the other side of the workspace and explores in the X_W^+ direction to obtain the four vertices of the opposite side face, following the same process as described above. The geometric information estimation is complete as soon as all vertices are obtained.

²To slide means that the exploratory sensor node is pressed against the face of the target object so as to maintain a non-zero contact force during the entire movement.

Represent the set of vertices as {V_i}, i = 1, 2, ..., N (N = 8 for hexahedra), with the coordinates of V_i denoted (V_i^x, V_i^y, V_i^z). The centroid O of the object is calculated as

O = (x_o, y_o, z_o) = (1/N) (Σ_i V_i^x, Σ_i V_i^y, Σ_i V_i^z), i = 1, 2, ..., N.

Since the object lies on the reference plane X_W-O_W-Y_W, its location in the workspace is the projection of its centroid onto this plane, i.e., (x_o, y_o). We define the main axis l of the target object as the line that passes through its centroid, is parallel to the reference plane, and along which the object has its largest length. The orientation of the object is represented by the included angle θ ∈ [−π/2, π/2] between its main axis and the X_W^+ axis.
The robot rotates the gripper according to θ, such that the line passing through the two sensing points P_B and P_C is parallel to l. For a cuboid, the length, width, and height can be obtained directly by calculating the distances between adjacent vertices. The origin of its OCF can be located at an arbitrary vertex, with the axes X_O, Y_O, and Z_O defined along the length, width, and height edges, respectively.
B. Center of Mass Determination

We propose to determine the CoM of the target rigid object by applying lifting actions. Consider the process of lifting a steelyard balance as an example: it can be lifted up and maintain balance, without rotation, if and only if the lifting force passes through its CoM.

Represent the CoM of the target object in the OCF as C = (c_x, c_y, c_z). Each component can be determined by searching along the corresponding OCF axis for a point of application of the lifting force through which the object can be lifted while remaining in equilibrium. We take the determination of c_x as an example. In this work, we discuss the three-sensing-point case (each contact point can sense force signals), satisfying that (1) two sensing points (e.g., P_B and P_C) are aligned on one side of the grasped object, and (2) their positions are symmetric with respect to the third sensing point (e.g., P_A) on the opposite side of the object. This condition is satisfied here by controlling the gripper in pinch mode. When applying a lifting action, the gripper grasps the object at a lifting position and lifts it up by a small distance h. At equilibrium, both the force condition and the torque condition are satisfied, i.e., the resultant force and the resultant torque applied on the object are both zero. We show that during lifting, the force condition can be checked via linear slip detection, whereas the torque condition can be verified by detecting rotation of the target object.
1) Force Condition Verification via Linear Slip Detection: According to Coulomb's law of friction, the largest friction force that the gripper can provide is µ · f^n, where µ is the friction coefficient of the contact surface, treated as constant, and f^n the applied normal force. If the grasping force applied by the gripper is insufficient, the resultant lifting force F = Σ_i f_i^l cannot balance the gravity, and linear slip occurs at the grasping points, i.e., F < G − f_N, where G is the weight of the object and f_N is the supporting force from the table (if any). The force condition is necessary but not sufficient for the equilibrium state; it can be satisfied by regulating the grasping force. The force regulation is realized by linear slip detection at each of the contact points: we detect linear slip by measuring the rate of change of the tangential force on the contact surface [12].

If a slip signal is detected at any of the contact points, the applied grasping force is considered insufficient. The gripper then increases its grasping force by further closing its fingers and tries to lift the object again. The robot repeats this procedure until the target object can be lifted to the target height without linear slip, indicating that the force condition is satisfied. The next step is to check the torque condition.
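The regulation loop can be sketched as follows; the gripper interface (lift_object, slip_detected, release, increase_grasp_force) is a hypothetical placeholder for the actual controllers, and the slip test follows the idea of [12] by thresholding the rate of change of the tangential force.

import numpy as np

def linear_slip(tangential_forces, dt, rate_threshold):
    # Detect linear slip from the rate of change of the tangential force
    # on the contact surface (the threshold value is an assumption).
    rates = np.abs(np.diff(np.asarray(tangential_forces, float))) / dt
    return bool(np.any(rates > rate_threshold))

def lift_without_slip(gripper, target_height, max_attempts=10):
    # Regulate the grasping force until the object is lifted to
    # target_height without linear slip (force condition satisfied).
    for _ in range(max_attempts):
        gripper.lift_object(target_height)   # try to lift the object by h
        if not gripper.slip_detected():      # slip checked at every contact point
            return True                      # force condition satisfied
        gripper.release()                    # set the object back down
        gripper.increase_grasp_force()       # close the fingers a bit further
    return False                             # force condition not satisfiable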
2) Torque Condition Verification via Rotation Detection: Torque is hard to measure without feedback from torque sensors. A non-zero resultant torque applied on the object causes the object to rotate about the contact point; hence we verify the torque condition by detecting rotation of the target object during the lifting process. Due to the positional symmetry, the tangential forces at P_B and P_C are equal during the entire lifting process if and only if the object is in equilibrium, i.e., the tangential force applied at P_A passes through the CoM. Represent the sequence of force signals recorded continuously at contact point i during the lifting process as f_i, i ∈ {A, B, C}. According to the analysis above, if f_B is highly similar to f_C, it can be concluded that the current lifting position is close to the real CoM of the object.

We propose to use the cross-correlation to measure the similarity of the force signal sequences, due to its robustness and sensitivity. The cross-correlation measures the correlation between two jointly stationary series and yields a normalized measurement in the range [−1, 1] (see Fig. 3). The cross-correlation criterion for checking the torque condition can be formulated as:

ρ_BC = cov(f_B, f_C) / (σ_{f_B} σ_{f_C}) ≥ δ,   ρ_BC ∈ [−1, 1], δ ∈ (0, 1),   (1)
where cov(f_B, f_C) is the cross-covariance of f_B and f_C, and σ_{f_B} and σ_{f_C} are the standard deviations of f_B and f_C, respectively. The closer ρ_BC is to 1, the higher the similarity between f_B and f_C. This criterion is independent of the absolute force values and can thus be applied to objects of different shapes, textures, and stiffness.

Fig. 3: The cross-correlations (Y axis) at different lifting positions (X axis) along the X_O axis of an experimental object: (a) ρ_BC; (b) ρ_AB and ρ_AC. At each position, the robot stably lifted the object to h = 30 mm, and the lifting forces at each contact point were recorded during lifting for analysis.
To determine whether a lifting position can be taken as the CoM, the robot first tries to lift the object at this position, regulating its grasping force via linear slip detection. Once the force condition is satisfied, the robot evaluates ρ_BC for the torque condition. If ρ_BC is close to 1, the current lifting position is considered close to the CoM.
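Eq. (1) is the zero-lag normalized cross-correlation (the Pearson correlation) of the two recorded sequences. A minimal numpy version follows, with δ as the tuning parameter:

import numpy as np

def rho(f_x, f_y):
    # Zero-lag normalized cross-correlation of two equally long
    # force sequences, as in Eq. (1); the result lies in [-1, 1].
    fx = np.asarray(f_x, float)
    fy = np.asarray(f_y, float)
    c = np.mean((fx - fx.mean()) * (fy - fy.mean()))   # cross-covariance
    return c / (fx.std() * fy.std())

def torque_condition_met(f_B, f_C, delta=0.9):
    # delta in (0, 1); 0.9 is the value used in the experiments of Sec. IV-A.
    return rho(f_B, f_C) >= delta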
C. Center of Mass Exploration
Now we explain how to search for the CoM of a regularly shaped rigid object along one dimension; the geometric information estimated in Sec. III-A is required. In the following, we take the exploration of c_x (along the X_O axis) as an example. At the target lifting position, both the force condition and the torque condition can be satisfied. The binary search algorithm, with computational complexity O(log₂ N) for N possible sampling points, is an optimal candidate for this one-dimensional search problem, and tactile feedback is used to guide the search.
In this three-sensing-point case, we show that, of the two contact points on the same side (e.g., P_B and P_C), the one closer to the real CoM of the target object senses a larger linear friction force than the other while the object is lifted by these three contact points.
Since the lifted object does not move horizontally, the normal forces satisfy f_1^n = f_2^n + f_3^n; and due to the symmetry of P_B and P_C, f_2^n = f_3^n ≜ f^n. Assume the object does not rotate about the main axis; the torque condition then gives f_1^l = f_2^l + f_3^l. Denote the ratio of f^l to f^n at contact points 2 and 3 as α and β, respectively. Since the force condition is satisfied, no linear slip occurs on the contact surface:

f_2^l = α f_2^n ≤ µ f_2^n = µ f^n,   (2)
f_3^l = β f_3^n ≤ µ f_3^n = µ f^n.   (3)

The forces along the Z_W axis balance and satisfy f_2^l + f_3^l = (mg − f_N)/2, where f_N is the supporting force from the reference plane (f_N = 0 if the object is not supported by the table). Select a reference point on the main axis, and denote by r_i, i = 1, 2, 3, the distance between each lifting force vector and the reference point; r_g denotes the distance from the weight vector mg to the reference point. According to the torque condition, r_1 f_1^l + r_2 f_2^l + r_3 f_3^l = m g r_g, with m the mass of the object and g the gravitational acceleration.

Reformulating the above equations yields:

α + β = (mg − f_N) / (2 f^n) > 0,   (4)
(r_1 + r_2) α + (r_1 + r_3) β = m g r_g / f^n,   (5)

and thus the following relationship holds:

α + β = ((r_1 + r_2) α + (r_1 + r_3) β) / C,   (6)
C = 2 m g r_g / (mg − f_N) > 0.   (7)
Without loss of generality, we assume r_3 > r_2. Since (α + β) f^n represents the minimal resultant lifting force needed to maintain the force condition in the Z_W direction, for a given f^n, α + β takes its minimal possible value. If α > 0 and β > 0, then according to Eq. 6, α + β reaches its minimum if and only if α/β = (r_1 + r_3)/(r_1 + r_2); then α > β and f_2^l > f_3^l. If α · β < 0, then α > 0 > β and |α| > |β|, and f_2^l > f_3^l is also satisfied. Fig. 3(b) shows an experimental verification of this conclusion.

This conclusion is used to determine the next lifting position. We use the cross-correlations ρ_AB and ρ_AC to evaluate the similarity of the signal sequences. The next lifting position lies in the range closer to the contact point that senses the larger tangential force. For example, if ρ_AB > ρ_AC, the next lifting position should be closer to P_B and further from P_C.

In each exploration step, the robot bisects the remaining search region and chooses its midpoint as the next lifting position, until it has found a lifting position that can be taken as the CoM.
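Putting the pieces together, the one-dimensional search can be sketched as follows. Here lift_and_record is a hypothetical routine that performs one regulated lift at position x and returns the three recorded force sequences, rho is the correlation function from the sketch after Eq. (1), and the assumption that P_B lies on the lower-x side is for illustration only.

def explore_com_x(lift_and_record, x_min, x_max, min_range=10.0, delta=0.9):
    # Binary search for the CoM component along the X_O axis, terminating
    # when the range shrinks below min_range (mm) or rho_BC exceeds delta,
    # mirroring the criteria reported in Sec. IV-A.
    while (x_max - x_min) > min_range:
        x = 0.5 * (x_min + x_max)            # bisect the remaining region
        f_A, f_B, f_C = lift_and_record(x)
        if rho(f_B, f_C) >= delta:
            return x                         # lifting position taken as the CoM
        # Otherwise move towards the contact point sensing the larger
        # tangential force:
        if rho(f_A, f_B) > rho(f_A, f_C):
            x_max = x                        # next lifting position closer to P_B
        else:
            x_min = x                        # next lifting position closer to P_C
    return 0.5 * (x_min + x_max)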
D. CoM Feature Extraction
The CoM feature is defined with respect to the OCF. Along each dimension of the OCF, the edge of the object is segmented by the corresponding CoM component into two parts. We use the superscript "+" to denote the longer segment and "−" the shorter segment. In each dimension, the CoM feature is then defined as the ratio of these two parts:

λ = (λ_x, λ_y, λ_z) = (x_O^− / x_O^+, y_O^− / y_O^+, z_O^− / z_O^+).   (8)

Each component of λ is normalized to (0, 1]. Once the OCF is determined, the CoM feature can be extracted as a constant vector.
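The feature of Eq. (8) then follows directly from the explored CoM position and the object dimensions; a small sketch with hypothetical numbers:

import numpy as np

def com_feature(com, dims):
    # com:  (3,) explored CoM position in the OCF (origin at a vertex).
    # dims: (3,) edge lengths along X_O, Y_O, and Z_O.
    com = np.asarray(com, float)
    dims = np.asarray(dims, float)
    shorter = np.minimum(com, dims - com)   # segment on the nearer side of the CoM
    longer = np.maximum(com, dims - com)    # segment on the farther side
    return shorter / longer                 # each component lies in (0, 1]

# Hypothetical example: a CoM located 162 mm along a 400 mm edge gives
# lambda_x = 162 / 238 ≈ 0.68, close to the value reported in Sec. IV-A.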
IV. EXPERIMENTAL EVALUATION
We designed two scenarios to experimentally evaluate the performance of our proposed approach. In the first scenario, the robot estimated the target object's geometric information and then explored its CoM; experimental objects of distinct textures, stiffness, and sizes were used. In the second scenario, the robot discriminated among several experimental objects according to their CoM features.
Fig. 4: Experimental objects 1–12. 'S': stiffness, from very soft (- -) to very hard (++). 'T': textural properties, from very fine (- -) to very rough (++). 'C': distance between the CoM and the geometric center; '0' means the CoM coincides with the geometric center, while '++' indicates the CoM is far from the geometric center.
Fig. 5: For multiple objects, the entire workspace is segmented into different regions, i.e., (I), (II), and (III), and the robot explores the target object located in each region.
In this work, we used cuboidal objects and only considered the 1D CoM feature λ_x (i.e., along the length edge of the object), due to hardware constraints, which mainly stem from the shape of the gripper. In addition, neither the location nor the orientation of the target object is supposed to change after the geometric information estimation. Therefore, to maintain the stability of the target object during CoM exploration, cylindrical objects and objects with curved surfaces were not considered.
A. CoM Exploration of a Single Experimental Object

In this scenario, the task of the robot is to explore the CoM of several experimental objects with different physical properties, such as shape, texture, stiffness, and CoM location. Due to the hardware constraint, it is difficult to estimate the geometric information of all target objects simultaneously. For multiple target objects (see Fig. 5), the entire workspace is therefore segmented into several regions, and the robot explores each target object in its corresponding region successively.
Here we take the object in region (II) as an example. Fig. 6 shows the reconstructed object after the geometric information estimation (see Fig. 2). The size of Object 1 was estimated as 40.2 cm × 3.6 cm × 5.8 cm, while its measured real size is 40.0 cm × 3.8 cm × 6.0 cm.
The robot then adjusted the orientation of the gripper and started to explore the CoM along the X_O axis (see Fig. 8). At each lifting position, the object was lifted to h = 30 mm above the reference plane. The CoM exploration terminates as soon as either the next search range is smaller than 10 mm or ρ_BC > δ = 0.9. The CoM component was estimated as λ_x = 0.681 within six steps, with an error range of L × 2⁻⁶ = 6.25 mm (L = 400.0 mm). The estimated CoM components λ_x of all experimental objects are listed in Table I.

Fig. 6: The reconstructed shape of the target object.

TABLE I: Explored CoM Features of Experimental Objects

Object Nr.   1      2      3      4      5      6
λ_x          0.784  0.865  0.897  0.579  0.600  0.428

Object Nr.   7      8      9      10     11     12
λ_x          0.998  0.291  0.833  0.909  0.218  0.769
Fig. 7: Three deliberately manufactured experimental objects with identical stiffness, surface textures, and sizes, but distinct CoMs (marked by the red region): the CoM offset from the geometric center is '0' for Object 1, '+' for Object 2, and '++' for Object 3.
B. CoM-based Object Discrimination
Three manufactured objects (see Fig. 7) were used in this scenario (see Fig. 1). They have identical external physical properties, while their CoMs were made distinct by deliberately adjusting their inner structures.

The robot collected the CoM feature of each object over 20 trials. The mean values of the explored λ_x of each experimental object are listed in Table II. This 1D dataset was clustered by segmenting the estimated kernel density at its local minima, using a Gaussian kernel with a bandwidth of 0.025 for the kernel density estimation (KDE); a minimal sketch of this clustering step is given after Table II. The results show that the sampled CoM features can be clustered into three classes with an adjusted Rand index of 1.0, indicating that all collected CoM features were successfully clustered (see Fig. 9).
TABLE II: Explored CoM Features in Sec. IV-B

Object Nr.   1      2      3
λ_x          0.927  0.784  0.346
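A minimal version of this clustering step, using scikit-learn's KernelDensity (which accepts the absolute bandwidth of 0.025 directly); the grid resolution is an assumption:

import numpy as np
from sklearn.neighbors import KernelDensity

def cluster_by_kde(samples, bandwidth=0.025):
    # Cluster 1D CoM features by cutting the estimated density at its
    # local minima, as described in Sec. IV-B.
    x = np.asarray(samples, float)
    grid = np.linspace(0.0, 1.0, 1001)
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth)
    kde.fit(x[:, None])
    density = np.exp(kde.score_samples(grid[:, None]))
    # Interior local minima of the density act as cluster boundaries:
    interior = (density[1:-1] < density[:-2]) & (density[1:-1] < density[2:])
    cuts = grid[1:-1][interior]
    return np.searchsorted(cuts, x)   # label = number of boundaries below x

Samples drawn around each of the three means in Table II would receive three distinct labels, consistent with the adjusted Rand index of 1.0 reported above.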
Fig. 8: The process of exploring the CoM of a regularly shaped rigid object using the binary search approach based on tactile feedback. In each column, the upper subfigures (A-1)–(F-1) show the lifting position at each sample step: the real CoM of the target object is marked by the red region, and the yellow triangle in each figure indicates the current lifting position. The lower subfigures (A-2)–(F-2) show the corresponding sensor signal sequences of fingers A, B, and C recorded during lifting.
Fig. 9: Clustering result of CoM features based on KDE.
V. CONCLUSION AND FUTURE WORK
In this paper, we proposed a tactile-based approach to explore the CoM of rigid objects in an unknown workspace for robotic object discrimination tasks. We first presented a continuous exploration approach with which the robot estimates the target object's geometric information, applicable to quadrilaterally-faced hexahedron objects. We then analyzed the conditions that the applied force and torque must satisfy when the object is lifted at its CoM, and proposed a strategy to explore the CoM of a target rigid object. It is worth mentioning that the applicability of the proposed CoM exploration approach is independent of the number of grasping fingers; it depends only on the sensing points. Furthermore, we formulated the CoM information as a constant feature defined in the OCF, which is insensitive to the external properties of the object and can be applied in object discrimination and recognition tasks.

The scope of this work is restricted to regularly shaped rigid objects. In the future, we plan to generalize our approach to irregularly shaped or soft objects by removing the hardware constraints, i.e., by using a dexterous robotic hand and multi-modal tactile sensors. In our proposed method, at least three sensing points are required to detect the rotation of the lifted object; this number can be further reduced if the occurrence and direction of rotational slip can be detected. In addition, it is possible to accelerate the CoM exploration for a novel object by taking advantage of prior knowledge, e.g., by transferring the mass distribution information of already explored objects to a novel target object.
ACKNOWLEDGMENT
Many thanks to OptoForce Ltd. for providing tactile
sensors for this study.
REFERENCES

[1] R. S. Dahiya, G. Metta, M. Valle, and G. Sandini, "Tactile sensing—from humans to humanoids," IEEE Transactions on Robotics, vol. 26, no. 1, pp. 1–20, 2010.
[2] M. Kaboli, P. Mittendorfer, V. Hugel, and G. Cheng, "Humanoids learn object properties from robust tactile feature descriptors via multi-modal artificial skin," in IEEE International Conference on Humanoid Robots, 2014.
[3] M. Kaboli, R. Walker, and G. Cheng, "In-hand object recognition via texture properties with robotic hands, artificial skin, and novel tactile descriptors," in IEEE International Conference on Humanoid Robots, pp. 2242–2247, 2015.
[4] M. Kaboli, R. Walker, and G. Cheng, "Re-using prior tactile experience by robotic hands to discriminate in-hand objects via texture properties," in IEEE International Conference on Robotics and Automation, pp. 2242–2247, 2016.
[5] M. Kaboli, D. Feng, K. Yao, P. Lanillos, and G. Cheng, "A tactile-based framework for active object learning and discrimination using multimodal robotic skin," IEEE Robotics and Automation Letters, vol. 2, no. 4, pp. 2143–2150, 2017.
[6] M. Kaboli, D. Feng, and G. Cheng, "Active tactile transfer learning for object discrimination in an unstructured environment using multimodal robotic skin," International Journal of Humanoid Robotics, 2017.
[7] C. G. Atkeson, C. H. An, and J. M. Hollerbach, "Rigid body load identification for manipulators," in IEEE Conference on Decision and Control, pp. 996–1002, 1985.
[8] C. H. An, C. G. Atkeson, and J. M. Hollerbach, "Estimation of inertial parameters of rigid body links of manipulators," in IEEE Conference on Decision and Control, vol. 24, pp. 990–995, 1985.
[9] Y. Yu, K. Fukuda, and S. Tsujio, "Estimation of mass and center of mass of graspless and shape-unknown object," in IEEE International Conference on Robotics and Automation, vol. 4, pp. 2893–2898, 1999.
[10] Y. Yu, T. Kiyokawa, and S. Tsujio, "Estimation of mass and center of mass of unknown and graspless cylinder-like object," International Journal of Information Acquisition, vol. 1, no. 1, pp. 47–55, 2004.
[11] Y. Yu, T. Arima, and S. Tsujio, "Estimation of object inertia parameters on robot pushing operation," in IEEE International Conference on Robotics and Automation, pp. 1657–1662, 2005.
[12] M. Kaboli, K. Yao, and G. Cheng, "Tactile-based manipulation of deformable objects with dynamic center of mass," in IEEE International Conference on Humanoid Robots, pp. 790–799, 2016.