Sensor Observability Analysis for Maximizing Task-Space
Observability of Articulated Robots
Christopher Yee Wong, Member, IEEE and Wael Suleiman, Senior Member, IEEE
Abstract—We propose a novel performance metric for articulated robots with distributed directional sensors called the sensor observability analysis (SOA). These robot-mounted distributed directional sensors (e.g., joint torque sensors) change their individual sensing directions as the joints move. SOA transforms individual sensor axes in joint space to provide the cumulative sensing quality of these sensors to observe each task-space axis, akin to forward kinematics for sensors. For example, certain joint configurations may align joint torque sensors in such a way that they are unable to observe interaction forces in one or more task-space axes. The resultant sensor observability performance metrics can then be used in optimization and in null-space control to avoid sensor observability singular configurations or to maximize sensor observability in particular directions. We use the specific case of force sensing in serial robot manipulators to showcase the analysis. Parallels are drawn between sensor observability and the traditional kinematic manipulability; SOA is shown to be more generalizable in terms of analysing non-joint-mounted sensors and can potentially be applied to sensor types other than force sensing. Simulations and experiments using a custom 3-DOF robot and the Baxter robot demonstrate the utility and importance of sensor observability in physical interactions.
Index Terms—Force and tactile sensing, kinematics, physical
interaction, robot safety
I. INTRODUCTION
SENSORS are invaluable tools for robots, as they are the means by which robots observe themselves (introspection) and the world around them (extrospection). Unfortunately, sensors
have limitations beyond their technical specifications, partic-
ularly directional sensors. Directional sensors are those with
explicit axes along which measurements are performed, for
example joint torque sensors, strain gauges, accelerometers,
gyroscopes, distance sensors, cameras, etc. While optimal
sensor placement is an active area of research for mobile
robots and sensor networks [1], [2], the same cannot be said
for articulated and reconfigurable robots [3]. It is often a
common assumption for articulated and reconfigurable robots
that, given the presence of sensors, all task-space quantities
are fully observable at all times. Consider a serial robotic
manipulator with joint torque sensors at each joint; one would
normally assume that the end effector (EE) forces could be
reconstructed from the joint torque sensors [4], [5] for use
in compliant control either directly [6] or through machine
learning [7]. In fact, this assumption is not always true. It
is possible that particular robot configurations lead to cases
This work was supported in part by the Fonds de Recherche du Québec - Nature et technologies and the Natural Sciences and Engineering Research Council of Canada (NSERC). (Corresponding author: Christopher Yee Wong.)
C. Y. Wong and W. Suleiman are with the Université de Sherbrooke, Sherbrooke, Canada (e-mail: christopher.wong2, wael.suleiman (at) usherbrooke.ca).
Fig. 1. Comparison of a) joint axes $\hat{z}^k$ and kinematic manipulability $w_k$ and b) positioning of joint-mounted and link-mounted sensors $\hat{s}^i$ and the sensor observability $o$, and their respective ellipsoids for the same robot configuration.
where the joint torque sensors are unable to observe certain
interaction forces at the end effector [8]. A similar assumption
may be present in cases where a robot is equipped with an
array of distributed distance [9], proximity [10], or contact
[11], [12] sensors on the arms. If these sensors are sparsely
placed, then certain robot configurations may lead to poten-
tially unobserved directions.
If a robot unknowingly enters a configuration where obser-
vations along certain external task-space axes can no longer
be made, the end result could be disastrous for the robot, the
task, or the environment. These situations must be avoided in
especially critical applications such as physical human-robot
interaction and minimally invasive surgery. Although the use
of multi-axis sensors, e.g. 6-axis force-torque sensors and 3-
axis accelerometers, automatically observes all possible task-
space axes and renders the issue of task-space observability
trivial, these advanced sensors may not always be available.
For example, 6-axis force-torque sensors are often unavailable on lower-cost systems due to their price, or their size makes them infeasible to mount, and alternative methods must be used [13]. Additionally, interactions with the robot body may
not be observed if the sensor is mounted at the end effector.
Thus, a tool is required to analyse a particular robot con-
figuration and provide a measure or index on the task-space
observability of the configuration. Parallels can be drawn with
the various kinematic performance measures used with serial
robots [14], particularly the well-known concept of kinematic
robot manipulability [15], a quality measure of a robot’s
mobility and closeness to kinematic singularities. In this paper,
we extend this concept to task-space observability of robots
based on the robot joint configuration and sensor placement,
which may differ from the joint axes. As this issue of task-
space sensor observability is highly dependent on the sensor
configuration and kinematic structure of the robot, not all
robots are equally affected. Such an analysis is especially
important for robots that do not have enough sensors to cover
all task space dimensions reliably and must prioritize one over
others. According to the semantics defined in [14], sensor
observability analysis is classified as a local kinematic and
intrinsic performance index.
A. Background
To note, vectors and matrices are represented by bold-faced
lower case and upper case letters, respectively, whereas scalar
values are not.
The traditional analytical Jacobian matrix $J(q) \in \mathbb{R}^{n_t \times n_q}$ is defined as the matrix of first-order partial derivatives relating the $n_q$ joint-space velocities $\dot{q}$ to the $n_t$ task-space velocities $\dot{x}$ [16]:

$$\dot{x} = J(q)\,\dot{q}, \qquad J(q) = \begin{bmatrix} \frac{\partial x_1}{\partial q_1} & \cdots & \frac{\partial x_1}{\partial q_{n_q}} \\ \vdots & \ddots & \vdots \\ \frac{\partial x_{n_t}}{\partial q_1} & \cdots & \frac{\partial x_{n_t}}{\partial q_{n_q}} \end{bmatrix} \qquad (1)$$
In typical cases, the task space is defined as the end effector position ($n_t = 3$) or pose ($n_t = 6$). For readability, we will continue the manuscript without explicitly writing the Jacobian's dependency on the vector of joints $q$, in other words $J(q) \rightarrow J$. The Jacobian can be used as a tool to measure different properties of the robot, notably to verify whether a specific robot configuration is at a kinematic singularity.
The kinematic manipulability index $w_k$, commonly referred to as simply the manipulability, is a scalar quality measure of the robot's ability to move in the task space based on the current joint configuration. Manipulability can also be used as a scalar measure of a robot's closeness to a kinematic singularity [15]. It is an important tool that allows postures to be evaluated based on their mobility:

$$w_k = \sqrt{\det\left(JJ^T\right)} \qquad (2)$$
The manipulability index can be exploited in different ways, typically with optimization algorithms to ensure that the robot motions stay away from any singularities [17]. While the index $w_k$ is a scalar measure, the manipulability ellipsoid, as described in [15], [18] and shown in Fig. 1(a), is a volumetric representation of mobility for a specific robot configuration, whose principal axes are proportional to the mobility in each direction. Mobility indicates the ease with which the end effector can move in a certain direction in task space proportional to joint motion. As such, the manipulability ellipsoid itself can be used as a target either as the main task or as a redundancy resolution sub-task [19], [20]. Controlling the manipulability ellipsoid ensures that a certain level of manipulability is present, especially if a particular shape is desired.
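For concreteness, the following MATLAB sketch computes the manipulability index (2) and the principal axes of the manipulability ellipsoid via singular value decomposition. It is a minimal sketch only: the two-link planar arm, its link lengths, and its joint angles are assumed example values for illustration, not parameters from this paper.

% Minimal sketch: manipulability index (2) and ellipsoid axes for an
% assumed 2-link planar arm (link lengths and angles are examples).
l1 = 1.0; l2 = 0.8;                      % assumed link lengths [m]
q  = [0.4; 1.1];                         % assumed joint angles [rad]
S1 = sin(q(1)); C1 = cos(q(1));
S12 = sin(q(1)+q(2)); C12 = cos(q(1)+q(2));
J = [-l1*S1 - l2*S12, -l2*S12;           % planar position Jacobian
      l1*C1 + l2*C12,  l2*C12];
wk = sqrt(det(J*J'));                    % manipulability index, Eq. (2)
[U, Sig] = svd(J);                       % U: ellipsoid principal directions
% The singular values on the diagonal of Sig are the semi-axis lengths
% of the manipulability ellipsoid; wk equals their product.
fprintf('w_k = %.4f\n', wk);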
The concept of manipulability has greatly evolved to en-
compass different calculation methods and applications since
its early introduction by Yoshikawa [15]. For example, some
authors modified the concept to instead calculate the manipu-
lability of the centre of mass of floating base robots [21], [22].
Manipulability has also been extended to multi-robot closed-
chain systems [23] and continuum robots [24].
B. Manuscript Organization and Contributions
In the same vein as the kinematic manipulability index and
ellipsoid, we introduce the novel concept of sensor observ-
ability analysis and the resulting sensor observability index
and sensor observability ellipsoid. The proposed concepts
qualitatively evaluate, based on the current joint configuration,
the cumulative ability of distributed directional sensors on an
articulated robot to measure external quantities in the task
space, akin to forward kinematics but for sensors. While the
analysis of distributed sensing is thoroughly studied in sensor
networks and swarm robotics, the analysis introduced here is
from the viewpoint of a single multi-jointed and articulated
robot. As an example, the type of directional sensors to be
analysed may include force-sensing elements, accelerometers,
or distance sensors. Sensor observability analysis would then
provide a performance metric to determine if the onboard
sensors are able to observe interaction forces, accelerations, or
object distance in all directions, or if the current configuration
is potentially blind to forces, accelerations, or objects in
a certain direction. The derivations in this paper use force
sensing as the case study for the observability of end effector
forces [4], [5] as it is the most intuitive case. Other types
of axis-based sensors, like those mentioned above, will be
explicitly developed in future work. The proposed formulation
also allows the analysis of non-joint-mounted sensors, for
example strain gauges, accelerometers, or distance sensors
placed on a link, as shown in Fig. 1(b). We also perform
simulations and experiments to demonstrate the differences
between sensor observability analysis and traditional kinematic
analysis and present certain cases where sensor observability
is superior.
This paper first provides the analytical framework in Sec.
II for defining task-space sensor observability based on the
cumulative transformations of each individual sensor. Dis-
cussions surrounding the peculiarities of sensor observabil-
ity analysis, including analogies to the traditional Jacobian,
thresholds for sensor observability, and sensor observability
applied to non-traditional robot architectures, are explored in
Sec. III. Next in Sec. IV, we discuss how sensor observability
can be used as a secondary task in the nullspace of the
kinematics task or formulated as an optimization problem.
Sec. V showcases practical implications of sensor observability
analysis during physical interaction with a robot. The utility
of sensor observability is showcased through simulations and
experiments on both a custom 3 degree of freedom (DOF)
robot and the Baxter robot. Finally, closing comments and
future research directions are provided in Sec. VI.
A preliminary version of this paper was presented at a con-
ference [25]. This article is a continuation where the concept
of sensor observability is expanded to provide a fuller picture
as the first major work of this framework. Secs. I to III-B
were previously presented in [25], but more insights have been
added throughout these sections to provide a deeper analysis of
the sensor observability framework. However, Sec. III-C and onwards are all novel additions that further establish the utility of sensor observability both theoretically and practically.

Algorithm 1 Summary of Sensor Observability (SO)
NB: sensor $i \in 1 \ldots n_s$ and task-space axis $j \in 1 \ldots n_t$
1: $\hat{s}'^{,i}$ for $i \in 1 \ldots n_s$ ▷ Define local sensor axes (Sec. II-A)
2: $\hat{s}^i \leftarrow R\,\hat{s}'^{,i}$ ▷ Rotate to match task frame
3: $\tilde{s}^i \leftarrow T_\square(\hat{s}^i, r^i)$ ▷ Sensor-type transformation (Sec. II-B)
4: $\tilde{s}^i_j = f(\tilde{s}^i_j, s^{i,*}_j)$ ▷ Noise thresholding (Sec. III-C)
5: $S = [\tilde{s}^1 \cdots \tilde{s}^{n_s}]$ ▷ SO matrix (Sec. II-C)
6: $s \leftarrow \Gamma_\square(S)$ ▷ SO function and system SO (Sec. II-C)
7: $o \leftarrow \prod_{j=1}^{n_t} s_j$ ▷ SO index (Sec. II-C)
II. SENSOR OBSERVABILITY ANALYSIS
Prior to introducing the concept of sensor observability, we
would like to note that the analysis presented here assumes
that joints do not have mechanical limits, and thus in this work
we ignore special treatment of sensors that may be affected
by joint limits. Furthermore, we assume that sensors are
bidirectional, in the sense that they are capable of measuring
along both positive and negative directions of their sensing axis
(for example, laser-based distance sensors are unidirectional,
whereas accelerometers and joint torque sensors are bidirec-
tional). Unidirectional sensors require more complex sensor
axis analyses and will be addressed in future work.
Particularly in the case of force detection, barring dynamics
and inertial effects, there must be an equal and opposite
reaction force to properly detect forces, e.g. constraint forces
from ground contact. As such, fixed base robots have full
constraint forces in all directions. Conversely, floating base
and mobile robots do not always have the luxury of perfect
constraint forces. Friction cones must be taken into account
and any slippage or lack of adequate friction forces will affect
force detection and control [26]. Thus, to simplify this initial
analysis of sensor observability, we will only consider fixed
base open kinematic chain serial manipulators for the time
being to remove the question of imperfect constraint forces.
Floating base robots and slippage will be examined in future
work. A summary of the method is presented in Algorithm 1.
A. Local Sensor Axis $\hat{s}'^{,i}$ and Rotated Sensor Axis $\hat{s}^i$
First, for each individually measured sensor axis $i \in 1 \ldots n_s$, as seen in Fig. 1, we define a local sensor axis vector $\hat{s}'^{,i} \in \mathbb{R}^{n_t}$, where each element indicates whether a task-space axis is observed or not by taking on a value between $[0, 1]$. Note the difference between $n_s$, the number of sensor axes, and $n_t$, the number of task-space axes. A zero value means that that particular axis is not observed, whereas a value of one means that the axis is directly observed, i.e. the sensor axis is parallel with the task-space axis. Values between 0 and 1 mean that the task-space axis is only partially observed by an off-axis sensor1. For example, a single one-axis joint torque sensor could be seen as an element of SE(3) with $n_t = 6$ and represented by:

$$\hat{s}'_{\tau z} = \begin{bmatrix} \hat{s}'_{p,\tau z} \\ \hat{s}'_{\theta,\tau z} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}^T \qquad (3)$$
where $\hat{s}'_{\tau z}$ is expressed in the local joint frame according to the Denavit-Hartenberg (DH) parameters [27], and the $(\cdot)_p$ and $(\cdot)_\theta$ subscripts denote the translational and rotational components, respectively. Similarly, a single-axis load cell along the x-axis is represented by $\hat{s}'_{fx} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}^T$. Multi-axis sensors, e.g. a 3-axis load cell that can detect forces in the xyz-axes but not torques, would be represented by the set of three individual sensor axis vectors, one in each of the x-, y-, and z-axes, i.e. $\{\hat{s}'_{fx}, \hat{s}'_{fy}, \hat{s}'_{fz}\}$. In the same vein, a 6-axis force-torque sensor would be the set of six individual sensor axes represented by $\{\hat{s}'_{fx}, \hat{s}'_{fy}, \hat{s}'_{fz}, \hat{s}'_{\tau x}, \hat{s}'_{\tau y}, \hat{s}'_{\tau z}\}$. The reason for this separation is that it simplifies the axis normalization process during rotations and transformations.

1The term "partially observed" indicates that the sensor axis is not completely in line with the task-space axis. For example, if a load cell is oriented at an angle $\theta$ from the x-axis in the xy-plane and a force is applied along the x-axis, the sensor will only detect the component of force that is projected along the sensor axis, i.e., $F_{observed} = F_{actual} \cos(\theta)$. Similarly, if a laser distance sensor is used to measure the velocity of an object, then the sensor will only detect the component of velocity that is projected along the sensor axes.
It is important to note that $n_s$ is defined as the number of individually measured sensor axes and not the number of physical sensors. For example, a robot with two physical 3-axis sensors would have $n_s = 6$, where sensor frames $\mathcal{F}_i, i \in \{1, 2, 3\}$ and $\mathcal{F}_i, i \in \{4, 5, 6\}$ are located at their respective physical sensors. Defining $n_s$ in this manner simplifies the derivations that follow.
The prime symbol in $\hat{s}'^{,i}$ denotes that it is defined in the local i-th sensor frame $\mathcal{F}_i$. A rotated sensor axis vector $\hat{s}^i$ without the prime symbol represents the set of axis vectors rotated to the task frame $\mathcal{F}_{EE}$. For demonstration purposes, we set $\mathcal{F}_{EE}$ at the end effector, but aligned with the world frame. All local sensor axis vectors are rotated to match the orientation of the task frame $\mathcal{F}_{EE}$, i.e. $\hat{s}^i = R\,\hat{s}'^{,i}$.
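As a small illustration of Sec. II-A, the MATLAB sketch below defines the local axis vectors of a joint torque sensor and a load cell and rotates one of them into the task frame. The sensor frame orientation used for $R$ is an assumed example value, not a quantity from this paper.

% Minimal sketch: local sensor axes (Sec. II-A) rotated into F_EE.
s_tz = [0 0 0 0 0 1]';         % single-axis joint torque sensor, Eq. (3)
s_fx = [1 0 0 0 0 0]';         % single-axis load cell along local x
a = pi/4;                      % assumed sensor frame orientation (about z)
R = [cos(a) -sin(a) 0;
     sin(a)  cos(a) 0;
     0       0      1];
R6 = blkdiag(R, R);            % rotate translational and rotational parts
s_rot = R6 * s_fx;             % rotated axis: s^i = R * s'^i
disp(s_rot')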
B. Sensor Transformation $T_\square(\hat{s}^i, r^i)$

We define the sensor transformation function $T_\square(\hat{s}^i, r^i)$ as a sensor-type- and physics-dependent transformation that maps individual sensors from their local sensor axes to the task space. For example, when discussing wrenches and force sensing, torque-sensing axes may also observe linear forces at $\mathcal{F}_{EE}$ if there exists a moment arm, analogous to $f = \tau \times r$. Thus, a single generalized force-torque sensor $\hat{s}^i$ would undergo the following force sensor transformation $T_f(\hat{s}^i, r^i)$, designated by the subscript $f$, in the task frame:

$$\tilde{s}^i = \begin{bmatrix} \tilde{s}^i_p \\ \tilde{s}^i_\theta \end{bmatrix} = T_f(\hat{s}^i, r^i) = \begin{cases} \begin{bmatrix} |\hat{s}^i_p| \\ |\hat{s}^i_\theta| \end{bmatrix}, & \text{if } \hat{s}^i_\theta \times r^i = 0 \\[2ex] \begin{bmatrix} |\hat{s}^i_p| + \frac{|\hat{s}^i_\theta \times r^i|}{\|\hat{s}^i_\theta \times r^i\|} \\ |\hat{s}^i_\theta| \end{bmatrix}, & \text{otherwise} \end{cases} \qquad (4)$$
where $r^i$ is the position vector from the i-th sensor axis to the task frame $\mathcal{F}_{EE}$, $|\cdot|$ is the element-wise absolute function2, and $\|\cdot\|$ is the Euclidean norm to normalize the cross product, as the directional analysis of sensor axes should not be influenced by the magnitude of the moment arm. The piece-wise defined function is used in the case where $r^i$ and $\hat{s}^i_\theta$ are collinear such that $\|\hat{s}^i_\theta \times r^i\| = 0$, which would otherwise result in an undefined fraction. Note that the hat operator $\hat{\cdot}$ designates a locally-defined sensor axis, whereas the tilde operator $\tilde{\cdot}$ designates the transformed sensor axis.

2The derivative of the absolute function is not defined at 0, which affects the derivative terms. Thus, practically, one should use an alternate representation of the absolute function that is smooth around 0, e.g. $|x| \approx x \tanh(cx)$ where $c$ is a positive constant.
The method to interpret the transformed quantity $\tilde{s}^i$ is as follows: each element of $\tilde{s}^i$ represents a task-space axis that is observed by the various locally-defined terms of $\hat{s}^i$ that it contains. For example, in (4), given that both the $\hat{s}^i_p$ and $\hat{s}^i_\theta$ terms appear in the translational force term $\tilde{s}^i_p$, any translational forces at the EE would be observed by both the linear and rotational axes of the i-th sensor (if they exist).
The use of the absolute function is two-fold: a) we assume that the sensors are bidirectional and b) it ensures that sensor axes do not subtract from each other. It is important to note that the exact transformation $T_\square(\cdot)$ is dependent on the sensor type and the laws of physics that govern it. Certain transformation functions, depending on the sensor type, may simply be the identity function. Other types of systems and transformations will be explored in future work.
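To make the transformation concrete, the following MATLAB sketch implements the force sensor transformation $T_f$ of (4) for a single sensor axis. The moment arm value is an assumed example; the function itself follows (4) directly.

% Minimal sketch: force sensor transformation T_f of Eq. (4).
s_hat = [0 0 0 0 0 1]';            % joint torque sensor axis, Eq. (3)
r     = [0.3; 0.1; 0];             % assumed moment arm to F_EE [m]
s_til = Tf(s_hat, r);
disp(s_til')

function s_til = Tf(s_hat, r)
    sp  = s_hat(1:3); sth = s_hat(4:6);
    cp  = cross(sth, r);           % torque axis observing force via moment arm
    if norm(cp) == 0               % collinear case of Eq. (4)
        s_til = [abs(sp); abs(sth)];
    else
        s_til = [abs(sp) + abs(cp)/norm(cp); abs(sth)];
    end
end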
C. Sensor Observability Matrix $S$, System Vector $s$, Function $\Gamma_\square$, Index $o$, and Ellipsoid
We define the sensor observability matrix $S(q) \in \mathbb{R}^{n_t \times n_s}$ as the matrix whose columns are the transformed sensor axis vectors $\tilde{s}^i$. For readability, we will continue the manuscript without explicitly writing the dependency on $q$, in other words $S(q) \rightarrow S$:

$$S = \begin{bmatrix} \tilde{s}^1 & \cdots & \tilde{s}^{n_s} \end{bmatrix} \qquad (5)$$
Next, we define the overall system sensor observability vector $s \in \mathbb{R}^{n_t \times 1}$ as the cumulative sensing capabilities of all the individual sensors of the system in the task frame $\mathcal{F}_{EE}$ with $n_t$ task axes. The sensor observability function $\Gamma_\square(S)$ calculates $s$ by synthesizing all $n_s$ transformed sensor axes $\tilde{s}^i$ according to a desired metric for analysis. Here, we give example definitions of $\Gamma_\square(S)$. Recall that $s = \begin{bmatrix} s_1 & \cdots & s_{n_t} \end{bmatrix}^T$, where $j \in 1 \ldots n_t$.
1) Row-wise sum function:

$$s = \Gamma_{sum}(S) = \sum_{i=1}^{n_s} \tilde{s}^i \qquad (6)$$

2) Row-wise p-norm function:

$$s = \Gamma_{\|\cdot\|_p}(S), \quad \text{where } s_j = \sqrt[p]{\sum_{i=1}^{n_s} \left(\tilde{s}^i_j\right)^p} \quad \forall j \in 1 \ldots n_t \qquad (7)$$

3) Row-wise max function:

$$s = \Gamma_{max}(S), \quad \text{where } s_j = \max_{i=1 \ldots n_s} \tilde{s}^i_j \quad \forall j \in 1 \ldots n_t \qquad (8)$$
where the subscript $j$ in $s_j$ and $\tilde{s}_j$ indicates the j-th task-space axis of $s$ and $\tilde{s}$, respectively3, and also corresponds to the j-th row of $S$. The sum function $\Gamma_{sum}(\cdot)$, as the name implies, performs a row-wise summation across all transformed sensor axes in $S$ and measures the cumulative task-space sensing capabilities across all sensors. The summation can potentially provide a measure of redundancy if multiple sensors measure the same task-space axis. One potential issue with this method is that, for the same value, the sum function does not differentiate between an axis that is directly observed by one or a few sensors and one that is only minimally observed by many off-axis sensors. The lack of this differentiation may result in unintended low-quality readings from non-closely-aligned sensors. The p-norm function $\Gamma_{\|\cdot\|_p}(\cdot)$ partially alleviates this issue by reducing the impact of smaller values.

3Recall: the superscript $i$ is for the i-th sensor axis, which is different from the subscript $j$ for the j-th task-space axis (and similarly the subscript $k$ for the k-th joint axis, which will be defined later).
Conversely, the element-wise max function $\Gamma_{max}(\cdot)$ determines the maximum alignment between the individual sensor axes and each task-space axis. It provides a quality measure of how directly a task-space axis is observed and, in a sense, its trustworthiness. The max function $\Gamma_{max}(\cdot)$ is always bounded between $[0, 1]$, where a value of $s_j = 1$ indicates that there is at least one sensor that is directly and fully observing the j-th task-space axis, while $s_j < 1$ indicates that it is only measured indirectly by all sensors. In all cases, $s_j \approx 0$ would indicate that the j-th axis is in danger of no longer being observed.

Other sensor observability functions may be used as well, depending on the preferred analysis. For example, a sum with minimum thresholding could potentially negate the masking effect if many low-quality observations by minimally observed sensor axes are present. Note that certain formulations of $\Gamma_\square(\cdot)$ may also use the sensor positions (in addition to the sensor orientations in $S$) in case they are relevant, e.g. for modelling sensor-to-sensor interactions. An example is discussed in Sec. III-B.
Next, we define the sensor observability index $o$:

$$o = \prod_{j=1}^{n_t} s_j \qquad (9)$$

Analogous to the kinematic manipulability index $w_k$ in (2), the sensor observability index $o$ is a scalar quality measure of task-space observability. If any $s_j \rightarrow 0$, then $o \rightarrow 0$, and the system is at risk of being unable to observe one or more task-space axes. The case where $o = 0$ is called a sensor observability singularity, where the robot is in a sensor observability singular configuration and the system has lost the ability to observe one or more task-space axes. This situation should be avoided for risk of potentially causing failure resulting from the robot being blind in certain task-space axes. As such, $o$ can be used as an optimization variable during motion planning to avoid low-quality joint configurations (examples are shown in Sec. IV). While the numerical interpretation of the sensor observability index is system-dependent, it can easily be used as a relative gauge of system sensor observability performance, as discussed in [14] for $w_k$.
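The pipeline from $S$ to $s$ to $o$ reduces to a few lines of code. The MATLAB sketch below evaluates the example functions (6)-(8) and the index (9); the matrix $S$ with $n_t = 2$ and $n_s = 3$ is an assumed example value, not data from this paper.

% Minimal sketch: observability functions (6)-(8) and index (9).
S = [0.9 0.1 0.0;
     0.4 1.0 0.2];                 % assumed example, rows = task axes
s_sum  = sum(S, 2);                % row-wise sum, Eq. (6)
p = 2;
s_norm = sum(S.^p, 2).^(1/p);      % row-wise p-norm, Eq. (7)
s_max  = max(S, [], 2);            % row-wise max, Eq. (8)
o_sum  = prod(s_sum);              % index o, Eq. (9), with Gamma_sum
o_max  = prod(s_max);              % index o, Eq. (9), with Gamma_max
fprintf('o_sum = %.3f, o_max = %.3f\n', o_sum, o_max);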
Similar to the manipulability ellipsoid defined previously in Sec. I-A, we define the sensor observability ellipsoid in $\mathbb{R}^{n_t}$, whose principal axes are proportional to the magnitude of the task-space observability. Fig. 2 showcases various joint configurations for the Baxter robot and their resulting sensor observability ellipsoids. For visualization purposes, sensor observability is split into the force $s_p$ (red dashed line ellipsoid) and torque $s_\theta$ (blue solid line ellipsoid) components. In the arbitrary configuration in Fig. 2(b), all axes are observable, as can be seen by the 3D shape of the force and torque ellipsoids. In the sensor observability singular configurations shown in Figs. 2(d) and 2(e), an axis of the sensor observability ellipsoid collapses to zero. This indicates that the corresponding axis, torques about the x-axis in Fig. 2(d) and forces along the x-axis in Fig. 2(e), is not observable by the joint torque sensors.
D. Effect of Joint Configuration on Sensor Observability
We use the fixed-base dual-arm Baxter robot from Rethink Robotics to demonstrate the importance of sensor observability analysis. Each Baxter arm contains 7 degrees of freedom, whose kinematic structure is shown in Fig. 2(a) and described in detail in [28]. Each joint contains position encoders and the joints are capable of torque estimation. Given the structure of Baxter, there exist sensor observability singular configurations, as shown in Fig. 2.

To demonstrate the evolution of the various indices, we simulate the kinematic structure of a single Baxter arm in MATLAB and sweep through the multiple configurations shown in Fig. 3(a). The robot begins in an arbitrary configuration at $t = 0$ s. At $t = 4$ s, the robot moves to the configuration shown in Fig. 2(e), which incurs simultaneous sensor observability and kinematic manipulability singularities ($o, w_k = 0$). At $t = 8$ s, joints $q_2$ to $q_6$ are set to zero. The robot finally ends in another arbitrary configuration at $t = 12$ s. The joint angles are plotted in Fig. 3(b). Joint $q_7$ is not shown in the plot as it is held at a constant $q_7 = 0$ and has no effect on the results. Fig. 3(c) plots the evolution of the kinematic manipulability index $w_k$ and the sensor observability index using both the sum $o_{sum}$ (6) and max $o_{max}$ (8) functions through the different robot configurations. All indices are normalized to their respective maxima seen throughout the motion, though $w_k$ is further scaled to emphasize its evolution, particularly at $t = 4$ s.

Fig. 2. a) Representation of a single 7-DOF Baxter robot arm with only traditional joint torque sensors in b) an arbitrary configuration, c) a kinematic singularity in $\theta_z$, and d)-e) sensor observability singularities in d) $\tau_x$ and e) $f_x$. Force and torque observability ellipsoids based on the sum function $\Gamma_{sum}(\cdot)$ are shown as red dashed and blue solid ellipsoids, respectively. Note that the ellipsoids in c) are very thin, but not completely flat, i.e. $o \neq 0$.
As expected, all indices are non-zero in the arbitrary configurations at $t = 0$ s and $t = 12$ s. As the robot
moves towards the observability and kinematic singularity at $t = 4$ s, all indices approach zero. This singular configuration eliminates the observability of force ($o_{sum}, o_{max} \rightarrow 0$) and translational motion ($w_k \rightarrow 0$) in the x-axis. Comparing the sensor observability indices $o_{sum}$ and $o_{max}$, both calculation methods follow somewhat similar trends. While $o_{max}$ has a maximum value of 1, $o_{sum}$ is theoretically unbounded but is normalized for the plot according to the maximum of 688.88 observed in this simulated motion.
To practically illustrate the importance of the sensor observability index, we observe changes in the ability of the Baxter robot to use its joint sensors to estimate end effector forces in both normal and observability singular configurations in a physical interaction experiment shown in Fig. 4. End effector force estimation from the joint sensors is performed using the packaged Baxter API from the manufacturer, and a force sensor is attached to the end effector of the robot as shown in Fig. 4(a) to provide a ground truth for interaction forces. Once the robot is in position, the end effector is pushed first along the x-, then the y-, and finally the x-axis again to observe whether the interaction forces are detected or not.

Fig. 3. a) Simulation of a single Baxter robot arm starting in an arbitrary position, sweeping through an observability singularity at $t = 4$ s (configuration shown in Fig. 2(e)), setting joints $q_2$ to $q_6$ to 0 at $t = 8$ s, and ending in an arbitrary position at $t = 12$ s. b) Joint positions of the maneuver. Joint $q_7$ is not shown as it is held at a constant $q_7 = 0$. c) Plot of the evolution of the kinematic manipulability $w_k$ and the observability index using the sum $o_{sum}$ and max $o_{max}$ functions. All indices are normalized to 1, but $w_k$ is further scaled with an exponential to emphasize the changes at $t = 4$ s.
In the first scenario, shown in Fig. 4(a), the robot is in an arbitrary non-zero observability configuration. The associated plot shows the robot's ability to resolve the end effector
forces using the joint torque sensors. Conversely, in the second scenario, shown in Fig. 4(b), the robot is in the sensor observability singular configuration shown in Fig. 2(e). In this configuration, the overall system sensor observability has $s_x = 0$ but $s_y \neq 0$. In the force plot in Fig. 4(b), the robot is unable to observe interaction forces in the x-axis at $t \approx 3$ s and $t \approx 17$ s, despite the ground truth force sensor showing interaction forces. Forces in the y-axis are observed without issue. Off-axis forces are observed due to imperfect interactions and the robot shifting during interaction.

Fig. 4. Sensor observability experiments using the Baxter robot comparing end effector forces estimated by the joint torque sensors and measured using an external force sensor attached to the end effector in a) an arbitrary non-zero observability configuration and b) the observability singular configuration shown in Fig. 2(e). An external force is applied first in x, then y, and finally x again. Forces are fully observable in the arbitrary position in a), but forces in x are not observable in the observability singular configuration in b).
III. DISCUSSIONS ON SENSOR OBSERVABILITY ANALYSIS
A. Special Case: Similarities to Kinematic Analysis
In the special case where the sensor axes are collinear with
the joint axes, parallels can be drawn between the sensor
observability and kinematic analyses. Let us examine a serial
manipulator with only revolute joints and single-axis joint torque sensors located at each joint and aligned with the joint axes. This is the joint-sensor configuration of a typical serial manipulator robot. In this specific case, each local sensor axis vector $\hat{s}'^{,i} = \hat{s}'_{\tau z}\ \forall i = 1 \ldots n_s$ as in (3). Thus, using the force sensor transformation $T_f(\hat{s}^i, r^i)$ in (4) and the sum-based observability function $\Gamma_{sum}(S)$ in (6), the final sensor observability $s$ has the form:

$$s = \Gamma_{sum}(S) = \sum_{i=1}^{n_s} \tilde{s}^i = \sum_{i=1}^{n_s} \begin{bmatrix} \frac{|\hat{s}^i_\theta \times r^i|}{\|\hat{s}^i_\theta \times r^i\|} \\ |\hat{s}^i_\theta| \end{bmatrix} \qquad (10)$$
The term $\hat{s}^i_p$ is absent from (10) as it is zero for single-axis joint torque sensors $\hat{s}'_{\tau z}$. The summation in (10) can be rewritten in matrix form using the sensor observability matrix $S$ multiplied by an $n_s \times 1$ vector of ones $\mathbf{1}$:

$$s = \begin{bmatrix} \frac{|\hat{s}^1_\theta \times r^1|}{\|\hat{s}^1_\theta \times r^1\|} & \cdots & \frac{|\hat{s}^{n_s}_\theta \times r^{n_s}|}{\|\hat{s}^{n_s}_\theta \times r^{n_s}\|} \\ |\hat{s}^1_\theta| & \cdots & |\hat{s}^{n_s}_\theta| \end{bmatrix} \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix}_{n_s \times 1} = S\,\mathbf{1}_{n_s \times 1} \qquad (11)$$
For the kinematic analysis, we begin with the geometric velocity analysis [16]:

$$\begin{bmatrix} v \\ \omega \end{bmatrix} = \sum_{k=1}^{n_q} \begin{bmatrix} \dot{q}_k\, \hat{z}^k \times r^k \\ \dot{q}_k\, \hat{z}^k \end{bmatrix} \qquad (12)$$

where $v$ and $\omega$ are the translational and angular velocities of the end effector, $\dot{q}_k$ is the angular velocity of the k-th joint, and $\hat{z}^k$ is the k-th joint axis, where $k \in 1 \ldots n_q$ and $n_q$ is the number of joints. Similar to (10)-(11), (12) may be rewritten in matrix multiplication form using the kinematic Jacobian $J$ and the vector of joint angular velocities $\dot{q}$:

$$\begin{bmatrix} v \\ \omega \end{bmatrix} = \begin{bmatrix} \hat{z}^1 \times r^1 & \cdots & \hat{z}^{n_q} \times r^{n_q} \\ \hat{z}^1 & \cdots & \hat{z}^{n_q} \end{bmatrix} \begin{bmatrix} \dot{q}_1 \\ \vdots \\ \dot{q}_{n_q} \end{bmatrix} = J\dot{q} \qquad (13)$$
Fig. 5. Configuration where a non-zero null space vector exists for $J^T$, but which is not a sensor observability singularity, as $o \neq 0$. This configuration is similar to Fig. 2(e), but $q_6$ tilts the end effector slightly downwards.
Given that the joint torque sensor axes $\hat{s}^i_\theta$ and the joint axes $\hat{z}^k$ are unit vectors and collinear, we in fact have $\hat{s}^i_\theta = \hat{z}^k$, $r^i = r^k$, and $n_q = n_s$. Thus, equations (11) and (13) have very similar forms despite differences in normalization, where $S \approx J$ for a standard serial manipulator with joint torque sensors on each rotational joint. To understand this relationship, joint axes could potentially be thought of as velocity measurement sensors. A similar analysis holds for prismatic joints paired with single-axis load cells.
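As a numerical check of this correspondence, the short MATLAB sketch below builds the translational part of one Jacobian column from (13) and the matching column of $S$ from (11) for a single joint-mounted torque sensor. The joint axis and moment arm are assumed example values.

% Minimal sketch: one column of J (13) vs. one column of S (11) for a
% torque sensor collinear with its revolute joint (assumed values).
zk = [0; 0; 1];                    % joint/sensor axis
r  = [0.4; 0.2; 0];                % moment arm to F_EE [m]
cp = cross(zk, r);
Jcol = cp;                         % translational part of Jacobian column
Scol = abs(cp) / norm(cp);         % matching column of S: normalized, absolute
disp([Jcol, Scol])                 % same directional information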
Despite the similarity in form of (11) and (13), it is important to note that a sensor observability singularity does not necessarily imply a kinematic singularity and vice versa. For example, the configuration shown in Fig. 2(c) is a kinematically singular configuration where axes 1 and 7 are collinear and $w_k = 0$, but it is not an observability singularity ($o \neq 0$), as the red force ellipsoid is thin but not flat. Conversely, the joint configurations shown in Figs. 2(d) and 2(e) are both observability and kinematic singularities, demonstrating potential overlaps between the two indices for this particular robot and sensor configuration.
In this special case for serial manipulator robots, it is sometimes possible to extract similar information using the end effector force and joint torque relationship $\tau = J^T f$ and examining the null space of $J^T$. The existence of non-zero null space vectors of $J^T$ indicates the possibility of having zero joint torques despite non-zero end effector forces, but this can be caused by two distinct cases. One case is a sensing deficiency in the same manner as sensor observability. The other case occurs when end-effector forces and torques balance each other out and result in zero readings at the joints. For example, in the configuration shown in Fig. 5, a null space analysis of $J^T$ indicates that a force applied at the end effector in the x-axis can be nullified with a balancing torque in the y-axis, which results in zero joint torques. Despite the existence of a non-zero null space vector for $J^T$ in this configuration, this pose is not an observability singularity, as $o_{sum} = 57.63$ (so $o \neq 0$). Thus, even in the special case where the sensor axes are collinear with the joint axes, null space analysis of the Jacobian cannot replace sensor observability analysis. The reason is the use of absolute values in (4) and (10) during sensor observability analysis, which negates the possibility of the sensor axes cancelling each other out.
B. Advantages of Sensor Observability Analysis
While the discussion above might lead one to think that sensor observability can be derived using traditional kinematic analysis, the parallels drawn in Sec. III-A are applicable only in the special case where the sensor axes are collinear with the joint axes (e.g. standard serial manipulators). The advantage of sensor observability analysis is that it is flexible and applicable to robot architectures using different and non-traditional sensor mounting styles. Once non-traditional sensor mounting styles are used and there is no longer a one-to-one mapping between joint axes and sensor axes, the parallels with the traditional Jacobian formulation no longer apply. For example, a robot using a load cell in the middle of a link, similar to the one shown in Fig. 6, could be a lower-cost alternative to using a joint-mounted torque sensor if interaction torques are not of importance. In this case, since non-joint-mounted sensors are present, the formulations of $S$ and $o$ will differ significantly from those of $J$ and $w_k$. An example highlighting this case is discussed in Sec. III-D.
Moreover, certain sensors may need to be interpreted differently than simply by axis direction, which is why the sensor transformation $T_\square(\hat{s}^i, r^i)$ and sensor observability $\Gamma_\square(S)$ functions are implemented. For example, an articulated robot may be covered with an array of distributed laser distance sensors [9], ultrasonic sensors, or magnetic directional proximity sensors [29]. Depending on the joint configuration, these sensors may interact with each other (e.g. ultrasonic interference or crossing magnetic fields), and these interactions may affect sensing quality. Their interactions could thus be modelled and captured by the $T_\square(\hat{s}^i, r^i)$ and $\Gamma_\square(S)$ functions and used to optimize the joint configuration to minimize interference. These complexities will be explored in future work.
Many limitations associated with manipulability analysis using the Jacobian matrix [14] are not present in sensor observability analysis. For example, while the kinematic manipulability index $w_k$ suffers from unboundedness and both scale and dimensional dependencies, sensor observability using the row-wise max function (8) does not, since its values are bounded between 0 and 1. The caveat is that some other sensor observability functions, for example the row-wise sum function (6), are unbounded. Calculations with sensor axes are generally normalized and unaffected by scale and dimensions4.

4The exception is that there is potentially an indirect influence of scale and dimension dependency when sensor noise thresholding is used (Sec. III-C).
Thus, when analysing task space observability, our pro-
posed sensor observability analysis can be viewed as a more
generalized and more flexible analysis than the traditional
Jacobian-based analysis. Sensor observability analysis can also
be potentially used in the robot design phase to optimize the
placement of sensors to create redundancy or minimize the
number of sensors required, which is especially applicable to
soft robots [30], and will be the subject of future work. Despite
these comparisons with kinematic analysis using the Jacobian,
it must be stressed that sensor observability analysis cannot be
used for kinematic analysis and vice versa, and thus it is in
fact inappropriate to compare the two directly.
C. Sensor Observability Threshold and Sensor Noise
As sensor observability is a continuous quality measure,
it is difficult to pinpoint an exact threshold to state when observability has been lost. An ideal sensor with infinite sensitivity and zero noise would be able to provide a usable reading for all non-zero sensor observability values. In reality, sensors have finite sensitivity and are susceptible to sensor noise. A poorly observed axis will have a poor signal-to-noise ratio and will effectively be unable to provide meaningful values below a certain threshold $s^{i,*}_j$, denoted by the asterisk. Sensor observability for sensor $i$ in the j-th task-space axis could be flagged as lost when $s^i_j < s^{i,*}_j$, i.e., when the signal noise is greater than a defined required minimum detectable amount $\Phi^i_j$ as a function of the sensor sensitivity. A system could have multiple types of sensors, e.g., a mix of different models of load cells and joint torque sensors, each with their own specifications. There then exists an individual threshold value $s^{i,*}_j$ for each sensor $i$ and each axis $j$ (see Appendix for derivation):

$$s^{i,*}_j = \frac{\sigma^i_\epsilon}{\Phi^i_j} \qquad (14)$$
where $\sigma^i_\epsilon$ is the standard deviation of the sensor noise and $\Phi^i_j$ is the desired minimum observable quantity for sensor $i$ in the j-th task-space axis. $\Phi^i_j$ is a user-defined design parameter based on the task definition for each specific axis and the sensor specifications. For example, suppose we require that a system must be able to detect interaction forces of at least $\Phi^{LC} = F_{min} = 10$ N using a load cell (LC) with a noise level of $\sigma_\epsilon = 0.5$ N. The threshold for sensor observability would then be:

$$s^{LC,*} = \frac{\sigma_\epsilon}{\Phi^{LC}} = \frac{0.5\ \text{N}}{10\ \text{N}} = 0.05$$

where sensor observability is considered lost if $s^{LC} < 0.05$.
If $s^{i,*}_j > 1$, it means that it is impossible for that sensor to detect the minimum desired quantity $\Phi^i_j$, as $s^i_j$ has an upper bound of 1. Thus, to ensure $s^{i,*}_j \leq 1$, the constraint $\sigma_\epsilon \leq \Phi^i_j$ should be followed. Having $s^i_j$ below the threshold simply indicates that the minimum quantity may no longer be individually observable by the i-th sensor; conversely, it is possible for much larger values $F_{j,actual} \gg \Phi^i_j$ to be observable even when $s^i_j < s^{i,*}_j$. In addition, from the perspective of the entire system, other sensors may be able to compensate for any sensor with $s^i_j < s^{i,*}_j$ if they are positioned correctly.
Although $\Phi^i_j$ is a design parameter, it is not necessarily defined in a straightforward manner for different types of sensors. Recall that the force sensor transformation in (4) has both translational and rotational components $\tilde{s}^i = \begin{bmatrix} \tilde{s}^i_p & \tilde{s}^i_\theta \end{bmatrix}^T$. For joint torque sensors (JTS), measuring torque is a straightforward transformation as $\tilde{s}^{JTS}_\theta \approx |\hat{s}^{JTS}_\theta|$, such that $\Phi^{JTS}_\theta = \tau_{min}$, similar to $\Phi^{LC}_p = F_{min}$.
Conversely, measuring linear forces using a torque sensor is influenced by the moment arm, which cannot be ignored. We can calculate the effect of the cross product, as well as the signal amplification resulting from the moment arm, using the quantity $c^{JTS}_p = |\hat{s}^{JTS}_\theta \times r^{JTS}|$. Thus, for minimum linear forces measured by torque sensors in the j-th axis, we have $\Phi^{JTS}_{p,j} = F_{min}\, c^{JTS}_{p,j}$. Other definitions of $s^{i,*}_j$ are possible, and other external factors could also be factored into the noise term when calculating the minimum required sensor observability threshold, e.g., ultrasonic distance sensors interfering with each other.

Fig. 6. Special planar RRR robot with 3 revolute joints (without joint torque sensing) and 3 single-axis load cells, one located on each link. $\tilde{s}^1$ and $\tilde{s}^3$ are aligned perpendicular to their links while $\tilde{s}^2$ is parallel with the second link.
To factor in the sensor observability threshold, an extra step is added after calculating the transformed sensor axes $\tilde{s}^i = T_\square(\hat{s}^i, r^i)$ in (4), as shown in Algorithm 1:

$$\tilde{s}^i_j = f(\tilde{s}^i_j, s^{i,*}_j)\ \forall i, j \qquad (15)$$

$$\text{where } f(\tilde{s}^i_j, s^{i,*}_j) = \begin{cases} 0, & \text{if } \tilde{s}^i_j \leq s^{i,*}_j \\[1ex] \frac{\tilde{s}^i_j - s^{i,*}_j}{1 - s^{i,*}_j}, & \text{if } \tilde{s}^i_j > s^{i,*}_j \end{cases}$$

The piece-wise defined function is used to threshold each individual sensor along each task-space axis. Sensor observability values below the threshold are set to 0 and the range above the threshold is rescaled to between 0 and 1. All subsequent steps from (5) onwards in calculating the system sensor observability $s$ and index $o$ remain the same, where either $s_j = 0$ or $o = 0$ indicates that at least one task-space axis is no longer observable. The effect of sensor observability thresholding is similar to a deadband.
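The threshold (14) and the deadband-like rescaling (15) can be written compactly. The MATLAB sketch below uses the load cell numbers from the example above ($\sigma_\epsilon = 0.5$ N, $\Phi^{LC} = 10$ N); it is an illustration only.

% Minimal sketch: threshold s* of Eq. (14) and rescaling f of Eq. (15).
sigma_eps = 0.5;                   % sensor noise std. dev. [N]
Phi       = 10;                    % minimum detectable force [N]
s_star    = sigma_eps / Phi;       % threshold, Eq. (14): 0.05
f = @(s) (s > s_star) .* (s - s_star) ./ (1 - s_star);
disp(f([0.03 0.05 0.5 1.0]))       % below threshold -> 0; above rescaled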
It is important to note that, for the purpose of sensor observability analysis, noise is only considered to affect the calculation of $s^{i,*}_j$; its direct effect on the sensor readout itself and the need for filtering are not considered. In Sec. V, a real-world force reconstruction case demonstrates that the reconstructed value only matches the ground truth when $s^i_j$ is greater than a specific value, though this value could be unique to the particular robot, sensor type, joint configuration, experiment, and reconstruction method.
D. Non-Traditional Robot Architectures
To demonstrate the utility of sensor observability analysis, we simulate the planar RRR robot shown in Fig. 6, which has 3 revolute joints (without joint torque sensing) and 3 single-axis load cells, one located on each link. $\hat{s}^1$ and $\hat{s}^3$ are perpendicular to their respective links while $\hat{s}^2$ is parallel with the second link. Such a structure could be a potential design for cheaper upper-body rehabilitation robots [31]. The use of single-axis load cells aligned non-traditionally could be a cheaper alternative to using joint torque sensors or a multi-axis sensor. In this example, we are only interested in linear forces in the x- and y-axes and not torques in z. Thus, $n_t = 2$ and we have $\hat{s}^1 = \hat{s}^1_p = \begin{bmatrix} 0 & 1 \end{bmatrix}^T$, $\hat{s}^2 = \hat{s}^2_p = \begin{bmatrix} 1 & 0 \end{bmatrix}^T$, and $\hat{s}^3 = \hat{s}^3_p = \begin{bmatrix} 0 & 1 \end{bmatrix}^T$. We then obtain the following according to the force sensor transformation $T_f(\hat{s}^i, r^i)$:

$$\tilde{s}^i = T_f(\hat{s}^i, r^i) = |\hat{s}^i_p| \qquad (16)$$
Since these are only linear sensors, $\hat{s}^i_\theta$ from (4) does not exist. If we include the rotation into the task frame, we obtain:

$$\tilde{s}^1 = |R^1_0\, \hat{s}^1_p| = \begin{bmatrix} |-S_1| \\ |C_1| \end{bmatrix}, \quad \tilde{s}^2 = |R^2_0\, \hat{s}^2_p| = \begin{bmatrix} |C_1 C_2 - S_1 S_2| \\ |C_1 S_2 + C_2 S_1| \end{bmatrix},$$

$$\tilde{s}^3 = |R^3_0\, \hat{s}^3_p| = \begin{bmatrix} |-S_1 C_2 C_3 - C_1 S_2 C_3 - C_1 C_2 S_3 + S_1 S_2 S_3| \\ |C_1 C_2 C_3 - C_1 S_2 S_3 - S_1 S_2 C_3 - S_1 C_2 S_3| \end{bmatrix} \qquad (17)$$

where $S_i = \sin(q_i)$, $C_i = \cos(q_i)$, and $R^b_a$ is the rotation from frame $a$ to frame $b$. The sensor observability matrix becomes:

$$S = \begin{bmatrix} \tilde{s}^1 & \tilde{s}^2 & \tilde{s}^3 \end{bmatrix} = \begin{bmatrix} |-S_1| & |C_1 C_2 - S_1 S_2| & |-S_1 C_2 C_3 - C_1 S_2 C_3 - C_1 C_2 S_3 + S_1 S_2 S_3| \\ |C_1| & |C_1 S_2 + C_2 S_1| & |C_1 C_2 C_3 - C_1 S_2 S_3 - S_1 S_2 C_3 - S_1 C_2 S_3| \end{bmatrix} \qquad (18)$$
Conversely, if we calculate the standard kinematic Jacobian for linear motion in the x- and y-axes only, we obtain:

$$J = \begin{bmatrix} \hat{z}^1 \times r^1 & \hat{z}^2 \times r^2 & \hat{z}^3 \times r^3 \end{bmatrix} = \begin{bmatrix} -l_1 S_1 - l_2 S_{12} - l_3 S_{123} & -l_2 S_{12} - l_3 S_{123} & -l_3 S_{123} \\ l_1 C_1 + l_2 C_{12} + l_3 C_{123} & l_2 C_{12} + l_3 C_{123} & l_3 C_{123} \end{bmatrix} \qquad (19)$$

where the multiple subscripts indicate angle summation, e.g., $S_{123} = \sin(q_1 + q_2 + q_3)$. Clearly, we can see that the sensor observability matrix $S$ and the standard kinematic Jacobian $J$ no longer match, as the robot in Fig. 6 does not have a one-to-one mapping between joints and sensors. The system sensor observability $s$ and sensor observability index $o$ are then calculated using their respective equations described in Sec. II-C.
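The sketch below evaluates $S$ from (18) and $J$ from (19) numerically for the robot of Fig. 6; the joint angles and link lengths are assumed example values for illustration.

% Minimal sketch: S of Eq. (18) vs. J of Eq. (19) for the planar RRR
% robot of Fig. 6 (assumed joint angles and link lengths).
q = [0.3; 0.6; -0.4]; l = [0.4; 0.4; 0.3];
R = @(a) [cos(a) -sin(a); sin(a) cos(a)];      % planar rotation matrix
ang = cumsum(q);                               % absolute link angles
s_loc = [0 1 0;
         1 0 1];                               % column i: local axis of sensor i
S = zeros(2,3);
for i = 1:3
    S(:,i) = abs(R(ang(i)) * s_loc(:,i));      % Eqs. (16)-(18)
end
C = cos(ang); Sn = sin(ang);
pj = [0, l(1)*C(1), l(1)*C(1)+l(2)*C(2);       % joint positions
      0, l(1)*Sn(1), l(1)*Sn(1)+l(2)*Sn(2)];
pE = pj(:,3) + [l(3)*C(3); l(3)*Sn(3)];        % end effector position
J = zeros(2,3);
for k = 1:3
    rk = pE - pj(:,k);
    J(:,k) = [-rk(2); rk(1)];                  % z x r for planar joints, Eq. (19)
end
disp(S), disp(J)                               % S and J clearly differ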
IV. MAXIMIZING SENSOR OBSERVABILITY
In this section, our focus is on exploring the integration of
maximizing sensor observability in conjunction with solving a
kinematics task. We provide two separate formulations: 1) in
the null space of the Jacobian matrix and 2) as an optimization
problem.
A. Null Space Formulation
If a robot is considered kinematically redundant, then the null space of the Jacobian matrix can be exploited to satisfy secondary tasks [32], e.g., to maximize sensor observability:

$$\dot{q} = J^\dagger \dot{x} + (I - J^\dagger J)\,\dot{q}^* \qquad (20)$$

where $\dot{x}$ is the end effector Cartesian velocity, $J^\dagger$ is the right pseudoinverse of $J$, $I$ is the identity matrix, and $\dot{q}^*$ are the joint velocities related to the secondary task. The first term is the solution to the kinematics task that minimizes the norm of the joint velocities, while the second term can be used to maximize a secondary task. To compare the effect of using different redundancy resolution strategies, we simulate the 3-DOF RRR robot of Sec. III-D in MATLAB to follow a sinusoidal trajectory using velocity control. The results are shown in Figs. 7 and 8. The simulation script is provided online5.

5https://github.com/chrisywong/SensorObservabilityAnalysisDataset
1) Minimize joint motion: In a first trial, the joint trajectories simply use the right pseudoinverse of the Jacobian $J^\dagger$ to minimize joint motion without a secondary task ($\dot{q}^* = 0$) such that (20) becomes:

$$\dot{q} = J^\dagger \dot{x} \qquad (21)$$

The resulting motion is shown in Fig. 7(a). At $t = 0.5$ s, the robot passes through a sensor observability singularity and the sensor observability index $o \rightarrow 0$. In this sensor observability singular configuration, linear forces in the x-axis cannot be detected by the sensors, which may lead to undesirable consequences. It is important to note that although the robot is in a sensor observability singular configuration, it is not in a kinematically singular configuration, as $w_k \neq 0$. Once the robot moves past the sensor observability singularity at $t > 0.5$ s, forces in x are observable once again.
2) Maximize kinematic manipulability: In the second trial, the secondary task in (20) is used to maximize the kinematic manipulability of the end effector. As such, $\dot{q}^*$ is defined as follows using the partial derivatives of the manipulability index $w_k$, as defined in (2), w.r.t. the joints $q$:

$$\dot{q}^* = k_0 \frac{\partial w_k}{\partial q} \qquad (22)$$

where $k_0$ is a positive scalar coefficient. The resulting motion is shown in Fig. 7(b). Although the effect is not overly pronounced, there are small differences in joint trajectories compared to the first trial that result in an overall increase in kinematic manipulability throughout the motion, as well as a higher maximum $w_k$. There is also the unintended effect that the robot no longer passes through the sensor observability singularity, but sensor observability is not explicitly maximized. The discontinuities in the $o$ profiles are inflection points in the robot motion that cause the sensor axes to change directions.
3) Maximize sensor observability: In the third trial, the secondary task in (20) is used to maximize sensor observability instead of kinematic manipulability. Joint velocities in the Jacobian null space $\dot{q}^*$ now use the partial derivatives of the sum-based sensor observability index $o_{sum}$, defined in (9):

$$\dot{q}^* = k_0 \frac{\partial o_{sum}}{\partial q} \qquad (23)$$

The joint trajectories are now optimized to maximize overall sensor observability while maintaining the desired EE trajectory along the sinusoidal path. The resulting motion is shown in Fig. 7(c). Both $o_{sum}$ and $o_{max}$ have much higher values on average throughout the motion compared to the other two trials, as well as a higher maximum value and smaller dips in the middle. Conversely, $w_k$ is lower than in the other two trials as an unintended consequence. As expected, maximizing kinematic manipulability in the second trial and maximizing sensor observability in the third trial do not yield the same results, as they are different objectives.
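A single control step of (20) with the secondary task (23) can be sketched as below in MATLAB for the planar RRR robot of Sec. III-D. This is a minimal sketch, not the implementation from the simulation script above: the joint state, desired velocity, gain, and the finite-difference approximation of the gradient are all assumptions for illustration.

% Minimal sketch: one step of Eq. (20) with the secondary task (23).
q  = [0.3; 0.6; -0.4];             % current joint angles (assumed)
xd = [0.02; 0.00];                 % desired EE velocity (assumed)
l  = [0.4; 0.4; 0.3];              % assumed link lengths
k0 = 0.5; h = 1e-6;
grad_o = zeros(3,1);
for k = 1:3                        % finite-difference gradient of o_sum
    dq = zeros(3,1); dq(k) = h;
    grad_o(k) = (osum_of(q+dq) - osum_of(q-dq)) / (2*h);
end
J  = jacobian_of(q, l);
Jp = pinv(J);                      % right pseudoinverse J^dagger
qdot = Jp*xd + (eye(3) - Jp*J)*(k0*grad_o);    % Eqs. (20) and (23)

function o = osum_of(q)            % o_sum for the robot of Fig. 6
    R = @(a) [cos(a) -sin(a); sin(a) cos(a)];
    s_loc = [0 1 0; 1 0 1]; ang = cumsum(q); S = zeros(2,3);
    for i = 1:3, S(:,i) = abs(R(ang(i))*s_loc(:,i)); end
    o = prod(sum(S,2));            % Eqs. (16)-(18) and (9)
end
function J = jacobian_of(q, l)     % planar position Jacobian, Eq. (19)
    ang = cumsum(q); C = cos(ang); Sn = sin(ang);
    pj = [0, l(1)*C(1), l(1)*C(1)+l(2)*C(2);
          0, l(1)*Sn(1), l(1)*Sn(1)+l(2)*Sn(2)];
    pE = pj(:,3) + [l(3)*C(3); l(3)*Sn(3)];
    J = zeros(2,3);
    for k = 1:3, rk = pE - pj(:,k); J(:,k) = [-rk(2); rk(1)]; end
end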
Fig. 7. Motion of a custom 3-DOF robot using the Jacobian null space to (a) minimize joint motion, (b) maximize kinematic manipulability $w_k$, and (c) maximize sensor observability $o$, and (d) maximizing sensor observability $o$ as an optimization problem using quadratic programming. The black square is base joint 1, while joints 2 and 3 are blue circles. Robot links are thin blue lines while the linear sensor axes are thick orange lines. The blue ellipse is the sensor observability ellipsoid using $s_{sum}$ and the pink ellipse is the kinematic manipulability ellipsoid. The dotted black line is the desired EE trajectory. In (d), the black-outlined dot is the desired EE position at that time point, and the red line and red dot are respectively the actual EE trajectory and position resulting from the relaxation vector $\delta$. The plots show the evolution of the sensor observability indices $o_{sum}$ and $o_{max}$ and the kinematic manipulability index $w_k$ through the motion. The sum index $o_{sum}$ and the kinematic manipulability index $w_k$ are normalized according to their global maxima (4.5 and 1.65, respectively) for plotting purposes.
4) Maximize sensor observability in specific axes: Sensor observability can also be maximized in specific axes using the system sensor observability $s$ rather than as a whole using the sensor observability index $o$. Fig. 8 shows the changes in the robot trajectory when only a single sensor observability axis is included as the secondary task in (20). Rather than using $\partial o / \partial q$ as in (23), only a specific axis $s_j$ is used:

$$\dot{q}^* = k_0 \frac{\partial s_j}{\partial q} \qquad (24)$$

In this case, we use the sum function (6) to calculate $s_x$ and $s_y$. As expected, maximizing $s_x$ skews the sensor observability ellipsoid in the x-axis, as seen in Fig. 8(a). While maximizing $s_y$ in Fig. 8(c) has certain similarities to the first trial shown in Fig. 7(a), the joint trajectories differ slightly. Ellipsoid shaping to simultaneously achieve a specific $s$ is also possible using the methods in [20] and will be the subject of future work.
Fig. 8. Motion of a 3-DOF robot where sensor observability is maximized only in the x-axis using (a) the null space formulation and (b) an optimization problem formulation using quadratic programming, and (c) only in the y-axis using the null space formulation. Given the similarities between the two methods for y-axis-only maximization, only the null space formulation is shown. Plots show the sensor observability in the x- and y-axes, $s_x$ and $s_y$ respectively, using the sum method in (6).
To further illustrate the utility of sensor observability analysis in a more practical case, we simulate the Baxter robot when one of its sensors is removed, for example due to a sensor malfunction or removal during cost- or weight-cutting measures. The robot begins in the posture at $t = t_0$ shown in Fig. 9(a), which is similar to the posture shown in Fig. 4(a) but not identical. Throughout this simulation, the robot holds the end effector position and uses the null-space projection equations (20)-(23) to maximize different performance metrics. During the period $t = t_0$ to $t_2$, the robot uses the null-space projection to increase the kinematic manipulability $w_k$. As expected, $w_k$ increases until it reaches a local maximum and plateaus, as shown in Fig. 9(b). At $t = t_1$, we simulate the removal of the joint torque sensor at joint $q_4$ such that $\hat{s}^4 = 0$, but joint $q_4$ is still able to move and be controlled. As a result, there is a large drop in the sensor observability index $o$ at $t = t_1$. Since only $w_k$ is being maximized and there are no kinematic changes to the robot joints, the controller does not adjust to compensate for the deficient sensor, as expected. From $t = t_2$ onwards, the robot switches the null-space projection to optimize the sensor observability $o_{sum}$ instead. The robot begins to compensate for the sensor deficiency and increases the sensor observability index $o$ by adjusting the orientation of the remaining sensor axes, eventually trending towards a maximum at $t = t_4$. Despite the fact that $o_{sum}$ is the target of optimization, $w_k$ only changes slightly. This experiment demonstrates the different purposes of the kinematic manipulability and sensor observability indices and how the sensor observability indices can be used to ensure that sensor observability is maximized and that interaction forces in the task space can be observed properly. This ability to at least temporarily recover from such a sensor malfunction is especially critical if the robot must continue to function and is positioned beyond immediate reach for repair, for instance in teleoperated situations or for robots out in the field.

Fig. 9. (a) Simulation of Baxter robot posture optimization using null-space projection but with a sensor deficiency. Blue lines and circles are the robot links and joints, respectively, with the black square base and red diamond end effector. The black dotted lines are joint axes and the solid yellow lines are sensor axes (as Baxter uses joint torque sensors, each joint and sensor pair is collinear); (b) Plot of various performance indices and the joint angles. $o_{sum}$ is normalized to the maximum of 141.1685. Between $t = t_0$ and $t_2$, the robot uses the null-space projection to maximize the kinematic manipulability $w_k$. At $t = t_1$, we simulate a removal of the joint torque sensor at joint $q_4$ such that the sensor no longer functions, but the joint is still able to move and be controlled. Between $t = t_2$ and $t_4$, the robot switches the null-space projection to maximize the sensor observability $o_{sum}$ instead.
B. Optimization Formulation
Maximizing sensor observability can also be formulated
as an optimization problem. Leveraging Lemma 1 in [17],
and taking into consideration that the sensor observability is
inherently a positive function, we formulate the following optimization problem:

$$\begin{aligned} \min_{\dot{q},\,\delta,\,\epsilon_o} \quad & \frac{1}{2}\dot{q}^T Q \dot{q} + \frac{1}{2}\delta^T Q_\delta \delta + \frac{1}{2}\alpha\,\epsilon_o^2 \\ \text{subject to} \quad & J\dot{q} + \delta = \dot{x} \\ & o - T(\nabla o)^T \dot{q} = \epsilon_o \\ & b^- \leq A\dot{q} \leq b^+ \\ & \dot{q}^- \leq \dot{q} \leq \dot{q}^+ \\ & \delta^- \leq \delta \leq \delta^+ \end{aligned} \qquad (25)$$

where $\delta \in \mathbb{R}^{n_t}$ is a relaxation vector6, and $Q$ and $Q_\delta$ are
positive semidefinite matrices7 defined as in [17], $\alpha$ is a positive coefficient, $o$ is the sensor observability index or the sensor observability in a specific axis, $\nabla o = \frac{\partial o}{\partial q}$ is the gradient of $o$, $T$ is the time period of the robot control loop, $(\cdot)^-$ and $(\cdot)^+$ are respectively the lower and upper limits of any particular variable $(\cdot)$, and $A \in \mathbb{R}^{m \times n_q}$ and $b \in \mathbb{R}^m$ can serve various purposes, such as collision avoidance or ensuring that the end effector remains within the robot's visual field [33]. $m$ is therefore dependent on the number of inequality constraints.

The optimization problem (25) can be reformulated as the following standard quadratic programming (QP) problem:

$$\begin{aligned} \min_{Z \in \mathbb{R}^{n_q+n_t+1}} \quad & \frac{1}{2} Z^T \mathcal{Q} Z \\ \text{subject to} \quad & \mathcal{J} Z = \begin{bmatrix} \dot{x} \\ o \end{bmatrix} \\ & B^- \leq \mathcal{A} Z \leq B^+ \\ & Z^- \leq Z \leq Z^+ \end{aligned} \qquad (26)$$

where:

$$Z = \begin{bmatrix} \dot{q} \\ \delta \\ \epsilon_o \end{bmatrix}, \quad \mathcal{Q} = \begin{bmatrix} Q & 0_{n_q \times n_t} & 0_{n_q \times 1} \\ 0_{n_t \times n_q} & Q_\delta & 0_{n_t \times 1} \\ 0_{1 \times n_q} & 0_{1 \times n_t} & \alpha \end{bmatrix}, \quad \mathcal{J} = \begin{bmatrix} J_{n_t \times n_q} & I_{n_t \times n_t} & 0_{n_t \times 1} \\ T(\nabla o)^T & 0_{1 \times n_t} & 1 \end{bmatrix},$$

$$\mathcal{A} = \begin{bmatrix} A & 0_{m \times (n_t+1)} \end{bmatrix}, \quad B^+ = b^+, \quad B^- = b^-, \quad Z^+ = \begin{bmatrix} \dot{q}^+ \\ \delta^+ \\ \epsilon_o^+ \end{bmatrix}, \quad Z^- = \begin{bmatrix} \dot{q}^- \\ \delta^- \\ 0 \end{bmatrix}$$

where $\epsilon_o^+ \gg 1$ is a user-defined coefficient. As a consequence, any off-the-shelf QP solver can be used to solve (26), thereby solving the optimization problem in (25). The resulting robot motion, using the same scenario as Sec. IV-A, is shown in Fig. 7(d) and implemented using quadprog() in MATLAB.

6E.g., allowing the position and/or orientation of the end effector to vary from the desired trajectory. Please refer to [17] for in-depth explanations of the relaxation vector.

7$Q_\delta$ is a non-constant matrix that varies with the motion and allows the relaxation vector to take on different profiles. For example, it can force the EE motion to match the desired trajectory near the beginning and end of the trajectory, but allow the EE to deviate from the desired trajectory in the middle of the motion.
There are clear differences when compared to maximizing sensor observability as a secondary task using the null space of the Jacobian (Sec. IV-A3 and Fig. 7(c)). Posing sensor observability as an optimization problem with the relaxation vector generates a different trajectory that results in a flatter $o_{sum}$ profile with a slightly higher mean ($\bar{o}_{sum,\,opt} = 3.8856$ vs. $\bar{o}_{sum,\,null} = 3.7817$) at the expense of deviating from the desired trajectory ($\bar{x}_{err} = 0.074$ m). Similar differences are obtained when optimizing sensor observability only for a single axis, as shown in Fig. 8(b), when compared to the null space formulation (24) in Fig. 8(a). In both cases, the relaxation vector $\delta^+ = -\delta^- = [0.1, 0.1]^T$ allows for deviations from the prescribed path in order to allow for slightly different maximization profiles of $o$.
If the end effector should not deviate from the prescribed path, then the relaxation limits should be set close to zero: $\delta^+ = -\delta^- = [\eta, \eta]^T$ with $0 < \eta \ll 1$. The resulting motion is then similar to the null-space formulation, but (25) still allows other optimization variables to be used if desired.
Fig. 10. Baxter robot experiment with a sensor deficiency at $\hat{s}_4$. Blue and pink ellipsoids are the force and torque sensor observability ellipsoids, respectively. (a) Initial position: joints 2, 6, and 7 form a line that is parallel to the x-axis. As $\hat{s}_4$ is the only sensor that can detect forces along the x-axis, the robot is in a sensor observability singular configuration, as witnessed by the collapsed force sensor observability ellipsoid. (b) Final position for optimized sensor observability, similar to $t = t_4$ of Fig. 9(a). (c) Reconstructed end-effector forces with ground truth (all sensors) and with the sensor deficiency (without $\hat{s}_4$). (d) Sensor observability index $o$ and system sensor observability vector $s$ throughout the experiment. The $s$ values for the other axes are not shown as they are well above 0 and not of interest.
V. PRACTICAL IMPLICATIONS OF SENSOR OBSERVABILITY
In this section, we perform a physical interaction experiment using the real Baxter robot and demonstrate the practical implications of sensor observability singularity and its effect on force reconstruction over a range of sensor observability values, as seen in Fig. 10. Following the previous sensor deficiency example shown in Fig. 9, the initial joint configuration places joints 2, 6, and 7 in a line that is parallel to the x-axis. As a result, only joint 4 is able to detect forces in $x$. Suppose that the sensor deficiency at $\hat{s}_4$ now occurs immediately at $t = 0$ s. The deficiency renders the initial configuration a sensor observability singularity, as none of the remaining sensors are capable of sensing forces along the x-axis, which is shown by the collapsed force sensor observability ellipsoid in Fig. 10(a). In this experiment, forces are applied to the end effector as the robot moves from the sensor observability singular configuration in Fig. 10(a) to a more optimized pose shown in Fig. 10(b).
The goal is to reconstruct the end-effector forces $F_{EE}$ using the equation $\tau = J^T F_{EE}$, where the joint torques $\tau$ and the Jacobian transpose $J^T$ are known. The deficiency is modelled by removing the element related to the 4th joint to obtain the deficient joint torque vector $\tau_{def} = [\tau_1 \; \tau_2 \; \tau_3 \; \tau_5 \; \tau_6 \; \tau_7]^T$, and removing the corresponding 4th column of the Jacobian to obtain the deficient Jacobian $J_{def} \in \mathbb{R}^{6 \times 6}$. A least-squares approximation of the end-effector forces is performed using the remaining joint torque readouts via $\tau_{def} = J_{def}^T F_{EE}$ and compared to the ground truth using all sensors, $\tau = J^T F_{EE}$. The MATLAB function lsqr() is used with a higher tolerance and a preconditioner matrix to stabilize the solution against the small singular values of $J_{def}$. The evolution of the reconstructed end-effector forces as the robot transitions between the two poses is shown in Fig. 10(c).
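A minimal MATLAB sketch of this reconstruction follows; the variable names are ours, and the diagonal column-scaling preconditioner is only one plausible choice, as the paper does not specify which preconditioner was used.

% Least-squares reconstruction of F_EE from deficient torque readings,
% assuming a 6x7 Jacobian J and a 7x1 torque vector tau are available.
idx     = [1 2 3 5 6 7];        % drop the deficient 4th joint
tau_def = tau(idx);             % deficient torques, 6x1
J_def   = J(:, idx);            % deficient Jacobian, 6x6

Asys = J_def.';                 % solve tau_def = J_def' * F_EE
% Hypothetical diagonal preconditioner built from column norms,
% floored to avoid division by near-zero values.
M = diag(max(sqrt(sum(Asys.^2, 1)).', 1e-6));
F_EE = lsqr(Asys, tau_def, 1e-3, 100, M);   % loose tolerance for stability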
As expected, $\tau_{def}$ and $J_{def}$ are unable to reconstruct the forces in $x$ in the sensor observability singular configuration from $t = 0$ s to $t = 7.2$ s. Conversely, the forces in $y$ and $z$ are correctly reconstructed throughout the entirety of the experiment. Starting from $t = 7.2$ s, sensor observability optimization repositions the remaining sensors to compensate for the deficient one and regain sensing in the x-axis, completing the transition at $t = 17.2$ s. As the robot moves away from the singular position, forces in $x$ slowly become visible starting from $t = 7.6$ s (4% into the robot motion, with $s_{sum,F_x} \approx 0.14$). The reconstructed forces in $x$ eventually coincide with the ground truth starting from $t = 11.4$ s; from then onwards, all forces are reconstructed properly for the sensor-deficient case. In Fig. 10(d), the sensor observability indices and system sensor observability vectors are all close to 0 until the robot begins to move away from the singular configuration. While $F_x$ is fully reconstructed when $s_{sum,F_x} \approx 0.66$ at $t = 11.4$ s, this value does not necessarily represent a universal threshold and could be unique to this particular robot, sensor type, joint configuration, experiment, and reconstruction method.
VI. CONCLUSION
In this work, we introduce the novel concept of sensor observability for analysing the quality of a specific joint configuration for observing task-space quantities. We believe that this is the first work to quantify the cumulative effect of distributed axial sensor positioning in multi-DOF articulated robots; it provides a novel performance metric as well as the base framework for developing further tools related to sensor analysis. In special cases related to force sensing, there exist parallels between traditional kinematic analysis and the proposed sensor observability analysis, but sensor observability has certain advantages related to generalization. A deeper analysis shows the need to distinguish between the two and to use sensor observability to augment kinematic manipulability, particularly in irregular robot structures where joints and sensors do not have a one-to-one mapping. While sensor observability analysis is most intuitively applied to force sensing, the concept may potentially be applied to other axial sensors such as accelerometers or distance sensors.
Future work, as mentioned throughout the paper, will include further generalization of the concept to other sensor types, unidirectional sensors, and sensor performance at joint limits. The concept will also be extended to multi-contact robots, multi-limbed robots, flexible/soft robots, and floating-base robots. While we have demonstrated the optimization of task-space observability along a single direction, future work would extend the concept to sensor observability ellipsoid shaping to achieve specific sensor observability profiles, as shown in [20] for manipulability ellipsoid shaping. Additionally, possible parallels between sensor observability and controllability/observability in the state-space sense will also be explored in future work. Lastly, sensor observability analysis has the potential to be used in the robot design phase to optimize the placement of sensors to create redundancy or to minimize the number of sensors required; this will be the subject of future work.
REFERENCES
[1] P. Salaris, M. Cognetti, R. Spica, and P. R. Giordano, “Online optimal
perception-aware trajectory generation,” IEEE Transactions on Robotics,
vol. 35, no. 6, pp. 1307–1322, Dec 2019.
[2] H. Hu, Z. Liu, S. Chitlangia, A. Agnihotri, and D. Zhao, “Investigating
the impact of multi-lidar placement on object detection for autonomous
driving,” in Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition (CVPR), June 2022, pp. 2550–2559.
[3] I. Bonev, J. Ryu, S.-G. Kim, and S.-K. Lee, “A closed-form solution
to the direct kinematics of nearly general parallel manipulators with
optimally located three linear extra sensors,” IEEE Transactions on
Robotics and Automation, vol. 17, no. 2, pp. 148–156, April 2001.
[4] M. Van Damme, P. Beyl, B. Vanderborght, V. Grosu, R. Van Ham,
I. Vanderniepen, A. Matthys, and D. Lefeber, “Estimating robot end-
effector force from noisy actuator torque measurements,” in 2011 IEEE
International Conference on Robotics and Automation, May 2011, pp.
1108–1113.
[5] L. D. Phong, J. Choi, and S. Kang, “External force estimation using
joint torque sensors for a robot manipulator,” in 2012 IEEE International
Conference on Robotics and Automation, May 2012, pp. 4507–4512.
[6] C.-h. Wu and R. P. Paul, “Manipulator compliance based on joint
torque control,” in 1980 19th IEEE Conference on Decision and Control
including the Symposium on Adaptive Processes, Dec 1980, pp. 88–94.
[7] E. Berger and A. Uhlig, “Feature-based deep learning of proprioceptive
models for robotic force estimation,” in 2020 IEEE-RAS 19th Interna-
tional Conference on Humanoid Robots (Humanoids), 2021, pp. 4258–
4264.
[8] C. Gosselin and J. Angeles, “Singularity analysis of closed-loop kine-
matic chains,” IEEE Transactions on Robotics and Automation, vol. 6,
no. 3, pp. 281–290, June 1990.
[9] S. Stavridis, P. Falco, and Z. Doulgeri, “Pick-and-place in dynamic
environments with a mobile dual-arm robot equipped with distributed
distance sensors,” in 2020 IEEE-RAS 19th International Conference on
Humanoid Robots (Humanoids), 2021, pp. 4258–4264.
[10] C. Xiao, S. Xu, W. Wu, and J. Wachs, “Active multiobject exploration
and recognition via tactile whiskers,” IEEE Transactions on Robotics,
vol. 38, no. 6, pp. 3479–3497, Dec 2022.
[11] A. Albini, F. Grella, P. Maiolino, and G. Cannata, “Exploiting distributed
tactile sensors to drive a robot arm through obstacles,” IEEE Robotics
and Automation Letters, vol. 6, no. 3, pp. 4361–4368, July 2021.
[12] T. Laliberté and C. Gosselin, “Low-impedance displacement sensors for
intuitive physical human–robot interaction: Motion guidance, design,
and prototyping,” IEEE Transactions on Robotics, vol. 38, no. 3, pp.
1518–1530, June 2022.
[13] L. Hawley, R. Rahem, and W. Suleiman, “External force observer for
small- and medium-sized humanoid robots,” International Journal of
Humanoid Robotics, vol. 16, no. 06, pp. 1–25, 2019.
[14] S. Patel and T. Sobh, “Manipulator performance measures - a
comprehensive literature survey,” Journal of Intelligent & Robotic
Systems, vol. 77, no. 3, pp. 547–570, 2015. [Online]. Available:
https://doi.org/10.1007/s10846-014-0024-y
[15] T. Yoshikawa, “Manipulability of robotic mechanisms,” The Interna-
tional Journal of Robotics Research, vol. 4, no. 2, pp. 3–9, 1985.
[16] M. Spong, S. Hutchinson, and M. Vidyasagar, Robot Modeling and
Control. Wiley, 2020.
[17] K. Dufour and W. Suleiman, “On maximizing manipulability index
while solving a kinematics task,” Journal of Intelligent & Robotic
Systems, vol. 100, no. 1, pp. 3–13, Oct. 2020.
[18] S. Chiu, “Control of redundant manipulators for task compatibility,”
in Proceedings. 1987 IEEE International Conference on Robotics and
Automation, vol. 4, March 1987, pp. 1718–1724.
[19] N. Jaquier, L. Rozo, D. G. Caldwell, and S. Calinon, “Geometry-
aware tracking of manipulability ellipsoids,” in Proceedings of Robotics:
Science and Systems, Pittsburgh, Pennsylvania, June 2018.
[20] ——, “Geometry-aware manipulability learning, tracking, and transfer,”
The International Journal of Robotics Research, vol. 40, no. 2-3, pp.
624–650, 2021.
[21] Y. Gu, B. Yao, and C. George Lee, “Feasible center of mass dynamic
manipulability of humanoid robots,” in 2015 IEEE International Con-
ference on Robotics and Automation (ICRA), May 2015, pp. 5082–5087.
[22] M. Azad, J. Babič, and M. Mistry, “Dynamic manipulability of the center
of mass: A tool to study, analyse and measure physical ability of robots,”
in 2017 IEEE International Conference on Robotics and Automation
(ICRA), May 2017, pp. 3484–3490.
[23] A. Bicchi and D. Prattichizzo, “Manipulability of cooperating robots
with unactuated joints and closed-chain mechanisms,” IEEE Transac-
tions on Robotics and Automation, vol. 16, no. 4, pp. 336–345, Aug
2000.
[24] I. Gravagne and I. Walker, “Manipulability, force, and compliance
analysis for planar continuum manipulators,” IEEE Transactions on
Robotics and Automation, vol. 18, no. 3, pp. 263–273, June 2002.
[25] C. Y. Wong and W. Suleiman, “Sensor observability index: Evaluating
sensor alignment for task-space observability in robotic manipulators,”
in 2022 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), 2022, pp. 1276–1282.
[26] S. Samadi, J. Roux, A. Tanguy, S. Caron, and A. Kheddar, “Humanoid
control under interchangeable fixed and sliding unilateral contacts,”
IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 4032–4039,
2021.
[27] J. Denavit and R. S. Hartenberg, “A kinematic notation for lower-pair
mechanisms based on matrices,” Trans. ASME E, Journal of Applied
Mechanics, vol. 22, pp. 215–221, June 1955.
[28] R. L. Williams II, “Baxter humanoid robot kinematics,” Ohio University,
Tech. Rep., 2017. [Online]. Available: https://www.ohio.edu/mechanical-
faculty/williams/html/PDF/BaxterKinematics.pdf
[29] F. Wu, L. Maréchal, A. Vibhute, S. Foong, G. S. Soh, and K. L.
Wood, “A compact magnetic directional proximity sensor for spherical
robots,” in 2016 IEEE International Conference on Advanced Intelligent
Mechatronics (AIM), July 2016, pp. 1258–1264.
[30] A. Spielberg, A. Amini, L. Chin, W. Matusik, and D. Rus, “Co-learning
of task and sensor placement for soft robotics,” IEEE Robotics and
Automation Letters, vol. 6, no. 2, pp. 1208–1215, April 2021.
[31] A. Demofonti, G. Carpino, L. Zollo, and M. J. Johnson, “Affordable
robotics for upper limb stroke rehabilitation in developing countries: A
systematic review,” IEEE Transactions on Medical Robotics and Bionics,
vol. 3, no. 1, pp. 11–20, 2021.
[32] B. Siciliano, L. Sciavicco, L. Villani, and G. Oriolo, Robotics: Mod-
elling, Planning and Control, 1st ed. Springer-Verlag, 2008.
[33] K. Dufour, J. Ocampo-Jimenez, and W. Suleiman, “Visual–spatial atten-
tion as a comfort measure in human–robot collaborative tasks,” Robotics
and Autonomous Systems, vol. 133, p. 103626, 2020.
Christopher Yee Wong (M'15) received his B.Eng. ('11) and M.Eng. ('14) from McGill University in Montreal, Canada, and his Ph.D. ('17) in mechanical engineering from the University of Toronto in Toronto, Canada. He was a postdoctoral researcher at AIST in Tsukuba, Japan ('18-'19), at LIRMM in Montpellier, France ('19), and at the Université de Sherbrooke in Sherbrooke, Canada ('19-'23), and has been at McGill University since 2023. His research focuses on improving robot cognition in physical human-robot interaction with regards to safety, human analysis, and intention detection related to touch.
Wael Suleiman received the Master's and Ph.D. degrees in automatic control from Paul Sabatier University, Toulouse, France, in 2004 and 2008, respectively. He was a postdoctoral researcher at AIST, Tsukuba, Japan, from 2008 to 2010, and at Heidelberg University, Germany, from 2010 to 2011. He joined the University of Sherbrooke, Quebec, Canada, in 2011, and is currently a Full Professor in the Electrical and Computer Engineering Department. His research interests include collaborative and humanoid robots, motion planning, nonlinear system identification and control, and numerical optimization.
APPENDIX
A. Sensor observability threshold for sensor noise
Fig. 11. (a) Simplified example for the derivation of the sensor observability threshold in the x-axis; (b) flowchart of the formulation.
To illustrate the derivation of the sensor observability threshold, we examine the scenario shown in Fig. 11, where an external force $F_{ext}$ is applied to the 1-DOF robot with a single load cell $\hat{s}_{LC}$ that is aligned with the link. The analysis considers only the x-axis for simplicity. The force projected along the axis of the load cell, which is what the sensor sees, is $\hat{F}_{in} = F_{ext}|C_\theta|$, where $C_\theta = \cos(\theta)$. The output signal of the load cell $\hat{F}_{out}$ is corrupted by sensor noise $\sigma_\epsilon$, a property of the sensor hardware, resulting in $\hat{F}_{out} = \hat{F}_{in} + \sigma_\epsilon$. In order to reconstruct the applied external force from the sensor output, the output of the sensor must be divided by the angle offset of the sensor $C_\theta$. The reconstructed external force $\tilde{F}$ is then:

\begin{equation}
\tilde{F} = \frac{\hat{F}_{out}}{|C_\theta|} = \frac{\hat{F}_{in} + \sigma_\epsilon}{|C_\theta|} = \frac{F_{ext}|C_\theta| + \sigma_\epsilon}{|C_\theta|} = F_{ext} + \frac{\sigma_\epsilon}{|C_\theta|}
\tag{27}
\end{equation}
As the alignment of the sensor axis is equivalent to the angle offset, i.e., $|C_\theta| = s^{LC}_x$, the second term is then rewritten as:

\begin{equation}
\tilde{F} = F_{ext} + \frac{\sigma_\epsilon}{s^{LC}_x}
\tag{28}
\end{equation}
Thus, sensing the external force and reconstructing it precisely is highly dependent on the degree of alignment between $F_{ext}$ and $s^{LC}_x$. If the alignment is poor, then the noise term $\sigma_\epsilon$ will be amplified by $\frac{1}{s^{LC}_x}$ and mask any small $F_{ext}$. Thus, for a given alignment $s^{LC}_x$, $\Phi$ is considered the minimum force that can be detected:

\begin{equation}
\Phi = \frac{\sigma_\epsilon}{s^{LC}_x}
\tag{29}
\end{equation}
By considering (29) in reverse, we set $\Phi_{min}$ as the minimum force that must be detected and then determine the sensor observability threshold $s^{LC,*}_x$:

\begin{equation}
s^{LC,*}_x = \frac{\sigma_\epsilon}{\Phi_{min}}
\tag{30}
\end{equation}
While this derivation is for load cells, formulations for other
sensor types may differ and will be explored in future work.
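As a brief numerical illustration of (29) and (30), the following MATLAB sketch uses assumed values (a 0.5 N noise floor and a 5 N minimum detectable force requirement, neither taken from the paper):

% Worked example of the sensor observability threshold (29)-(30).
sigma_eps = 0.5;                 % assumed sensor noise [N]
Phi_min   = 5.0;                 % assumed smallest force that must be detected [N]

s_star = sigma_eps / Phi_min;    % threshold from (30): s_star = 0.1

% Minimum detectable force from (29) for several alignments s_x:
s_x = [1.0, 0.5, 0.1, 0.05];
Phi = sigma_eps ./ s_x;          % = [0.5, 1.0, 5.0, 10.0] N
% Alignments below s_star = 0.1 cannot reveal a 5 N force above the noise.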