A Generalized Approach for Inconsistency Detection in Data Fusion
from Multiple Sensors
Manish Kumar, Devendra P. Garg, and Randy A. Zachery
Abstract— This paper presents a sensor fusion strategy
based on a Bayesian method that can identify the inconsistency in
sensor data so that spurious data can be eliminated from the
sensor fusion process. The proposed method adds a term to the
commonly used Bayesian technique that represents the
probabilistic estimate corresponding to the event that the data
is not spurious conditioned upon the data and the true state.
This term has the effect of increasing the variance of the
posterior distribution when data from one of the sensors is
inconsistent with respect to the other. The proposed strategy
was verified with the help of extensive simulations. The
simulations showed that the proposed method was able to
identify inconsistency in sensor data and also confirmed that
the identification of inconsistency led to a better estimate of
the desired state variable.
I. INTRODUCTION
The primary goal of a multi-sensor system is to combine
information from a multitude of sources in a coherent and
synergistic manner to obtain a robust, accurate and
consistent description of quantities of interest in the
environment. There are several issues that arise when fusing
information from multiple sources, some of which include
data association, sensor uncertainty, and management of
data from multiple sources. The most fundamental of these
issues is the inherent uncertainties in the sensor
measurements. The uncertainties in sensors arise not only
from the impreciseness and noise in the measurements, but
are also caused by the ambiguities and inconsistencies
present in the environment, and from the inability to
distinguish between them. The strategies used to fuse data
from these sensors should be able to model such
uncertainties, take into account the environmental
parameters that affect sensor measurements, and fuse
different types of information to obtain a consistent
description of the environment. Some of the techniques used
in the literature for sensor fusion include Dempster-Shafer
theory for evidential reasoning [1-2], fuzzy logic [3-4],
Bayesian approach [5] and statistical techniques [6] such as
Kalman filter [7-9].
Another cause of uncertainty in sensor measurements is
that sensors frequently provide spurious data. Such spurious
data in sensor measurements are difficult to model because
they do not arise due to inherent noise or other sources of
uncertainties mentioned above. The cause of these spurious
measurements can be permanent failures, short duration
spike faults, or nascent (slowly developing) failures. Most of
the experimentally developed models make use of data
which are not spurious, and represent uncertainties arising
only from sensor noise and inherent limitations. Fusing
spurious observations with the correct ones often leads to
inaccurate estimation which can eventually lead to a
potentially damaging action by the control system. Hence, a
sensor validation scheme is necessary to identify/predict
spurious measurements so that they can be eliminated before
fusing with other measurements. However, it is not an easy
task to predict if a particular sensor would provide spurious
measurement or even to identify if measurement from a
particular sensor was inaccurate.
There are several techniques available in the literature for
sensor validation and identification of inconsistent data.
Many of them are based on specific failure models which
lack completeness since all failures cannot be necessarily
modeled. However, in order to detect inconsistency, either
there should be redundancy in data or some a priori
information. For example, a few researchers have used
Nadaraya-Watson Estimator [10] and a priori observation to
validate sensor measurement. However, a priori information
may not be available in all situations. Some researchers have
used model based Kalman filter approach [11], others have
used covariance [12-13], probability [14-15], fuzzy logic
[16], and neural network [17] based approaches. Some of
these methods are explicitly model-based, whereas others need
tuning and training.
Most of the fusion strategies based on the Bayesian approach
available in the literature handle inconsistency in data rather
poorly. In practical real world scenarios, where data
generated by sensors might be incomplete, incoherent or
inconsistent, this approach might lead to erroneous results.
Consequently, the inconsistency in data needs to be dealt
with accordingly or separately when the Bayesian approach is
being used. This paper makes use of a Bayesian approach for
fusion that takes into account measurement inconsistency
and entropy to identify spurious data. Based on the entropy
of the posterior distribution of the desired quantity, the approach presented in this paper detects if the data from the sensors were spurious or inconsistent. Entropy based analysis helps in determining if the fusion of data from a particular sensor actually improves the information content of the fused variable. The paper first describes a simplified version of the Bayesian approach. Next, it presents the analytical formulation of the proposed approach, and finally makes a comparative study of the two approaches via simulation studies.

Manuscript received September 23, 2005.
Manish Kumar is an NRC Research Associate with ARO, NC, USA (phone: 919-660-5296; fax: 919-660-8963; e-mail: manish@duke.edu).
Devendra P. Garg is with the Department of Mechanical Engineering and Materials Science at Duke University, Durham, NC 27708 USA (e-mail: dpgarg@duke.edu).
Randy A. Zachery is with the Army Research Office, RTP, NC, USA (e-mail: randy.zachery@us.army.mil).
II. BAYESIAN APPROACH FOR SENSOR FUSION
Bayesian inference [18-19] is a statistical data fusion
algorithm based on Bayes’ theorem [20] of conditional or a
posteriori probability to estimate an n-dimensional state
vector 'X', after the observation or measurement denoted by
'Z' has been made. The probabilistic information contained
in Z about X is described by a probability density function
(p.d.f.) p(Z | X), known as likelihood function, or the sensor
model, which is a sensor dependent objective function based
on observation. The likelihood function relates the extent to
which the a posteriori probability is subject to change, and
is evaluated either via offline experiments or by utilizing the
available information about the problem. If the information
about the state X is made available independently before any
observation is made, then the likelihood function can be improved to provide more accurate results. Such a priori information about X can be encapsulated as the prior probability P(X = x), which is regarded as subjective because it is not based on observed data. Bayes' theorem provides the posterior conditional distribution of X = x, given Z = z, as

$$p(X = x \mid Z = z) = \frac{p(Z = z \mid X = x)\, P(X = x)}{P(Z = z)} = \frac{p(Z = z \mid X = x)\, P(X = x)}{\int p(Z = z \mid X = x)\, P(X = x)\, dx} \qquad (1)$$
Since the denominator depends only on the measurement
(the summation/integration is carried out over all possible
values of state), an intuitive estimation can be made by
maximizing this posterior distribution, i.e., by maximizing
the numerator of (1). This is called the Maximum a posteriori
(or MAP) estimate, and is given by:

$$\hat{x}_{MAP} = \arg\max_x\, p(X = x \mid Z = z) = \arg\max_x\, \left[ p(Z = z \mid X = x)\, P(X = x) \right] \qquad (2)$$
To incorporate the measurements from two sensors,
Equation (1) can be extended to appear as:
$$p(X = x \mid Z_1 = z_1, Z_2 = z_2) = \frac{p(Z_1 = z_1 \mid X = x)\, p(Z_2 = z_2 \mid X = x)\, P(X = x)}{P(Z_1 = z_1, Z_2 = z_2)} \qquad (3)$$
Sensor modeling [21-23] forms an important part of
sensor fusion and it deals with developing an understanding
of the nature of measurements provided by the sensor, the
limitations of the sensor, and probabilistic understanding of
the sensor performance in terms of the uncertainties. The
information supplied by a sensor is usually modeled as a
mean about a true value, with uncertainty due to noise
represented by a variance that depends on both the measured
quantities themselves and the operational parameters of the
sensor. A probabilistic sensor model is particularly useful
because it facilitates a determination of the statistical
characteristics of the data obtained. This probabilistic model
captures the probability distribution of measurement by the
sensor (z) when the state of the measured quantity (x) is
known. This distribution is extremely sensor specific and
can be experimentally determined. Gaussian distribution is
one of the most commonly used distributions to represent
the sensor uncertainties and is given by the following
equation:


$$p(Z = z \mid X = x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, \exp\left\{ -\frac{(x - z)^2}{2\sigma^2} \right\} \qquad (4)$$
The standard deviation $\sigma$ of the distribution is a measure
of the uncertainty of the data provided by sensors. Durrant-
Whyte [22] has used the summation of two Gaussian
distributions to model uncertainty in the sensor
measurement. Other researchers have developed some other
methods [23] to iteratively update the parameters of the
distribution.
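As a concrete illustration of such a probabilistic sensor model, the following minimal sketch (not from the paper) evaluates the Gaussian likelihood of Equation (4); the measurement, true state, and standard deviation used are assumed purely for demonstration.

```python
import numpy as np

def gaussian_likelihood(z, x, sigma):
    """Gaussian sensor model p(Z = z | X = x) of Equation (4)."""
    return np.exp(-(x - z) ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)

# Assumed example: a sensor with sigma = 2 reads z = 19.5 while the true state is x = 20.
print(gaussian_likelihood(z=19.5, x=20.0, sigma=2.0))
```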
III. BAYESIAN FUSION OF TWO GAUSSIAN DISTRIBUTIONS
If the models of the two sensors are given by the following Gaussian likelihood functions:

$$p(Z_k = z_k \mid X = x) = \frac{1}{\sqrt{2\pi}\,\sigma_k}\, \exp\left\{ -\frac{(x - z_k)^2}{2\sigma_k^2} \right\}, \quad k = 1, 2 \qquad (5)$$
where k = 1 represents the 1st sensor and k = 2 represents the 2nd sensor, then, from Bayes' theorem, the fused MAP estimate is given by:

$$\hat{x}_{MAP} = \arg\max_x\, \left[ p(Z_1 = z_1 \mid X = x)\, p(Z_2 = z_2 \mid X = x) \right]$$

or

$$\hat{x}_{MAP} = \arg\max_x\, \left[ \frac{1}{2\pi\,\sigma_1 \sigma_2}\, \exp\left\{ -\frac{(x - z_1)^2}{2\sigma_1^2} - \frac{(x - z_2)^2}{2\sigma_2^2} \right\} \right] \qquad (6)$$
which gives:
$$\hat{x}_{MAP} = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2}\, z_1 + \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\, z_2 = \frac{1}{1 + r^2}\, z_1 + \frac{1}{1 + 1/r^2}\, z_2 \qquad (7)$$

where $r = \sigma_1/\sigma_2$ is the ratio of the standard deviations. Hence,
if there is no prior information available about the quantity
to be estimated, the Bayesian approach for fusion of the two
sensor estimates results in a weighted average dictated by
the ratio of standard deviations. If two Gaussian
distributions (each given by one of the two sensors' model
pdfs) are fused, then the posterior distribution is
Gaussian with a mean given by Equation (7) and the
standard deviation given by:

 
$$\sigma' = \left[ \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2} \right]^{-1/2} \qquad (8)$$
Figure 1 shows the two distributions that get fused to give
the posterior distribution. The simple product of
distributions is also shown in the figure. It may be noted
from the figure that the standard deviation of the fused distribution is smaller than that of either of the two distributions, representing lesser uncertainty in the fused estimates.
Figure 1: Fusion of Two Gaussian Distributions (curves shown: the sensor likelihoods p(z1 | x) and p(z2 | x), their product p(z1 | x)p(z2 | x), and the fused posterior p(x | z1, z2))
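The closed forms in Equations (7) and (8) translate directly into code. The sketch below is a minimal illustration under assumed measurement values, not code from the paper; it reproduces the behaviour seen in Figure 1, namely that the fused standard deviation is always smaller than either individual one.

```python
import numpy as np

def simple_bayesian_fusion(z1, sigma1, z2, sigma2):
    """Fuse two Gaussian sensor readings per Equations (7) and (8)."""
    w1 = sigma2 ** 2 / (sigma1 ** 2 + sigma2 ** 2)      # weight on z1
    w2 = sigma1 ** 2 / (sigma1 ** 2 + sigma2 ** 2)      # weight on z2
    x_map = w1 * z1 + w2 * z2                           # Equation (7)
    sigma_fused = (1.0 / sigma1 ** 2 + 1.0 / sigma2 ** 2) ** -0.5   # Equation (8)
    return x_map, sigma_fused

# Assumed example readings: z1 = 8 with sigma1 = 2, and z2 = 11 with sigma2 = 1.
x_map, sigma_fused = simple_bayesian_fusion(8.0, 2.0, 11.0, 1.0)
print(x_map, sigma_fused)    # sigma_fused < min(sigma1, sigma2) for any z1, z2
```

Note that the fused standard deviation from Equation (8) does not depend on how far apart z1 and z2 are, which is exactly the limitation addressed in the next section.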
IV. SENSOR FUSION WITH SPURIOUS DATA
Spurious data implies that the data measured or observed
by a sensor is incorrect. Sensors often provide data which is
spurious due to sensor failure, some inherent limitation of
the sensor and/or due to some ambiguity in the environment.
The Bayesian approach described in the previous section is
inadequate in handling such spurious data. The approach
yields the same weighted mean value whether data from one
sensor is bad or not, and the posterior distribution always
has a smaller variance than either of the individual distributions being multiplied. This is also evident from Equation (8). The approach does not have a mechanism to identify if data from a certain sensor is incorrect. Fusing a spurious observation with the correct one often leads to inaccurate estimation. The following paragraphs describe the use of a Bayesian approach
for fusion of data from two sensors that takes into account
measurement inconsistency.
While building stochastic sensor models experimentally,
generally these spurious data are identified and eliminated.
Hence these experimentally developed sensor models
represent uncertainties arising only from sensor noise. If the
event s = 0 represents that the data obtained from the sensor is not spurious, then a sensor model developed in this manner actually represents the distribution $p(Z = z \mid X = x, s = 0)$.
Hence, if the sensor model is represented by a Gaussian
distribution, then for Sensor 1 outputting measurement $z_1$, the sensor model is given by:

$$p(Z_1 = z_1 \mid X = x, s_1 = 0) = \frac{1}{\sqrt{2\pi}\,\sigma_1}\, \exp\left\{ -\frac{(x - z_1)^2}{2\sigma_1^2} \right\} \qquad (9)$$

where $\sigma_1$ is the standard deviation of the distribution, and the subscript 1 in $p(Z_1 = z_1 \mid X = x, s_1 = 0)$ represents Sensor 1.
Similarly, the sensor model for Sensor 2 outputting data $z_2$ is given by:

$$p(Z_2 = z_2 \mid X = x, s_2 = 0) = \frac{1}{\sqrt{2\pi}\,\sigma_2}\, \exp\left\{ -\frac{(x - z_2)^2}{2\sigma_2^2} \right\} \qquad (10)$$
From the Bayes’ theorem, the probability that the data
measured by Sensor 1 is not spurious conditioned upon the
actual state $x$ is given by:

$$p(s_1 = 0 \mid X = x, Z_1 = z_1) = \frac{P[s_1 = 0]\; p(Z_1 = z_1 \mid X = x, s_1 = 0)}{\sum_{s_1} P[s_1]\; p(Z_1 = z_1 \mid X = x, s_1)} \qquad (11)$$

$P[s_1 = 0]$ is the sensor-specific prior probability that the
data provided by Sensor 1 is not spurious. The denominator
of the right hand side of the above equation is a summation
carried over all possible values of s which are 0 and 1.
Equation (11) can be re-written as:

$$p(s_1 = 0 \mid X = x, Z_1 = z_1) = \frac{P[s_1 = 0]\; p(Z_1 = z_1 \mid X = x, s_1 = 0)}{p(Z_1 = z_1 \mid X = x)} \qquad (12)$$
or,

$$p(Z_1 = z_1 \mid X = x) = \frac{P[s_1 = 0]\; p(Z_1 = z_1 \mid X = x, s_1 = 0)}{p(s_1 = 0 \mid X = x, Z_1 = z_1)} \qquad (13)$$
Similarly, for Sensor 2:

$$p(Z_2 = z_2 \mid X = x) = \frac{P[s_2 = 0]\; p(Z_2 = z_2 \mid X = x, s_2 = 0)}{p(s_2 = 0 \mid X = x, Z_2 = z_2)} \qquad (14)$$
Then, from Equation (3),

$$p(X = x \mid Z_1 = z_1, Z_2 = z_2) = \frac{P[s_1 = 0]\; p(Z_1 = z_1 \mid X = x, s_1 = 0)}{p(s_1 = 0 \mid X = x, Z_1 = z_1)} \times \frac{P[s_2 = 0]\; p(Z_2 = z_2 \mid X = x, s_2 = 0)}{p(s_2 = 0 \mid X = x, Z_2 = z_2)} \times \frac{P(X = x)}{P(Z_1 = z_1, Z_2 = z_2)} \qquad (15)$$
or simply,

$$p(X = x \mid Z_1 = z_1, Z_2 = z_2) \propto \frac{p(Z_1 = z_1 \mid X = x, s_1 = 0)}{p(s_1 = 0 \mid X = x, Z_1 = z_1)} \times \frac{p(Z_2 = z_2 \mid X = x, s_2 = 0)}{p(s_2 = 0 \mid X = x, Z_2 = z_2)} \qquad (16)$$
The probability that the measurement from Sensor ‘i’ is
not spurious given the true state ‘x’ and measurement ‘zi’ is
assumed to be represented by the following equation:
$$p(s_i = 0 \mid X = x, Z_i = z_i) = \exp\left\{ -\frac{(x - z_i)^2}{2 a_i} \right\} \qquad (17)$$
An advantage of choosing the above function for
representing the probability is the fact that the probability is
1 when measurement ‘zi’ is equal to the true state ‘x’, and
decreases when the measured value moves away from the
true state. The rate at which the probability decreases when
the measured value moves away from the true estimate
depends upon the parameter ‘ai’. In this paper, this
parameter is assumed to depend upon the distance between the measurements of the two sensors and the standard deviation of the Gaussian distribution representing the corresponding sensor's model given by Equations (9-10). The spread of the function given by Equation (17) should be larger if the data points from the sensors are close to each other. Logically, it follows that the parameter 'ai' should be inversely proportional to the distance between the sensor readings. Hence:
$$a_i = \frac{b_i^2}{2\,(z_1 - z_2)^2} \qquad (18)$$
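To make the behaviour of Equations (17) and (18) concrete, the short sketch below (an illustration, not the authors' code) evaluates the non-spurious probability for an agreeing and a disagreeing pair of readings; the numerical values are assumed, and b_i is set in anticipation of the choice made later in Equation (21).

```python
import numpy as np

def nonspurious_probability(x, z_i, z1, z2, b_i):
    """p(s_i = 0 | X = x, Z_i = z_i) from Equations (17) and (18)."""
    a_i = b_i ** 2 / (2.0 * (z1 - z2) ** 2)           # Equation (18)
    return np.exp(-(x - z_i) ** 2 / (2.0 * a_i))      # Equation (17)

# Assumed example: sigma_i = 3 and a maximum expected difference m = 15,
# giving b_i^2 = 2 * sigma_i^2 * m^2 (the choice made later in Equation (21)).
b_i = np.sqrt(2.0) * 3.0 * 15.0

# Readings in agreement (z1 near z2): the weight stays close to 1 near the reading.
print(nonspurious_probability(x=20.0, z_i=21.0, z1=21.0, z2=19.5, b_i=b_i))
# Readings in disagreement: the weight decays much faster away from the reading.
print(nonspurious_probability(x=20.0, z_i=30.0, z1=30.0, z2=19.5, b_i=b_i))
```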
Substituting Equations (17-18), along with the Gaussian sensor models given by Equations (9-10), in Expression (16) yields:






 
 
$$p(X = x \mid Z_1 = z_1, Z_2 = z_2) \propto \frac{1}{\sqrt{2\pi}\,\sigma_1}\, \exp\left\{ -\frac{(x - z_1)^2}{2\sigma_1^2} + \frac{(x - z_1)^2 (z_1 - z_2)^2}{b_1^2} \right\} \times \frac{1}{\sqrt{2\pi}\,\sigma_2}\, \exp\left\{ -\frac{(x - z_2)^2}{2\sigma_2^2} + \frac{(x - z_2)^2 (z_1 - z_2)^2}{b_2^2} \right\}$$

or

$$p(X = x \mid Z_1 = z_1, Z_2 = z_2) \propto \frac{1}{\sqrt{2\pi}\,\sigma_1}\, \exp\left\{ -(x - z_1)^2 \left[ \frac{1}{2\sigma_1^2} - \frac{(z_1 - z_2)^2}{b_1^2} \right] \right\} \times \frac{1}{\sqrt{2\pi}\,\sigma_2}\, \exp\left\{ -(x - z_2)^2 \left[ \frac{1}{2\sigma_2^2} - \frac{(z_1 - z_2)^2}{b_2^2} \right] \right\} \qquad (19)$$
 
The value of the parameter 'bi' is chosen to satisfy the following inequality:

$$b_i^2 \geq 2\,\sigma_i^2\, (z_1 - z_2)^2 \qquad (20)$$

Satisfaction of this inequality ensures that the posterior distribution in Expression (19) remains Gaussian and hence has a single peak. The value of the parameter should be chosen based on the maximum expected difference (represented by 'm') between the sensor readings so that inequality (20) is always satisfied. Hence,

$$b_i^2 = 2\,\sigma_i^2\, m^2 \qquad (21)$$

Substituting Equation (21) in Expression (19) gives,

$$p(X = x \mid Z_1 = z_1, Z_2 = z_2) \propto \frac{1}{\sqrt{2\pi}\,\sigma_1}\, \exp\left\{ -\frac{(x - z_1)^2}{2\sigma_1^2} \cdot \frac{m^2 - (z_1 - z_2)^2}{m^2} \right\} \times \frac{1}{\sqrt{2\pi}\,\sigma_2}\, \exp\left\{ -\frac{(x - z_2)^2}{2\sigma_2^2} \cdot \frac{m^2 - (z_1 - z_2)^2}{m^2} \right\} \qquad (22)$$

Hence, the whole process has the effect of increasing the value of the variance of each individual distribution by a factor of $\frac{m^2}{m^2 - (z_1 - z_2)^2}$. A larger difference in the sensor measurements implies that the variance increases by a bigger factor, which is logically understandable. The MAP estimate of the state x remains unchanged as given by Equation (7), but the variance of the fused (posterior) distribution changes. Hence, depending on the squared difference in measurements from the two sensors, the variance of the posterior distribution may increase or decrease as compared to the variance of the individual Gaussian distributions representing the sensor models. The strategy is, therefore, capable of determining if fusion of the two measurements would lead to an increase or decrease of the variance of the posterior distribution. In information theoretic terms, the strategy is capable of determining if the fusion leads to an increase in information content or not. This can be easily determined by calculating the entropy of the state variable 'x' with distribution $p(X = x \mid Z_1 = z_1, Z_2 = z_2)$ given by the equation:

$$H(X) = -\int p(X = x \mid Z_1 = z_1, Z_2 = z_2)\, \log p(X = x \mid Z_1 = z_1, Z_2 = z_2)\, dx \qquad (23)$$

Entropy of a variable represents the uncertainty in that variable. A larger value of entropy implies more uncertainty and hence less information content. The fusion of two measurements should always lead to a decrease in entropy, and fusion should always be done in order to reduce entropy. For Gaussian distributions, a larger variance implies more entropy. It may be noted that the prior probability $P[s_i = 0]$ is a constant value and simply acts as a constant weighting factor in Equation (15). This value does not have any influence on the posterior distribution and the MAP estimate of the state.
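Pulling Equations (7), (21), (22), and (23) together, one plausible per-measurement implementation of the proposed fusion is sketched below; it is an interpretation, not the authors' code. The parameter m and the test readings are assumptions, and the inconsistency flag encodes one possible reading of the decision rule: the pair is declared inconsistent when the posterior is less informative (larger variance, hence larger entropy) than the more informative of the two sensor models.

```python
import numpy as np

def proposed_bayesian_fusion(z1, sigma1, z2, sigma2, m):
    """Fusion with the consistency term of Equations (21)-(22).

    Returns the MAP estimate, the posterior standard deviation, the Gaussian
    entropy of the posterior (Equation (23)), and an inconsistency flag.
    """
    d2 = (z1 - z2) ** 2
    if d2 >= m ** 2:
        raise ValueError("measurement difference exceeds the assumed maximum m")
    inflation = m ** 2 / (m ** 2 - d2)                     # variance inflation factor
    s1_sq = sigma1 ** 2 * inflation                        # inflated variances, Equation (22)
    s2_sq = sigma2 ** 2 * inflation
    x_map = (s2_sq * z1 + s1_sq * z2) / (s1_sq + s2_sq)    # same weights as Equation (7)
    var_post = 1.0 / (1.0 / s1_sq + 1.0 / s2_sq)           # posterior variance
    entropy = 0.5 * np.log(2.0 * np.pi * np.e * var_post)  # Gaussian entropy, Equation (23)
    inconsistent = var_post > min(sigma1 ** 2, sigma2 ** 2)
    return x_map, np.sqrt(var_post), entropy, inconsistent

# Assumed examples using the simulation parameters sigma1 = 3, sigma2 = 2
# and an assumed maximum expected difference m = 25.
print(proposed_bayesian_fusion(21.0, 3.0, 19.5, 2.0, m=25.0))   # sensors in agreement
print(proposed_bayesian_fusion(35.0, 3.0, 19.5, 2.0, m=25.0))   # sensors in disagreement
```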
V. SIMULATION RESULTS
A simulation study was carried out to verify the
effectiveness of the proposed strategy in identifying
spurious data. The following parameters were assumed in
the simulation:
Sensor 1: $P[s_1 = 0] = 0.8$ and $\sigma_1 = 3$

Sensor 2: $P[s_2 = 0] = 0.9$ and $\sigma_2 = 2$

True value of state: $x = 20$
Simulation data was generated so that Sensor 1 provided
80% of the time normally distributed random data with a
mean value of 20 and a variance value of 9. It provided
incorrect data 20% of the time which was uniformly
distributed random data outside the Gaussian distribution.
Similarly, Sensor 2 provided 90% of the time normally
distributed random data with a mean value of 20 and a
variance value of 4, and 10% of the time it provided
incorrect data. It may be noted here that the values for $P[s_k = 0]$ have been assumed simply for the purpose of generating simulated data. These are not used in the fusion algorithm. Since these values are constants, they do not have any effect on the posterior distribution or the MAP estimate.
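A possible reconstruction of this data-generation procedure is sketched below; the paper does not specify the exact range of the uniformly distributed spurious readings, so the interval and the rejection of values within ±3σ of the true state are assumptions made only to mimic data lying outside the Gaussian distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
X_TRUE = 20.0

def simulate_reading(sigma, p_good, low=0.0, high=40.0):
    """One sensor reading: Gaussian about the true state with probability p_good,
    otherwise a uniform spurious value kept outside the +/-3 sigma band
    (the spurious range [low, high] is an assumption)."""
    if rng.random() < p_good:
        return rng.normal(X_TRUE, sigma)
    while True:
        z = rng.uniform(low, high)
        if abs(z - X_TRUE) > 3.0 * sigma:
            return z

# Sensor 1: P[s1 = 0] = 0.8, sigma1 = 3;  Sensor 2: P[s2 = 0] = 0.9, sigma2 = 2.
z1_samples = [simulate_reading(3.0, 0.8) for _ in range(100)]
z2_samples = [simulate_reading(2.0, 0.9) for _ in range(100)]
```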
Figure 2 shows the situation in which the data provided
by the two sensors were in approximate agreement. It can be
seen that the fused posterior distribution obtained from the proposed strategy has a lower value of variance than that of each of the distributions being multiplied, indicating that fusion leads to a decrease in posterior uncertainty and entropy.
Figure 2: Fusion of Two Sensors in Agreement (two panels showing the Sensor 1 and Sensor 2 distributions together with the fused posteriors from the simple Bayesian and the proposed Bayesian approaches)
Figure 3 shows another situation in which the data
provided by two sensors were in disagreement. In this case,
fused posterior distribution obtained from the proposed
strategy has a larger variance as compared to both of the
distributions being multiplied. This indicates that the fusion
of data actually leads to an increase in entropy and
uncertainty. Further, it may be noted that the posterior distribution resulting from the Bayesian approach without this diagnostic feature has the same constant variance (as in the case when the two sensors were in agreement), with its peak at a point representing the weighted mean.
A set of one hundred data points was generated in the manner described above, and fusion was carried out via both the usual Bayesian technique and the proposed Bayesian technique. If the proposed technique detected inconsistency in the data (the data from one of the sensors being spurious), the measurement from the sensor with the lower probability of providing spurious data (Sensor 2 in this case) was assumed to be the fused value. Figure 4 shows the data points with asterisks (*) as fused values obtained via the proposed approach. The dots (·) are the values obtained from the simple Bayesian technique. The dashed horizontal (--) line represents the true value of the state desired to be measured. The figure shows that the asterisks, on average, lie closer to the dashed line than the simple dots. The mean value of the sum of squared errors between the fused value and the true value over all 100 data points was found to be 22.4030 for the simple Bayesian approach and 14.0744 for the proposed Bayesian technique. As can be seen in the figure, there are a few outliers which represent cases when Sensor 2 (the sensor with the lower probability of giving spurious data) provided spurious data. The approach is able to identify the inconsistency in data very effectively, but in the presence of only two sensor data points, it is not possible to ascertain which one of the two measurements is spurious. Addition of another sensor can greatly help in identifying the sensor providing spurious data.
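For completeness, a self-contained sketch of the comparison experiment described above is given below. It re-implements the two fusion rules in compact form, and the fallback (taking the Sensor 2 reading when an inconsistency is detected) follows the choice stated in the text; the spurious-data range, the value of m, and the random seed are assumptions, so the printed error figures will not reproduce the exact numbers reported above.

```python
import numpy as np

rng = np.random.default_rng(1)
X_TRUE, SIGMA1, SIGMA2, M = 20.0, 3.0, 2.0, 25.0   # M (max expected difference) is assumed

def reading(sigma, p_good):
    # Good reading with probability p_good, otherwise a spurious value
    # drawn uniformly outside the +/-3 sigma band (assumed range).
    if rng.random() < p_good:
        return rng.normal(X_TRUE, sigma)
    while True:
        z = rng.uniform(0.0, 40.0)
        if abs(z - X_TRUE) > 3.0 * sigma:
            return z

def fuse(z1, z2):
    # Simple Bayesian fusion (Equation (7)) and the proposed variant with the
    # variance inflation of Equation (22) plus a consistency check.
    w1 = SIGMA2 ** 2 / (SIGMA1 ** 2 + SIGMA2 ** 2)
    x_simple = w1 * z1 + (1.0 - w1) * z2
    d2 = (z1 - z2) ** 2
    inflation = M ** 2 / (M ** 2 - d2) if d2 < M ** 2 else np.inf
    var_post = inflation * (SIGMA1 ** 2 * SIGMA2 ** 2) / (SIGMA1 ** 2 + SIGMA2 ** 2)
    inconsistent = var_post > min(SIGMA1 ** 2, SIGMA2 ** 2)
    x_proposed = z2 if inconsistent else x_simple   # fall back on the more reliable sensor
    return x_simple, x_proposed

errors_simple, errors_proposed = [], []
for _ in range(100):
    z1, z2 = reading(SIGMA1, 0.8), reading(SIGMA2, 0.9)
    x_s, x_p = fuse(z1, z2)
    errors_simple.append((x_s - X_TRUE) ** 2)
    errors_proposed.append((x_p - X_TRUE) ** 2)

print("mean squared error, simple Bayesian:  ", np.mean(errors_simple))
print("mean squared error, proposed Bayesian:", np.mean(errors_proposed))
```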
Figure 3: Fusion of Two Sensors in Disagreement (showing the Sensor 1 and Sensor 2 distributions together with the fused posteriors from the simple Bayesian and the proposed Bayesian approaches)
Figure 4: Simulation Study Involving Fusion of 100 Random Data Points from Two Sensors (fused state estimates from the proposed and simple Bayesian approaches plotted against the true value of the state)
Figure 5 shows a sample of 10 data points (taken from the
fusion of 100 data points above) representing both the data measured by the two sensors and the fused values obtained from the simple Bayesian as well as the proposed method. It can be seen in the figure that for almost all of the points both the simple Bayesian and the proposed method yielded the same value, except for the 8th data point, where an inconsistency occurred. The proposed method was able to identify the inconsistency and chose the 2nd sensor reading as the correct one.
Figure 5: Fusion of 10 Sample Data Points (showing the true value of the state, the Sensor 1 and Sensor 2 data, and the fused values from the proposed and simple Bayesian approaches)

VI. CONCLUSIONS

One of the problems in sensor fusion is that the sensors may provide spurious data that cannot be easily modeled and predicted. These spurious or incorrect data need to be identified before fusing with the correct ones. This paper proposes an innovative technique based on the Bayesian approach which can implicitly identify inconsistency in sensor data and make a decision about whether to fuse the data or not. The method yields a posterior distribution whose variance/entropy depends upon the distance between the data points. If the posterior distribution is less informative than the individual distributions being multiplied, then it is assumed that the data obtained from one of the two sensors is spurious. Simulation studies verified that the proposed strategy was highly effective in identifying inconsistency in sensor data. Future research includes experimental validation of the proposed method and extending the scope of the proposed method to incorporate data from three or more sensors.

ACKNOWLEDGMENT

This research was performed while the first author, Dr. Manish Kumar, held a National Research Council Research Associateship Award at Duke University via the Army Research Office. The financial support provided by the National Science Foundation under award number 04-27597 is gratefully acknowledged.

REFERENCES

[1] Shafer, G., A Mathematical Theory of Evidence, Princeton, NJ: Princeton University Press, 1976.
[2] Dempster, A. P., "A Generalization of Bayesian Inference", Journal of the Royal Statistical Society, Series B, Vol. 30, No. 2, 1968, pp. 205-247.
[3] Yager, R. R. and Zadeh, L. A. (Eds.), An Introduction to Fuzzy Logic Applications in Intelligent Systems, Kluwer Academic Publishers, 1991.
[4] Klir, G. J. and Yuan, B., Fuzzy Sets and Fuzzy Logic: Theory and Applications, Upper Saddle River, NJ: Prentice-Hall, 1995.
[5] McKendall, R. and Mintz, M., "Data Fusion Techniques Using Robust Statistics", in Data Fusion in Robotics and Machine Intelligence, Abidi, M. A. and Gonzalez, R. A. (Eds.), Academic Press, 1992.
[6] Press, S. J., Bayesian Statistics: Principles, Models and Applications, John Wiley and Sons, 1989.
[7] Maybeck, P. S., Stochastic Models, Estimation and Control, Volume 1, Academic Press, Inc., 1979.
[8] Kalman, R. E., "A New Approach to Linear Filtering and Prediction Problems", Transactions of the ASME - Journal of Basic Engineering, Vol. 82, Ser. D, 1960, pp. 35-45.
[9] Sasiadek, J. Z., "Sensor Fusion", Annual Reviews in Control, Vol. 26, 2002, pp. 203-228.
[10] Wellington, S. J., Atkinson, J. K., and Sion, R. P., "Sensor Validation and Fusion Using the Nadaraya-Watson Statistical Estimator", Proceedings of the IEEE International Conference on Information Fusion, Vol. 1, July 2002, pp. 321-326.
[11] Del Gobbo, D., Napolitano, M., Famouri, P., and Innocenti, M., "Experimental Application of Extended Kalman Filtering for Sensor Validation", IEEE Transactions on Control Systems Technology, Vol. 9, No. 2, March 2001, pp. 376-380.
[12] Nicholson, D., "An Automatic Method for Eliminating Spurious Data from Sensor Networks", Proceedings of the IEE Conference on Target Tracking: Algorithms and Applications, 23-24 March 2004, pp. 57-61.
[13] Benaskeur, A. R., "Consistent Fusion of Correlated Data Sources", Proceedings of the IEEE Annual Conference of the Industrial Electronics Society, Vol. 4, 5-8 Nov. 2002, pp. 2652-2656.
[14] Soika, M., "A Sensor Failure Detection Framework for Autonomous Mobile Robots", Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 3, 7-11 Sept. 1997, pp. 1735-1740.
[15] Ibarguengoytia, P. H., Sucar, L. E., and Vadera, S., "Real Time Intelligent Sensor Validation", IEEE Transactions on Power Systems, Vol. 16, No. 4, Nov. 2001, pp. 770-775.
[16] Frolik, J., Abdelrahman, M., and Kandasamy, P., "A Confidence-Based Approach to the Self-validation, Fusion and Reconstruction of Quasi-Redundant Sensor Data", IEEE Transactions on Instrumentation and Measurement, Vol. 50, No. 6, Dec. 2001, pp. 1761-1769.
[17] Rizzo, A. and Xibilia, M. G., "An Innovative Intelligent System for Sensor Validation in Tokamak Machines", IEEE Transactions on Control Systems Technology, Vol. 10, No. 3, May 2002, pp. 421-431.
[18] Luo, R. and Su, K., "A Review of High-Level Multisensor Fusion: Approaches and Applications", Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, 1999, pp. 25-31.
[19] Clark, J. J. and Yuille, A. L., Data Fusion for Sensory Information Processing Systems, Kluwer Academic Publishers, 1990.
[20] Bayes, T., "An Essay Towards Solving a Problem in the Doctrine of Chances", Philosophical Transactions, Vol. 53, 1763, pp. 370-418.
[21] Manyika, J. and Durrant-Whyte, H., Data Fusion and Sensor Management: A Decentralized Information-Theoretic Approach, Ellis Horwood Limited, 1994.
[22] Kumar, M. and Garg, D., "Three-Dimensional Occupancy Grid with the Use of Vision and Proximity Sensors in a Robotic Workcell", Paper Number IMECE2004-59593, Proceedings of the ASME International Mechanical Engineering Congress and Exposition, Anaheim, CA, November 14-19, 2004, 8p.
[23] Kumar, M. and Garg, D., "Intelligent Multi Sensor Fusion Techniques in Flexible Manufacturing Workcells", Proceedings of the American Control Conference, Boston, MA, 2004, pp. 5375-5380.
[24] Durrant-Whyte, H. F., Integration, Coordination and Control of Multi-Sensor Robot Systems, Kluwer Academic Publishers, Norwell, MA, 1988.
[25] Porrill, J., "Optimal Combination and Constraints for Geometrical Sensor Data", The International Journal of Robotics Research, 1988, pp. 66-77.
Procedures of statistical inference are described which generalize Bayesian inference in specific ways. Probability is used in such a way that in general only bounds may be placed on the probabilities of given events, and probability systems of this kind are suggested both for sample information and for prior information. These systems are then combined using a specified rule. Illustrations are given for inferences about trinomial probabilities, and for inferences about a monotone sequence of binomial pi. Finally, some comments are made on the general class of models which produce upper and lower probabilities, and on the specific models which underlie the suggested inference procedures.