Intelligent RFID Indoor Localization System Using a
Gaussian Filtering Based Extreme Learning Machine
Changzhi Wang, Zhicai Shi * and Fei Wu *
School of Electrical and Electronic Engineering, Shanghai University of Engineering Science,
Shanghai 201620, China; wcz1990ah@126.com
* Correspondence: shizhicai@sues.edu.cn (Z.S.); wufei@sues.edu.cn (F.W.);
Tel.: +86-021-6779-1131 (Z.S.); +86-021-6779-1035 (F.W.)
Academic Editor: Ka Lok Man
Received: 12 November 2016; Accepted: 20 February 2017; Published: 26 February 2017
Abstract:
Nowadays, the increasing demand for location-based services (LBS) has spurred the rapid development of indoor positioning systems (IPS). However, the performance of an IPS is affected by fluctuations of the measured signal. In this study, a Gaussian filtering algorithm combined with an extreme learning machine (ELM) is proposed to address the problem of inaccurate indoor positioning when significant Received Signal Strength Indication (RSSI) fluctuations occur during the measurement process. The Gaussian filtering method is analyzed and compared; it can effectively filter out the signal fluctuations caused by environmental effects in an RFID-based positioning system. Meanwhile, the fast learning ability of the proposed ELM algorithm reduces the time consumption of the offline and online phases and establishes a regression model between the signal strengths of the tags and their corresponding positions. The proposed positioning system is tested in a real experimental environment. The system test results demonstrate that the positioning algorithm not only provides higher positioning accuracy, but also achieves higher computational efficiency than previous algorithms.
Keywords: indoor positioning system; RFID; extreme learning machine; Gaussian filtering
1. Introduction
Indoor positioning systems (IPS) are an emerging technology, driven by the increasing popularity of and demand for indoor location-based services [1,2]. One of the most famous location-aware services is the Global Positioning System (GPS), but it requires a direct line-of-sight between the receiver and the satellite in an unshielded environment. GPS is therefore used for outdoor localization; its performance is very poor in indoor environments [3]. Consequently, different types of indoor positioning systems have been developed for personal and commercial needs [4,5]. Among all of the indoor positioning methods, Benavente-Peces et al. [6] found that Radio Frequency Identification (RFID) is the best choice for indoor positioning with high accuracy and low cost.
RFID is a rapidly developing technology that uses radio frequency (RF) signals to automatically identify objects. It has been widely applied in various fields for tool tracking, process management, access control and supply chains, such as access cards and electronic wallets. A typical RFID network system, illustrated in Figure 1, consists of three different entities: RFID readers, tags and servers. Furthermore, RFID systems have also been applied in the localization field for the real-time tracking and positioning of indoor targets. Shiraishi et al. [7] proposed an indoor position estimation system based on UHF RFID. Wang et al. [8] successfully designed an indoor personnel tracking and positioning system based on RFID technology combined with images. Similarly, Montaser et al. [9] presented a low-cost indoor positioning and material tracking methodology for construction projects using passive UHF RFID technology. Goodrum et al. [10] also explored applications of UHF RFID technology for tool tracking on construction job sites. Therefore, UHF RFID is used for indoor positioning in this paper.
Figure 1. Architecture of a typical RFID system.
Currently, an extensive body of IPS research has focused on machine learning approaches such as the Extreme Learning Machine (ELM) [11], Artificial Neural Networks (ANN) and Support Vector Machines (SVM) [12], in order to overcome the shortcomings faced by traditional positioning methods. Wu et al. [13] applied an ANN to overcome the limitations of the empirical positioning formula used in previous research. The ANN can learn the geographic features of the environment and thereby adapt to the real world, which avoids the impact of the multipath phenomenon and allows flexible application to any environment. However, the weights of an ANN are not easily determined. For that reason, Kuo et al. [14] proposed a feature selection-based Back-Propagation (BP) neural network that uses an Artificial Immune System (AIS) (BP-AIS) to determine the connecting weights of the ANN, and forecasts the position of the picking staff for warehouse management. Similarly, Ceravolo et al. [15] applied the genetic algorithm (GA) to determine the parameters of a BP network (GA-BP) and applied it to predict and control the environmental temperature.
In addition, Zou et al. [16] proposed the ELM to address the indoor positioning problem and, through experimental evaluation, reported that an ELM-based positioning system can provide higher positioning accuracy and robustness than traditional methods. Jiang et al. [17] presented a fusion positioning framework with a Particle Filter using Wi-Fi signals and motion sensors, which uses the ELM regression algorithm to predict the location of the target. The system consists of three main modules: a sensor data based position model (ELM regression), the fusion Particle Filter model and the Wi-Fi based position model. The results show that the proposed method achieves good performance, for example, achieving better accuracy than the traditional fingerprint method. Fu et al. [18] proposed the kernel ELM to locate the target position and compared it with machine learning algorithms such as SVM [19,20] and BP, and the result reveals that the kernel ELM has good prediction performance and a fast learning speed.
It is worth mentioning that the above research has made good progress. Unfortunately, most of these studies have neglected the filtering of the experimental data, even though the RSSI is affected by many factors, such as indoor temperature, humidity and the multi-path effect, which lead to large fluctuations of the RSSI. Therefore, this paper proposes an indoor positioning system based on a Gaussian filter combined with ELM and verifies its performance via comparison with prevailing methods.
The remainder of this paper is organized as follows. Section 2 presents the related work of this research. The proposed method is explained in Section 3. Section 4 describes the experimental design and comparative analysis. Section 5 discusses the experimental results. Finally, Section 6 presents the conclusions and future research directions.
2. Related Works
2.1. IPS Technologies
At present, there are four frequently used localization technologies based on the measurement of distances or angles between reference points: Time of Arrival (TOA), Time Difference of Arrival (TDOA), Angle of Arrival (AOA) [21] and Received Signal Strength Indication (RSSI) [22], as shown in Figure 2. Hatami et al. [23] describe in detail the performance differences of each location system. TOA- and TDOA-based systems are more suitable for outdoor or large-scale open indoor environments, whereas RSSI-based systems are the most suitable for tight indoor environments. Furthermore, Reference [24] found that a positioning system based on RSSI measurements has two main advantages: simple implementation and cost effectiveness. Taking the technological problems and the considerable costs into account, this study adopts the basic RSSI measurement method for localization.
Figure 2. Four measurement-based positioning techniques: (a) TOA-based estimation; (b) TDOA-based estimation; (c) AOA-based estimation; (d) RSSI-based estimation.
2.2. ELM Algorithm
ELM [25] is a type of machine learning algorithm based on a single-hidden-layer feedforward neural network (SLFN) architecture developed by Huang et al. It has been demonstrated that ELM can provide good generalization performance at an extremely fast learning speed [26,27]. Given $N$ arbitrary distinct training samples $\mu = \{(X_i, t_i) \mid X_i \in \mathbb{R}^{1\times n}, t_i \in \mathbb{R}^{1\times m}, i = 1, 2, \cdots, N\}$, where $X_i$ is an $n \times 1$ input vector $X_i = [x_{i1}, x_{i2}, \cdots, x_{in}]^{T}$ and $t_i$ is an $m \times 1$ target vector $t_i = [t_{i1}, t_{i2}, \cdots, t_{im}]^{T}$, the goal of regression is to find the relationship between $X_i$ and $t_i$. Since the only parameters to be optimized are the output weights, the training of ELM is equivalent to solving a least-squares problem [28]. Figure 3 shows the typical network structure of the SLFN.
Figure 3. The SLFN network structure with $L$ hidden neurons.
In the training process, the first stage is that $h(x_i)$ maps the data from the $n$-dimensional input space to the $L$-dimensional hidden-layer feature space (ELM feature space):

$$h : x_i \rightarrow h(x_i), \qquad (1)$$

where $h(x_i) = [h_1(x_i), h_2(x_i), \cdots, h_L(x_i)]$ is the row-vector output of the hidden layer, $h(x_i) \in \mathbb{R}^{1\times L}$.

The mathematical model of the SLFNs can be summarized as Equation (2):

$$H\beta = T, \qquad (2)$$

where $H \in \mathbb{R}^{N\times L}$ is the hidden-layer output matrix, $\beta \in \mathbb{R}^{L\times m}$ is the output weight matrix with rows $\beta_i = [\beta_{i1}, \beta_{i2}, \cdots, \beta_{im}]^{T}$, and $T \in \mathbb{R}^{N\times m}$ is the target matrix:

$$H = \begin{bmatrix} h(x_1) \\ h(x_2) \\ \vdots \\ h(x_N) \end{bmatrix}_{N\times L}, \quad \beta = \begin{bmatrix} \beta_1^{T} \\ \beta_2^{T} \\ \vdots \\ \beta_L^{T} \end{bmatrix}_{L\times m}, \quad T = \begin{bmatrix} t_1^{T} \\ t_2^{T} \\ \vdots \\ t_N^{T} \end{bmatrix}_{N\times m}. \qquad (3)$$

Each output of ELM is given by Equation (4):

$$t_i = h(x_i)\beta, \quad i = 1, 2, \cdots \qquad (4)$$

ELM theory aims to minimize the training error while also keeping the norm of the output weights small [29]:

$$\begin{aligned}
\text{Min:} \quad & L_{D_{ELM}} = \frac{1}{2}\|\beta\|^{2} + \frac{W}{2}\sum_{i=1}^{N}\|\eta_i\|^{2}, \\
\text{S.t.:} \quad & h(x_i)\beta = t_i^{T} - \eta_i^{T}, \quad i = 1, 2, \cdots, N,
\end{aligned} \qquad (5)$$

where $W$ is the penalty coefficient on the training errors, and $\eta_i \in \mathbb{R}^{m}$ is the training error vector of the $m$ output nodes with respect to the $i$th training pattern, $\eta_i = [\eta_{i,1}, \eta_{i,2}, \cdots, \eta_{i,m}]^{T}$.
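As a brief aside (not stated explicitly in the text, but a standard property of this regularized formulation), substituting the constraint into Equation (5) and setting the gradient with respect to $\beta$ to zero gives the closed-form solution

$$\beta = \left(\frac{I}{W} + H^{T}H\right)^{-1} H^{T}T,$$

which, when $H$ has full column rank, approaches the Moore–Penrose solution $\beta = H^{+}T$ used in Section 3.2 as $W \rightarrow \infty$.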
3. The Proposed ELM Based IPS
3.1. Data Gaussian Filtering
Data Gaussian filtering is used to reduce the fluctuation of the RSSI signal, which is obtained by repeated measurements at a fixed position. Assume that $D = \{(F_i, p_i)\}_{i=1}^{N}$ $(F_i \in \mathbb{R}^{m}, p_i \in \mathbb{R}^{n})$, where $F_i = \{RSSI_{i,1}, RSSI_{i,2}, \cdots, RSSI_{i,m}\}$ represents the RSSI measurement vector of a tag, containing the $m$ RSSI values received by the $m$ readers, the observed position of the tag is $p_i = (Coordinate_{i,x}, Coordinate_{i,y})$, and $N$ is the number of reference tags.
Define $RF_k = \{RSSI^{1}_{i,m}, \cdots, RSSI^{c}_{i,m}, \cdots, RSSI^{t}_{i,m}\}$ $(k = p_i)$ as the RSSI vector of the $i$th tag measured by reader $m$ at a fixed point $k$, where $t$ is the number of measurements, and $RF_k' = \{RSSI^{1}_{i,m}, \cdots, RSSI^{e}_{i,m}, \cdots, RSSI^{d}_{i,m}, \cdots, RSSI^{l}_{i,m}\}$ $(RF_k' \subseteq RF_k,\ e \leq l \leq t)$ as the RSSI vector after processing by the $2\sigma_{RSSI}$ principle of the Gaussian distribution. Figure 4a shows the repeated measurement results from a specified tag at a fixed point $k$. From Figure 4a, it can be seen that the changes in the RSSI measurement can be approximated by a parametric distribution, and the Gaussian distribution fits the RSSI distribution from a fixed point (measured in dBm), as shown in Figure 4b.
Figure 4. The RSSI data and Gaussian fitting: (a) the RSSI measurement values from a specified tag at a fixed point; (b) the PDF and Gaussian fitting for the RSSI data.
Figure 4 implies that $RF_k$ can be modeled by a Gaussian distribution:

$$f\left(RSSI^{c}_{i,m} \mid RF_k\right) = \frac{1}{\sqrt{2\pi}\,\sigma_{RSSI}}\, e^{-\frac{\left(RSSI^{c}_{i,m} - u_{RSSI}\right)^{2}}{2\sigma_{RSSI}^{2}}}, \qquad (6)$$

where $u_{RSSI}$ is the expected value of the RSSI and $\sigma_{RSSI}^{2}$ is the variance of the RSSI, given by

$$u_{RSSI} = \frac{1}{t}\sum_{c=1}^{t} RSSI^{c}_{i,m}, \qquad \sigma_{RSSI}^{2} = \frac{1}{t-1}\sum_{c=1}^{t}\left(RSSI^{c}_{i,m} - u_{RSSI}\right)^{2}. \qquad (7)$$
According to the $2\sigma_{RSSI}$ principle of the Gaussian distribution, the small-probability event $\left|RSSI^{c}_{i,m} - u_{RSSI}\right| > 2\sigma_{RSSI}$ is excluded, and the data satisfying $\left|RSSI^{c}_{i,m} - u_{RSSI}\right| < 2\sigma_{RSSI}$ are chosen as effective experimental data.
Since the signal strength values received from different tag locations are independent, $RF_k'$ still follows a Gaussian density distribution, given by:

$$f\left(RSSI^{d}_{i,m} \mid RF_k'\right) = \frac{1}{\sqrt{2\pi}\,\sigma'_{RSSI}}\, e^{-\frac{\left(RSSI^{d}_{i,m} - u'_{RSSI}\right)^{2}}{2{\sigma'_{RSSI}}^{2}}}, \qquad (8)$$

where $u'_{RSSI} = \frac{1}{l}\sum_{d=1}^{l} RSSI^{d}_{i,m}$, ${\sigma'_{RSSI}}^{2} = \frac{1}{l-1}\sum_{d=1}^{l}\left(RSSI^{d}_{i,m} - u'_{RSSI}\right)^{2}$, and $l$ is the number of remaining measured values after processing by the $2\sigma_{RSSI}$ principle of the Gaussian distribution.
When the function value $f\left(RSSI^{d}_{i,m} \mid RF_k'\right) \geq P_0$, the corresponding RSSI value has a high probability; when $f\left(RSSI^{d}_{i,m} \mid RF_k'\right) < P_0$, the corresponding RSSI value has a small probability, as defined by:

$$P_0 \leq f\left(RSSI^{d}_{i,m} \mid RF_k'\right) \leq 1. \qquad (9)$$

According to general engineering practice, $P_0 = 0.6$, and the effective interval of the RSSI value follows from Equation (9). Figure 5 shows the RSSI value of a tag collected at a fixed point several times:

$$\sqrt{2\sigma'_{RSSI}\ln\left(3.8\,\sigma'_{RSSI}\right)} + u'_{RSSI} \leq RSSI^{d}_{i,m} \leq \sqrt{2\sigma'_{RSSI}\ln\left(6.3\,\sigma'_{RSSI}\right)} + u'_{RSSI}. \qquad (10)$$
Figure 5. The RSSI values of a tag collected at a fixed point several times: (a) the RSSI values before the Gaussian process; (b) the RSSI values after the Gaussian process.
The final output of the Gaussian process is the arithmetic mean of all the RSSI values within the interval $\left[\sqrt{2\sigma'_{RSSI}\ln\left(3.8\,\sigma'_{RSSI}\right)} + u'_{RSSI},\ \sqrt{2\sigma'_{RSSI}\ln\left(6.3\,\sigma'_{RSSI}\right)} + u'_{RSSI}\right]$, and is given by

$$F_{out\_RSSI} = \frac{1}{k}\sum_{e=1}^{k} RSSI^{e}_{i,m}, \qquad (11)$$

where $k$ is the number of RSSI values that satisfy Equation (9).
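To make the two-stage filtering rule above concrete, the following is a minimal Python/NumPy sketch (the authors' evaluation used MATLAB; Python is used here purely for illustration). It applies the $2\sigma_{RSSI}$ exclusion of Equations (6) and (7), recomputes the statistics of Equation (8), keeps the readings whose Gaussian density satisfies the threshold of Equation (9), and returns the mean of Equation (11). The function name and the fallback behaviour are illustrative assumptions, and the density threshold of Equation (9) is applied directly instead of the closed-form interval of Equation (10).

```python
import numpy as np

def gaussian_filter_rssi(rssi, p0=0.6):
    """Two-stage Gaussian filtering of repeated RSSI readings at a fixed point.

    rssi : 1-D array of t repeated RSSI measurements (dBm) for one tag/reader.
    p0   : density threshold of Equation (9); 0.6 is the value quoted in the paper.
    Returns the arithmetic mean of the readings kept by the filtering rules.
    """
    rssi = np.asarray(rssi, dtype=float)

    # Stage 1 (Equations (6)-(7)): drop readings outside the 2-sigma band.
    u = rssi.mean()
    sigma = rssi.std(ddof=1)
    kept = rssi[np.abs(rssi - u) < 2.0 * sigma]

    # Stage 2 (Equation (8)): recompute mean/variance on the remaining readings.
    u2 = kept.mean()
    sigma2 = kept.std(ddof=1)

    # Equation (9): keep readings whose Gaussian density is at least p0.
    density = np.exp(-((kept - u2) ** 2) / (2.0 * sigma2 ** 2)) / (np.sqrt(2.0 * np.pi) * sigma2)
    final = kept[density >= p0]

    # Equation (11): the filtered RSSI is the mean of the surviving readings
    # (fall back to the stage-2 mean if the threshold removes everything).
    return final.mean() if final.size else u2
```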
3.2. ELM Learning Method
The proposed ELM method treats the localization problem as a regression problem. Suppose the given training data are $\mu = \{(x_i, t_i) \mid x_i \in \mathbb{R}^{n}, t_i \in \mathbb{R}^{m}, i = 1, 2, \cdots, N\}$, and set rank$(H_0) = L$, $L \leq N$, where $H_0$ denotes the hidden-layer output matrix. The ELM network with $L$ hidden nodes is shown in Figure 3, and the output function of this network can be expressed as follows:

$$f_L\left(x_j\right) = \sum_{i=1}^{L}\beta_i\, G\left(a_i, b_i, x_j\right), \quad j = 1, 2, \cdots, N, \qquad (12)$$

where $\beta_i$ is the weight connecting the $i$th hidden node to the output node, $a_i$ and $b_i$ are the learning parameters of the hidden nodes, and $G\left(a_i, b_i, x_j\right)$ is the output of the $i$th hidden node.
For the additive hidden node, $G\left(a_i, b_i, x_j\right) = g\left(a_i \cdot x_j + b_i\right)$, $b_i \in \mathbb{R}$, and if SLFNs with $L$ hidden nodes can approximate the $N$ samples with zero error, there exist $\beta_i$, $a_i$ and $b_i$ such that

$$\sum_{i=1}^{L}\beta_i\, g\left(a_i \cdot x_j + b_i\right) = t_j, \quad j = 1, 2, \cdots, N, \qquad (13)$$

where $a_i = [w_{i1}, w_{i2}, \cdots, w_{in}]$ and $x_j = \left[x_{1j}, x_{2j}, \cdots, x_{nj}\right]^{T}$.
The ELM learning algorithm consists of three main procedures as shown in Figure 6.
Figure 6. The learning procedure of the ELM algorithm.
Step 1: Randomly assign the input parameters: input weights $a_i$ and biases $b_i$, $i = 1, 2, \cdots, L$.

Step 2: Calculate the initial hidden-layer output matrix $H_0$:

$$H_0 = \begin{bmatrix} G\left(a_1, b_1, x_1\right) & \cdots & G\left(a_L, b_L, x_1\right) \\ \vdots & \ddots & \vdots \\ G\left(a_1, b_1, x_N\right) & \cdots & G\left(a_L, b_L, x_N\right) \end{bmatrix}_{N\times L}. \qquad (14)$$

Step 3: Estimate the initial output weight $\beta^{(0)}$. For $T_0 = [t_1, t_2, \cdots, t_N]^{T}_{N\times m}$, the problem is equivalent to minimizing the residual of $H_0\beta = T_0$, which can be written as

$$\min_{\beta}\left\|H_0\beta - T_0\right\|. \qquad (15)$$

The optimal solution is given by $\beta^{(0)} = H_0^{+}T_0$, where $H_0^{+}$ is the Moore–Penrose generalized inverse of the hidden-layer output matrix $H_0$.
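A compact Python/NumPy sketch of Steps 1–3 is given below (again, the original evaluation used MATLAB; this is only an illustrative reading of the procedure). It uses a sigmoid hidden layer, which is the activation function selected later in Section 4.2, and the Moore–Penrose pseudo-inverse of Step 3. The helper names `elm_train` and `elm_predict` are not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_train(X, T, L, rng=np.random.default_rng(0)):
    """Train a basic ELM regressor.

    X : (N, n) training inputs, T : (N, m) training targets, L : hidden nodes.
    Returns (a, b, beta): random input weights, biases, and output weights.
    """
    n = X.shape[1]
    # Step 1: randomly assign input weights a_i and biases b_i.
    a = rng.uniform(-1.0, 1.0, size=(n, L))
    b = rng.uniform(-1.0, 1.0, size=(1, L))
    # Step 2: hidden-layer output matrix H0 = G(a, b, X), shape (N, L).
    H0 = sigmoid(X @ a + b)
    # Step 3: beta = H0^+ T0 via the Moore-Penrose pseudo-inverse.
    beta = np.linalg.pinv(H0) @ T
    return a, b, beta

def elm_predict(X, a, b, beta):
    """Predict targets (e.g., (x, y) coordinates) for new RSSI feature vectors."""
    return sigmoid(X @ a + b) @ beta
```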
3.3. Overall System
The framework of the proposed ELM with a Gaussian process for IPS is shown in Figure 7. It contains two localization phases: the offline phase and the online phase. During the offline phase, the collected RSSI of the reference tags and their physical locations are adopted as the training inputs and training targets, respectively. The cluster classification is conducted for all DQ values in the database through ELM learning to define the parameters of the network and establish an initial ELM regression model of the environment characteristics for online positioning. During the online phase, the real-time DQ values are fed into the trained ELM model, and then the estimated location is calculated.
The main task of the ELM learning algorithm here is the cluster analysis of the training sample sets stored in the RFID localization database. Define $Tag_i = \{L_i, RSSI_i\}$, where $L_i = (x_i, y_i)$ is the coordinate of the $i$th tag and $RSSI_i = \{RSSI_{i,1}, RSSI_{i,2}, \cdots, RSSI_{i,m}\}$ is its corresponding RSSI vector. In addition, at time $T$, the reader repeatedly obtains the RSSI value of the tag at a fixed point $L_i$; namely, $DQ_m = \{RSSI^{1}_{i,m}, RSSI^{2}_{i,m}, \cdots, RSSI^{t}_{i,m}\}$ $(m = 1, 2, \cdots, 4)$ is the acquired RSSI vector for tag $i$ measured $t$ times by the $m$th reader. All $DQ$ are collected to build an original database. As discussed in Section 3.1, the RSSI values in the original database are processed by the Gaussian filter to reduce the signal fluctuation caused by indoor environmental effects. Furthermore, in order to avoid the domination of large feature values, the datasets are normalized to the range [0, 1] using the corresponding maximum and minimum values.
Figure 7. The framework of ELM IPS based on Gaussian process.
The input training data sets in Figure 7 are $\left\{(x, y) \mid DQ'_1, DQ'_2, DQ'_3, DQ'_4\right\}$, where $(x, y)$ represents the coordinates of the point to be measured, and $DQ'_1, DQ'_2, DQ'_3, DQ'_4$ represent its corresponding RSSI values, which have been processed by the Gaussian filter and normalized. The new database is utilized to select the model parameters, which are used to construct the ELM-based positioning model; the corresponding ELM learning procedure is shown in Figure 6.
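As an illustration of the offline data preparation in Figure 7, the sketch below assembles the training matrices from the per-reader RSSI readings and applies the min–max normalization to [0, 1] mentioned above. It reuses the `gaussian_filter_rssi` sketch from Section 3.1; the array layout (one block of repeated readings per tag and reader) and the function name are assumptions made for illustration.

```python
import numpy as np

def build_training_set(raw_rssi, positions):
    """Assemble the offline training set of Figure 7.

    raw_rssi : array of shape (N_tags, n_readers, t) -- repeated RSSI readings
               for every reference tag and reader (hypothetical layout).
    positions: array of shape (N_tags, 2) -- known (x, y) tag coordinates.
    Returns (X, T): filtered, min-max normalized inputs and coordinate targets.
    """
    raw_rssi = np.asarray(raw_rssi, dtype=float)
    n_tags, n_readers, _ = raw_rssi.shape
    # One filtered RSSI value (DQ'_m) per tag and reader, as in Section 3.1.
    dq = np.array([[gaussian_filter_rssi(raw_rssi[i, m])
                    for m in range(n_readers)] for i in range(n_tags)])
    # Min-max normalization of every feature column to [0, 1].
    lo, hi = dq.min(axis=0), dq.max(axis=0)
    X = (dq - lo) / (hi - lo)
    return X, np.asarray(positions, dtype=float)
```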
4. Experimental Results and Analysis
In order to evaluate the performance of the proposed method, several experiments were conducted. In our experiments, a Laird S8658WPL UHF RFID system (including the reader and receiving antenna, LJYZN, Shanghai, China) was used to collect the RFID signals. The main working frequency of the UHF RFID unit of this device is 865–965 MHz, the gain is 6 dBiC and the power is 10 W. In addition, the RSSI unit is dBm. All evaluation processes were conducted in a MATLAB (R2013b version, MathWorks Corporation, Natick, MA, USA) environment running on a Windows 7 machine with an Intel Core i7 central processing unit (CPU) at 3.07 GHz and 4 GB of random access memory (RAM).
4.1. Experimental Environment
The simulated environment is illustrated in Figure 8. The size of the experimental environment is approximately 11 m × 6 m, where a total of 60 reference tags and four UHF RFID antennas were used. The distance between every two adjacent reference tags was 1.2 m, and the four antennas were connected to a reader and fixed in the four corners of the localization area. When the reader is working, the four antennas receive the information (including the RSSI values between the tags and the antennas) that comes from the tag feedback. In order to obtain an accurate RSSI value from each reference tag, the RSSI value of the tag is repeatedly measured 100 times $(t = 100)$ by the antenna at a fixed point, and Gaussian filtering is used to process these RSSI values as described in Section 3.1. Finally, the experiment yielded 240 groups of sample data, including the RSSI values of the tags and their corresponding positions (60 RSSI data elements for each antenna), and the sample data are regarded as the input values for the ELM network shown in Figure 9, where

$$x = [x_1, x_2, x_3, x_4]^{T} = \left[DQ'_1, DQ'_2, DQ'_3, DQ'_4\right]^{T}. \qquad (16)$$
Figure 8. The simulated experimental environment.
Figure 9. The architecture of the ELM network-based positioning system.
As shown in Figure 9, the ELM network architecture has nine inputs, each representing a set of RSSI vectors, and two outputs, representing the geographic coordinates of the tag position.
4.2. Selection of Parameters for the ELM Model
For the proposed ELM-based positioning model, there are two important parameters that need to be selected, namely, the type of activation function $G(a, b, x)$ and the number of hidden nodes $L$. The sine activation function $G(a, b, x) = \sin(a \cdot x + b)$, the hardlim activation function $G(a, b, x) = \mathrm{hardlim}(a \cdot x + b)$ and the sigmoid activation function $G(a, b, x) = 1/\left(1 + \exp\left(-(a \cdot x + b)\right)\right)$ have been used to evaluate the performance of the ELM with different numbers of hidden nodes. The results of the parameter selection of the optimal number of hidden nodes for ELM with the sigmoid, sine and hardlim activation functions are shown in Figure 10.
Figure 10. Average positioning error for different activation functions and different numbers of hidden nodes.
In Figure 10, the performance of ELM with the sigmoid activation function does not differ significantly from that with the sine activation function. For the sigmoid function, the increase in the average error is relatively small when the number of hidden nodes exceeds 60. In addition, the performance of the hardlim function is the most stable among the three activation functions, but its average error is the worst. Thus, in the following simulations, we selected the sigmoid function as the activation function of the ELM, with the number of hidden nodes $L$ set to 28, as shown in Table 1. The distribution of the RSSI indications from the four readers in the experiment is illustrated in Figure 11.
Figure 11. The distribution of RSSI indications from the four readers: (a) the 3D RSSI values with Reader 1; (b) the 3D RSSI values with Reader 2; (c) the 3D RSSI values with Reader 3; (d) the 3D RSSI values with Reader 4.
Table 1. Parameter settings of ELM algorithm.
Algorithm Activation Function Number of Hidden Nodes (L) Type
ELM Sigmoid 28 0 (Regression)
As shown in Figure 11, the signals collected by one reader can be quite different even at the same location due to noise and RSSI fluctuations in the indoor environment.
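The parameter selection described in this subsection can be reproduced with a simple grid search such as the following sketch, which reuses the `elm_train` and `elm_predict` helpers sketched in Section 3.2 and averages the positioning error over repeated random initializations. The candidate node counts, the number of repetitions and the validation split are illustrative choices rather than values fixed by the paper.

```python
import numpy as np

def mean_position_error(pred, truth):
    """Average Euclidean distance (m) between estimated and true coordinates."""
    return float(np.mean(np.linalg.norm(pred - truth, axis=1)))

def select_elm_parameters(X_train, T_train, X_val, T_val,
                          node_grid=range(5, 101, 5), repeats=10):
    """Grid-search the number of hidden nodes L for the sigmoid ELM."""
    best = (None, np.inf)
    for L in node_grid:
        errors = []
        for seed in range(repeats):
            a, b, beta = elm_train(X_train, T_train, L,
                                   rng=np.random.default_rng(seed))
            errors.append(mean_position_error(elm_predict(X_val, a, b, beta), T_val))
        avg = float(np.mean(errors))
        if avg < best[1]:
            best = (L, avg)
    return best  # (best L, its average positioning error)
```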
4.3. Comparison of Data Quality
As described in Section 3.1, the data Gaussian filtering method improves the quality of the training data in the offline phase and of the model input data in the online phase, respectively. In order to analyze the influence of data quality on the precision of the ELM-based positioning model, this study uses four different scenes, and the algorithm is repeated 10 times in each scenario experiment. The details are as follows, the results of location estimation for the impact of data quality on the positioning model are shown in Figure 12, and Table 2 summarizes the positioning results in the different scenes.

Case 1. In the offline phase, the training samples are not filtered; in the online phase, the test samples (the RSSI information of each tag) are not filtered and are used directly as the input of the model.

Case 2. In the offline phase, the training samples are not filtered; however, in the online phase, for the RSSI information of every tag, the reader repeats the measurement 100 times at a fixed point, the Gaussian filter is applied to the collected RSSI, and the results are used as the input of the model.

Case 3. In the offline phase, the training samples are filtered according to the corresponding rules in this study; in the online phase, the test samples (the RSSI information of every tag) are not filtered and are used directly as the input of the model.

Case 4. In the offline phase, the training samples are filtered according to the corresponding rules in this study; in the online phase, for the RSSI information of every tag, the reader repeats the measurement 100 times at a fixed point, the Gaussian filter is applied to the collected RSSI, and the results are used as the input of the model.
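The four scenes amount to two flags, namely whether Gaussian filtering is applied to the offline training samples and to the online test samples; the following fragment is only a compact restatement of the cases above, with illustrative names.

```python
# The four data-quality scenes of Section 4.3, expressed as flags that control
# whether Gaussian filtering is applied to the offline training samples and to
# the online test samples.
CASES = {
    "Case 1": {"filter_offline": False, "filter_online": False},
    "Case 2": {"filter_offline": False, "filter_online": True},
    "Case 3": {"filter_offline": True,  "filter_online": False},
    "Case 4": {"filter_offline": True,  "filter_online": True},
}
```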
Figure 12. Impact of data quality on the positioning model: (a) Case 1; (b) Case 2; (c) Case 3; (d) Case 4.
Table 2. The positioning error results in different scenes (m).
Positioning Scenes Min Error Max Error Average Error Average MSE
Case 1 0.1003 4.5780 1.3279 1.5079
Case 2 0.1125 6.3260 1.8074 2.2991
Case 3 0.3181 9.2672 2.4641 3.3730
Case 4 1.1505 1.5264 0.7114 1.0562
As observed from Table 2, Case 4 has the smallest average error and MSE values (0.7114 and 1.0562, respectively), while Case 3 has the largest average error and MSE values (2.4641 and 3.3730, respectively). Therefore, data filtering in both the offline training phase and the online positioning phase can effectively improve the positioning accuracy of the model. We also found that, when the positioning model is built from high-quality training samples, the input data for the online positioning phase also need to be of high quality.
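For reference, the statistics reported in Tables 2 and 3 (minimum, maximum and average positioning error, plus an average MSE) can be computed along the lines of the sketch below. The paper does not spell out its exact MSE convention, so the per-coordinate squared-error average used here is an assumption.

```python
import numpy as np

def positioning_error_stats(pred, truth):
    """Summarize positioning errors between predicted and true (x, y) positions."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    dist = np.linalg.norm(pred - truth, axis=1)  # Euclidean error per test tag
    mse = np.mean((pred - truth) ** 2)           # assumed MSE convention
    return {"min": dist.min(), "max": dist.max(),
            "average": dist.mean(), "mse": mse}
```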
4.4. Comparison with Other Algorithms
This section aims to further evaluate the performance of the proposed algorithm. The GA-BP and PSO-BP positioning methods [30,31], without filtering of the training and testing samples, are implemented and compared, and the evaluation criteria include the positioning error and the algorithm running time. Since the same ANN structure and the same training set might yield different results, each positioning algorithm was run 10 times repeatedly. The results of location estimation for the different positioning methods are presented in Figure 13, and the positioning error results are summarized in Table 3.
Table 3. The positioning error results for different positioning methods (m).
Positioning Method Min Error Max Error Average Error Average MSE
ELM-Gauss filtering 0.0820 1.6060 0.7009 0.7910
ELM 0.2642 4.5819 1.3237 1.6804
GA-BP-Gauss filtering 0.1470 2.0779 0.9007 0.9366
GA-BP 0.3723 6.9965 1.3343 2.0387
PSO-BP-Gauss filtering 0.1317 1.6027 0.8027 0.8126
PSO-BP 0.2854 8.8549 1.5821 1.9898
4.4.1. Positioning Error
It can be seen from Figure 13 and Table 3 that the positioning accuracy of the proposed ELM-Gaussian filtering method is obviously superior to that of the other five methods: ELM (without Gaussian filtering), GA-BP-Gaussian filtering (with filtering of the training and testing samples), GA-BP (without Gaussian filtering), PSO-BP-Gaussian filtering (with filtering of the training and testing samples) and PSO-BP (without Gaussian filtering). ELM-Gaussian filtering reduces the average positioning error by 47.05% relative to ELM, 22.18% relative to GA-BP-Gaussian filtering, 47.47% relative to GA-BP, 12.68% relative to PSO-BP-Gaussian filtering and 55.69% relative to PSO-BP; GA-BP-Gaussian filtering reduces the average positioning error by 32.50% relative to GA-BP, and PSO-BP-Gaussian filtering reduces the average positioning error by 49.26% relative to PSO-BP. However, ELM (without Gaussian filtering) and GA-BP (without Gaussian filtering) provide similar average positioning errors of 1.3237 m and 1.3343 m, respectively.
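Here the reported percentages are relative reductions of the average error in Table 3. For example, for ELM-Gaussian filtering versus plain ELM,

$$\frac{1.3237 - 0.7009}{1.3237} \approx 47.05\%,$$

and the remaining figures follow from the other rows of Table 3 in the same way.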
Figure 13. Comparison with the other positioning methods: (a) ELM-Gaussian filtering (with filtering of the training and testing samples); (b) ELM (without Gaussian filtering); (c) GA-BP-Gaussian filtering (with filtering of the training and testing samples); (d) GA-BP (without Gaussian filtering); (e) PSO-BP-Gaussian filtering (with filtering of the training and testing samples); (f) PSO-BP (without Gaussian filtering).
Therefore, effective data filtering (of both the training and test samples) can greatly improve the positioning accuracy of machine-learning-based methods, and the average positioning accuracy of the ELM method is better than that of the GA-BP and PSO-BP methods.
4.4.2. Computational Time
In order to further evaluate the performance of the proposed algorithm, the efficiency of the three machine learning algorithms (ELM, GA-BP and PSO-BP) in the positioning task was compared, and each positioning algorithm was run three times consecutively. Tables 4 and 5 list the parameters of the GA-BP and PSO-BP positioning algorithms, respectively, and Table 6 lists the computational time of the three algorithms. In terms of computational time, as observed from Table 6, ELM requires about 0.5156 s in the training phase, reducing the training time relative to GA-BP (4.2998 s) by a factor of roughly 8 and relative to PSO-BP (3.9891 s) by a factor of roughly 7, and ELM requires about 0.1078 s in the testing phase, again far less than GA-BP (1.5326 s) and PSO-BP (1.5198 s). This demonstrates that the ELM algorithm is superior to the GA-BP and PSO-BP learning algorithms in RFID-enabled positioning systems, since the core of ELM is to transform the complex iterative training process into the random generation of the hidden-layer parameters.
Table 4. Parameter setting of GA-BP algorithm.
Algorithm Maxgen Popsize Crossover Rate Mutation Rate
GA-BP 100 50 0.75 0.01
Table 5. Parameter setting of PSO-BP algorithm.
Algorithm Maxgen Popsize Learning Factor Weight
PSO-BP 100 50 0.5 0.5
Table 6. The performance comparison of training and testing time.
Run Positioning Method Training Time (s) Testing Time (s)
1 ELM 0.5156 0.1078
1 GA-BP 4.2998 1.5326
1 PSO-BP 3.9891 1.5196
2 ELM 0.5945 0.1916
2 GA-BP 4.0157 1.2564
2 PSO-BP 4.2803 1.4267
3 ELM 0.6416 0.1277
3 GA-BP 4.4682 1.6831
3 PSO-BP 4.5002 1.4652
5. Discussion
Based on the obtained results, it can be concluded that the ELM regression model can achieve higher computational efficiency in the positioning system, since only the number of hidden nodes needs to be set and the algorithm does not need to iteratively adjust the input weights and biases of the network. Additionally, RFID-based indoor positioning exhibits signal reflection and fading due to the spatial layout and multipath effects of the indoor environment, which lead to larger fluctuations of signal strength in the acquired data samples. Therefore, Gaussian filtering is proposed to reduce the fluctuation of the RSSI signal, and our experiments verify that ELM-Gaussian filtering (with filtering of both the training and testing samples) provides higher positioning accuracy than the other five methods. We conclude that the combination of Gaussian filtering (of the training and testing samples) and ELM can effectively solve the positioning problem in indoor environments.
6. Conclusions
The data quality of the online and offline phases is analyzed separately, which shows that a high-quality positioning model can obviously improve the accuracy of indoor positioning. In addition, the time consumption of indoor positioning is also analyzed and compared; the proposed algorithm estimates the current coordinates of the tag by building the ELM positioning model from the RSSI measurement values and achieves rapid positioning.

In this study, to solve the inaccurate localization and efficiency problems in a complex positioning environment, an ELM algorithm based on Gaussian filtering is proposed, which introduces the Gaussian filtering rule that can effectively filter out the significant signal fluctuations caused by environmental effects and provides high-quality sample data. The proposed ELM-Gaussian filtering method can quickly establish a high-quality positioning model and achieve fast position prediction. The experimental results demonstrate that the proposed positioning system effectively avoids environmental interference and achieves the highest accuracy and efficiency among all of the compared methods.