Article
Intelligent RFID Indoor Localization System Using a
Gaussian Filtering Based Extreme Learning Machine
Changzhi Wang, Zhicai Shi * and Fei Wu *
School of Electrical and Electronic Engineering, Shanghai University of Engineering Science,
Shanghai 201620, China; wcz1990ah@126.com
* Correspondence: shizhicai@sues.edu.cn (Z.S.); wufei@sues.edu.cn (F.W.);
Tel.: +8602167791131 (Z.S.); +8602167791035 (F.W.)
Academic Editor: Ka Lok Man
Received: 12 November 2016; Accepted: 20 February 2017; Published: 26 February 2017
Abstract: Nowadays, the increasing demand for location-based services (LBS) has spurred the rapid development of indoor positioning systems (IPS). However, the performance of an IPS is affected by fluctuation of the measured signal. In this study, a Gaussian filtering algorithm based on an extreme learning machine (ELM) is proposed to address the problem of inaccurate indoor positioning when significant Received Signal Strength Indication (RSSI) fluctuations occur during the measurement process. The Gaussian filtering method is analyzed and compared, and it can effectively filter out the fluctuating signals caused by environmental effects in an RFID-based positioning system. Meanwhile, the fast learning ability of the proposed ELM algorithm reduces the time consumed by the offline and online services, and establishes a network positioning regression model between the signal strengths of the tags and their corresponding positions. The proposed positioning system is tested in a real experimental environment. The system test results demonstrate that the positioning algorithm not only provides higher positioning accuracy but also achieves higher computational efficiency than previous algorithms.
Keywords: indoor positioning system; RFID; extreme learning machine; Gaussian filtering
1. Introduction
Indoor positioning systems (IPS) are an emerging technology, driven by the increasing popularity of and demand for indoor location-based services [1,2]. One of the most famous location-aware services is the Global Positioning System (GPS), but it requires a direct line-of-sight between receiver and satellite in an unshielded environment. GPS is therefore used for outdoor localization, and its performance is very poor in indoor environments [3]. Consequently, different types of indoor positioning systems have been developed for personal and commercial needs [4,5]. Among all of the indoor positioning methods, Benavente-Peces et al. [6] found that Radio Frequency Identification (RFID) is the best choice for indoor positioning with high accuracy and low cost.
RFID is a rapidly developing technology that uses radio frequency (RF) signals to automatically identify objects. It has been widely applied in various fields such as tool tracking, process management, access control and supply chains, for example in access cards and electronic wallets. A typical RFID network, illustrated in Figure 1, consists of three different entities: RFID readers, tags and servers. The RFID system has also been applied in the localization field for the real-time tracking and positioning of indoor targets. First, Shiraishi et al. [7] proposed an indoor position estimation system based on UHF RFID. Furthermore, Wang et al. [8] successfully designed an indoor personnel tracking and positioning system based on RFID technology combined with images. Similarly, Montaser et al. [9] presented a low-cost indoor positioning and material tracking methodology for construction projects using passive UHF RFID technology. Goodrum et al. [10] also explored the applications of UHF RFID technology for tool tracking on construction job sites. Therefore, UHF RFID is used for indoor positioning in this paper.
Symmetry 2017, 9, 30; doi:10.3390/sym9030030 www.mdpi.com/journal/symmetry
Figure 1. Architecture of a typical RFID system.
Currently, an extensive body of IPS research has focused on machine learning approaches such as the Extreme Learning Machine (ELM) [11], Artificial Neural Networks (ANN) and Support Vector Machines (SVM) [12], in order to overcome the shortcomings of traditional positioning methods. Wu et al. [13] applied an ANN to overcome the limitations of the empirical positioning formula used in previous research. The ANN can learn geographic features to adapt to the real world, which avoids the impact of the multipath phenomenon and can be flexibly applied to any environment. However, the weights of an ANN are not easy to determine. For that reason, Kuo et al. [14] proposed a feature-selection-based Back-Propagation (BP) neural network that uses an Artificial Immune System (AIS) (BP-AIS) to determine the connecting weights of the ANN, and forecasts the positions of picking staff for warehouse management. Similarly, Ceravolo et al. [15] applied a genetic algorithm (GA) to determine the parameters of a BP network (GA-BP) and applied it to predict and control the environment temperature.
In addition, Zou et al. [16] proposed the ELM to overcome the indoor positioning problem and, through experimental evaluation, reported that an ELM-based positioning system can provide higher positioning accuracy and robustness than a traditional method. Jiang et al. [17] presented a fusion positioning framework with a Particle Filter using Wi-Fi signals and motion sensors, which uses the ELM regression algorithm to predict the location of the target. The system consists of three main modules: a sensor-data-based position model (ELM regression), the fusion Particle Filter model and the Wi-Fi-based position model. The results show that the proposed method achieves good performance, for example, better accuracy than the traditional fingerprint method. Fu et al. [18] proposed the kernel ELM to locate the target position and compared it with machine learning algorithms such as SVM [19,20] and BP, and the results reveal that the kernel ELM has good prediction performance and a fast learning speed.
It is worth mentioning that the above research has made good progress. Unfortunately, most of these studies have neglected the filtering of experimental data, even though the RSSI is affected by many factors, such as indoor temperature, humidity and the multipath effect, which lead to large fluctuations of the RSSI. Therefore, this paper proposes an indoor positioning system based on a Gaussian filter combined with an ELM and verifies its performance via comparison with prevailing methods.
The remainder of this paper is organized as follows. Section 2 presents the related work of this research. The proposed method is explained in Section 3. Section 4 describes the experimental design and comparison analysis. Section 5 discusses the experimental results. Finally, Section 6 presents the conclusions and future research directions.
2. Related Works
2.1. IPS Technologies
At present, there are four frequently used localization technologies based on the measurement of distances or angles between reference points: Time of Arrival (TOA), Time Difference of Arrival (TDOA), Angle of Arrival (AOA) [21] and Received Signal Strength Indication (RSSI) [22], as shown in Figure 2. Hatami et al. [23] describe in detail the performance differences of each localization system. TOA- and TDOA-based systems are more suitable for outdoor or large-scale open indoor environments, while RSSI-based systems are the most suitable for tight indoor environments. Furthermore, Reference [24] found that positioning systems based on RSSI measurement have two main advantages: simple implementation and cost effectiveness. Taking the technological problems and considerable costs into account, this study adopts the basic RSSI-measurement-based localization method.
Figure 2. Four measurement based positioning techniques: (a) TOA based estimation; (b) TDOA based estimation; (c) AOA based estimation; (d) RSSI based estimation.
2.2. ELM Algorithm
ELM [25] is a machine learning algorithm based on a Single-hidden Layer Feed-forward neural Network (SLFN) architecture, developed by Huang et al. It has been demonstrated that ELM can provide good generalization performance at an extremely fast learning speed [26,27]. Given $N$ arbitrary distinct training samples $\mu = \{(X_i, t_i) \mid X_i \in \mathbb{R}^{1\times n}, t_i \in \mathbb{R}^{1\times m}, i = 1, 2, \cdots, N\}$, $X_i$ is an $n \times 1$ input vector $X_i = [x_{i1}, x_{i2}, \cdots, x_{in}]^T$, and $t_i$ is an $m \times 1$ target vector $t_i = [t_{i1}, t_{i2}, \cdots, t_{im}]^T$. The goal of regression is to find the relationship between $X_i$ and $t_i$. Since the only parameters to be optimized are the output weights, the training of ELM is equivalent to solving a least squares problem [28]. Figure 3 shows the typical network structure of the SLFN.
Figure 3. The SLFN network structure with $L$ hidden neurons.
In the training process, the first stage is that $h(x_i)$ maps the data from the $n$-dimensional input space to the $L$-dimensional hidden-layer feature space (ELM feature space):
$$h : x_i \to h(x_i), \qquad (1)$$
where $h(x_i) = [h_1(x_i), h_2(x_i), \cdots, h_L(x_i)]$ is the output row vector of the hidden layer, $h(x_i) \in \mathbb{R}^{1\times L}$.
The mathematical model of the SLFNs can be summarized as Equation (2):
$$H\beta = T, \qquad (2)$$
where $H \in \mathbb{R}^{N\times L}$ is the hidden layer output matrix, $\beta \in \mathbb{R}^{L\times m}$ is the output weight matrix with rows $\beta_i = [\beta_{i1}, \beta_{i2}, \cdots, \beta_{im}]^T$, and $T \in \mathbb{R}^{N\times m}$ is the target matrix:
$$H = \begin{bmatrix} h(x_1) \\ h(x_2) \\ \vdots \\ h(x_N) \end{bmatrix}_{N\times L}, \quad \beta = \begin{bmatrix} \beta_1^T \\ \beta_2^T \\ \vdots \\ \beta_L^T \end{bmatrix}_{L\times m}, \quad T = \begin{bmatrix} t_1^T \\ t_2^T \\ \vdots \\ t_N^T \end{bmatrix}_{N\times m}. \qquad (3)$$
Each output of ELM is given by Equation (4):
$$t_i = h(x_i)\beta, \quad i = 1, 2, \cdots, N. \qquad (4)$$
ELM theory minimizes not only the training error but also the norm of the output weights [29]:
$$\mathrm{Min:}\quad L_{DELM} = \frac{1}{2}\|\beta\|^2 + \frac{W}{2}\sum_{i=1}^{N}\|\eta_i\|^2,$$
$$\mathrm{s.t.:}\quad h(x_i)\beta = t_i^T - \eta_i^T, \quad i = 1, 2, \cdots, N, \qquad (5)$$
where $W$ is the penalty coefficient on the training errors, and $\eta_i \in \mathbb{R}^m$ is the training error vector of the $m$ output nodes with respect to the $i$th training pattern, $\eta_i = [\eta_{i,1}, \eta_{i,2}, \cdots, \eta_{i,m}]^T$.
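Equation (5) is a regularized least-squares problem: setting its gradient with respect to $\beta$ to zero gives the closed-form output weights $\beta = (I/W + H^T H)^{-1}H^T T$, the standard solution in the ELM literature. A minimal NumPy sketch (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def solve_output_weights(H, T, W=100.0):
    """Closed-form ELM output weights for the objective in Eq. (5):
    minimize 0.5*||beta||^2 + (W/2) * sum_i ||t_i - h(x_i) beta||^2.
    The minimizer is beta = (I/W + H^T H)^{-1} H^T T."""
    L = H.shape[1]
    return np.linalg.solve(np.eye(L) / W + H.T @ H, H.T @ T)

# toy check: with a very large penalty W the fit approaches the exact
# least-squares solution on noise-free targets
rng = np.random.default_rng(0)
H = rng.standard_normal((50, 10))
beta_true = rng.standard_normal((10, 2))
T = H @ beta_true
beta = solve_output_weights(H, T, W=1e6)
print(np.allclose(beta, beta_true, atol=1e-3))
```

Smaller values of $W$ shrink $\|\beta\|$ at the cost of a larger training error, which is exactly the trade-off Equation (5) encodes.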
3. The Proposed ELM Based IPS
3.1. Data Gaussian Filtering
Data Gaussian filtering is used to reduce the signal fluctuation of the RSSI, which is obtained by repeated measurements at a fixed position. Assume that $D = \{(F_i, p_i)\}_{i=1}^{N}$ $(F_i \in \mathbb{R}^m, p_i \in \mathbb{R}^n)$, where $F_i = \{RSSI_{i,1}, RSSI_{i,2}, \cdots, RSSI_{i,m}\}$ represents the RSSI measurement vector of a tag, containing the $m$ RSSI values received by the $m$ readers; the observed position of the tag is $p_i = (Coordinate_{i,x}, Coordinate_{i,y})$; and $N$ is the number of reference tags.
Define $RF_k = \{RSSI_{i,m}^1, \cdots, RSSI_{i,m}^c, \cdots, RSSI_{i,m}^t\}$ $(k = p_i)$ as the RSSI vector of the $i$th tag measured by reader $m$ at a fixed point $k$, where $t$ is the number of measurements, and $RF_{k'} = \{RSSI_{i,m}^1, \cdots, RSSI_{i,m}^e, \cdots, RSSI_{i,m}^d, \cdots, RSSI_{i,m}^l\}$ $(RF_{k'} \subseteq RF_k,\ e \le l \le t)$ as the RSSI vector processed by the $2\sigma_{RSSI}$ principle of the Gaussian distribution. Figure 4a shows the repeated measurement results from a specified tag at a fixed point $k$. From Figure 4a, it can be seen that the changes in the RSSI measurements can be approximated by a parametric distribution, and the Gaussian distribution fits the RSSI distribution from a fixed point (measured in dBm), as shown in Figure 4b.
Figure 4. The RSSI data and Gaussian fitting: (a) The RSSI measurement values from a specified tag at a fixed point; (b) The PDF and Gaussian fitting for the RSSI data.
Figure 4 implies that $RF_k$ can be modeled by the Gaussian distribution:
$$f\left(RSSI_{i,m}^c \mid RF_k\right) = \frac{1}{\sqrt{2\pi}\,\sigma_{RSSI}} e^{-\frac{(RSSI_{i,m}^c - u_{RSSI})^2}{2\sigma_{RSSI}^2}}, \qquad (6)$$
where $u_{RSSI}$ is the expected value of the RSSI and $\sigma_{RSSI}^2$ is the variance of the RSSI, given by
$$u_{RSSI} = \frac{1}{t}\sum_{c=1}^{t} RSSI_{i,m}^c, \qquad \sigma_{RSSI}^2 = \frac{1}{t-1}\sum_{c=1}^{t}\left(RSSI_{i,m}^c - u_{RSSI}\right)^2. \qquad (7)$$
According to the $2\sigma_{RSSI}$ principle of the Gaussian distribution, the small-probability readings with $\left|RSSI_{i,m}^c - u_{RSSI}\right| > 2\sigma_{RSSI}$ are excluded, and the data with $\left|RSSI_{i,m}^c - u_{RSSI}\right| < 2\sigma_{RSSI}$ are chosen as effective experimental data.
Since the signal strength values received from different tag locations are independent, $RF_{k'}$ still follows a Gaussian density distribution, given by:
$$f\left(RSSI_{i,m}^d \mid RF_{k'}\right) = \frac{1}{\sqrt{2\pi}\,\sigma'_{RSSI}} e^{-\frac{(RSSI_{i,m}^d - u'_{RSSI})^2}{2\sigma'^2_{RSSI}}}, \qquad (8)$$
where $u'_{RSSI} = \frac{1}{l}\sum_{d=1}^{l} RSSI_{i,m}^d$, $\sigma'^2_{RSSI} = \frac{1}{l-1}\sum_{d=1}^{l}\left(RSSI_{i,m}^d - u'_{RSSI}\right)^2$, and $l$ is the number of remaining measured values after processing by the $2\sigma_{RSSI}$ principle of the Gaussian distribution.
When the function value $f(RSSI_{i,m}^d \mid RF_{k'}) \ge P_0$, the corresponding RSSI value has a high probability; when $f(RSSI_{i,m}^d \mid RF_{k'}) < P_0$, the corresponding RSSI value has a small probability, as defined by:
$$P_0 \le f\left(RSSI_{i,m}^d \mid RF_{k'}\right) \le 1. \qquad (9)$$
Following general engineering practice, $P_0 = 0.6$, and the effective interval of RSSI values is obtained from Equation (9). Figure 5 shows the RSSI values of a tag collected at a fixed point several times:
$$\sqrt{2\sigma'_{RSSI}\ln 3.8\sigma'_{RSSI}} + u'_{RSSI} \le RSSI_{i,m}^d \le \sqrt{2\sigma'_{RSSI}\ln 6.3\sigma'_{RSSI}} + u'_{RSSI}. \qquad (10)$$
Figure 5. The RSSI values of a tag collected at a fixed point several times: (a) the RSSI values before the Gaussian process; (b) the RSSI values after the Gaussian process.
The final output of the Gaussian process is the arithmetic mean of all the RSSI values within the interval $[\sqrt{2\sigma'_{RSSI}\ln 3.8\sigma'_{RSSI}} + u'_{RSSI},\ \sqrt{2\sigma'_{RSSI}\ln 6.3\sigma'_{RSSI}} + u'_{RSSI}]$, and is given by
$$F_{out}RSSI = \frac{1}{k}\sum_{e=1}^{k} RSSI_{i,m}^e, \qquad (11)$$
where $k$ is the number of values satisfying Equation (9).
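The filtering pipeline of Section 3.1 can be sketched in a few lines. The sketch below implements the mean and variance estimates of Equation (7), the $2\sigma_{RSSI}$ exclusion, and the final averaging of Equation (11); for simplicity it averages every reading retained by the $2\sigma$ rule rather than evaluating the $P_0$-based interval of Equation (10), so it is an illustrative approximation of the filter, not the authors' exact code:

```python
import numpy as np

def gaussian_filter_rssi(rssi, n_sigma=2.0):
    """Filter t repeated RSSI readings (dBm) from one tag at a fixed point.

    Stage 1 (Eq. 7): estimate the sample mean u and standard deviation
    sigma of the raw readings, with the (t - 1) denominator.
    Stage 2 (2*sigma principle): discard low-probability readings with
    |RSSI - u| > n_sigma * sigma.
    Stage 3 (Eq. 11, simplified): return the arithmetic mean of the
    surviving readings as the filtered RSSI value.
    """
    rssi = np.asarray(rssi, dtype=float)
    u = rssi.mean()
    sigma = rssi.std(ddof=1)          # (t - 1) denominator, as in Eq. (7)
    if sigma == 0.0:                  # all readings identical
        return u
    kept = rssi[np.abs(rssi - u) < n_sigma * sigma]
    return kept.mean()

# example: 10 readings around -52 dBm with one multipath outlier at -45 dBm
readings = [-52.1, -51.8, -52.3, -52.0, -51.9,
            -52.2, -52.1, -51.7, -52.0, -45.0]
print(round(gaussian_filter_rssi(readings), 2))  # -52.01, outlier removed
```

The raw mean of the ten readings is about -51.31 dBm; dropping the single outlier pulls the estimate back to -52.01 dBm, which is the effect the paper attributes to the Gaussian process in Figure 5.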
3.2. ELM Learning Method
The proposed ELM method treats the localization problem as a regression problem. Suppose the given training data are $\mu = \{(x_i, t_i) \mid x_i \in \mathbb{R}^n, t_i \in \mathbb{R}^m, i = 1, 2, \cdots, N\}$, and set rank$(H_0) = L$, $L \le N$, where $H_0$ denotes the hidden output matrix. The ELM network with $L$ hidden nodes is shown in Figure 3, and the output function of this network can be expressed as follows:
$$f_L(x_j) = \sum_{i=1}^{L}\beta_i\, G(a_i, b_i, x_j), \quad j = 1, 2, \cdots, N, \qquad (12)$$
where $\beta_i$ is the weight connecting the $i$th hidden node to the output node, $a_i$ and $b_i$ are the learning parameters of the hidden nodes, and $G(a_i, b_i, x_j)$ is the output of the $i$th hidden node.
For the additive hidden node, $G(a_i, b_i, x_j) = g(a_i \cdot x_j + b_i)$, $b_i \in \mathbb{R}$, and if SLFNs with $L$ hidden nodes can approximate the $N$ samples with zero error, there exist $\beta_i$, $a_i$ and $b_i$ such that
$$\sum_{i=1}^{L}\beta_i\, g(a_i \cdot x_j + b_i) = t_j, \quad j = 1, 2, \cdots, N, \qquad (13)$$
where $a_i = [w_{i1}, w_{i2}, \cdots, w_{in}]$ and $x_j = [x_{1j}, x_{2j}, \cdots, x_{nj}]^T$.
The ELM learning algorithm consists of three main procedures as shown in Figure 6.
Figure 6. The learning procedure of the ELM algorithm.
Step 1: Randomly assign the input parameters: input weights $a_i$ and biases $b_i$, $i = 1, 2, \cdots, L$.
Step 2: Calculate the initial hidden layer output matrix $H_0$:
$$H_0 = \begin{bmatrix} G(a_1, b_1, x_1) & \cdots & G(a_L, b_L, x_1) \\ \vdots & \ddots & \vdots \\ G(a_1, b_1, x_N) & \cdots & G(a_L, b_L, x_N) \end{bmatrix}_{N\times L}. \qquad (14)$$
Step 3: Estimate the initial output weight $\beta^{(0)}$. For $T_0 = [t_1, t_2, \cdots, t_N]^T_{N\times m}$, the problem is equivalent to solving $H_0\beta = T_0$, which can be written as
$$\min_{\beta}\ \|H_0\beta - T_0\|. \qquad (15)$$
The optimal solution is given by $\beta^{(0)} = H_0^{+}T_0$, where $H_0^{+}$ is the Moore–Penrose generalized inverse of the hidden layer output matrix $H_0$.
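Steps 1 to 3 reduce to a few lines of linear algebra. The NumPy sketch below uses a sigmoid activation for $G(a_i, b_i, x_j)$ and `np.linalg.pinv` for the Moore–Penrose inverse; the dimensions, activation choice and toy data are illustrative, not taken from the paper's experiments:

```python
import numpy as np

def elm_train(X, T, L=20, rng=None):
    """Train a basic ELM (Steps 1-3).
    X: (N, n) input matrix, T: (N, m) target matrix, L: hidden nodes."""
    rng = np.random.default_rng(rng)
    n = X.shape[1]
    a = rng.uniform(-1.0, 1.0, size=(L, n))    # Step 1: random input weights a_i
    b = rng.uniform(-1.0, 1.0, size=L)         # Step 1: random biases b_i
    H0 = 1.0 / (1.0 + np.exp(-(X @ a.T + b)))  # Step 2: sigmoid G(a_i, b_i, x_j)
    beta = np.linalg.pinv(H0) @ T              # Step 3: beta = H0^+ T0
    return a, b, beta

def elm_predict(X, a, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ a.T + b)))
    return H @ beta

# toy regression: recover a smooth 2-D "position" from 4 signal features
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 4))
T = np.c_[np.sin(X[:, 0]) + X[:, 1], np.cos(X[:, 2]) - X[:, 3]]
a, b, beta = elm_train(X, T, L=40, rng=0)
err = np.mean(np.abs(elm_predict(X, a, b, beta) - T))
print(err < 0.1)
```

Because only `beta` is learned, training is a single pseudo-inverse rather than iterative back-propagation, which is the source of the speed advantage the paper claims over BP-based positioning models.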
3.3. Overall System
The framework of the proposed ELM with a Gaussian process for IPS is shown in Figure 7. It contains two localization phases: the offline phase and the online phase. During the offline phase, the collected RSSI values of the reference tags and their physical locations are adopted as training inputs and training targets, respectively. Cluster classification is conducted for all DQ values in the database through ELM learning to define the parameters of the network and establish an initial ELM regression model of the environment characteristics for online positioning. During the online phase, the real-time DQ values are fed into the trained ELM model, and the estimated location is then calculated.
The main task of the ELM learning algorithm here is the cluster analysis of the training sample sets stored in the RFID localization database. Define $Tag_i = \{L_i, RSSI_i\}$, where $L_i = (x_i, y_i)$ is the coordinate of the $i$th tag and $RSSI_i = \{RSSI_{i,1}, RSSI_{i,2}, \cdots, RSSI_{i,m}\}$ is its corresponding RSSI vector. In addition, at time $T$, the reader repeatedly obtains the RSSI value of the tag at a fixed point $L_i$; namely, $DQ_m = \{RSSI_{i,m}^1, RSSI_{i,m}^2, \cdots, RSSI_{i,m}^t\}$ $(m = 1, 2, \cdots, 4)$ is the acquired RSSI vector for tag $i$ measured by the $m$th reader $t$ times. All $DQ$ values are collected to build an original database. As discussed in Section 3.1, the RSSI values in the original database are processed by the Gaussian filter to reduce the signal fluctuation caused by indoor environmental effects. Furthermore, in order to avoid the domination of large feature values, the datasets are normalized to the range of [0, 1] using the corresponding maximum and minimum values.
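The min-max normalization step above can be sketched as follows (a standard per-feature rescaling to [0, 1]; the sample dBm values are made up for illustration):

```python
import numpy as np

def minmax_normalize(data):
    """Rescale each feature column of the RSSI database to [0, 1]
    using its own minimum and maximum, so that features with large
    absolute dBm values do not dominate the ELM training."""
    data = np.asarray(data, dtype=float)
    lo = data.min(axis=0)
    hi = data.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard against constant columns
    return (data - lo) / span

# three reference-tag readings from two antennas (dBm)
rssi_db = [[-60.0, -48.0], [-55.0, -50.0], [-50.0, -52.0]]
print(minmax_normalize(rssi_db))  # [[0. 1.], [0.5 0.5], [1. 0.]]
```

In a deployed system the per-feature minima and maxima found offline would also have to be stored, so that the real-time DQ values of the online phase are rescaled with the same constants.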
Figure 7. The framework of the ELM IPS based on the Gaussian process.
The input training data sets in Figure 7 are $\{(x, y), DQ'_1, DQ'_2, DQ'_3, DQ'_4\}$, where $(x, y)$ represents the coordinates of the point to be measured, and $DQ'_1, DQ'_2, DQ'_3, DQ'_4$ represent its corresponding RSSI values, which have been processed by the Gaussian filter and normalized. The new database is utilized to select the model parameters, which are used to construct the ELM-based positioning model; the corresponding ELM learning procedure is shown in Figure 6.
4. Experimental Results and Analysis
In order to evaluate the performance of the proposed method, several experiments were conducted. In our experiments, a Laird-S8658WPL UHF RFID system (including reader and receiving antenna, LJYZN, Shanghai, China) was used to collect the RFID signals. The main working frequency of the UHF RFID unit is 865–965 MHz, the gain is 6 dBiC and the power is 10 W. In addition, the RSSI unit is dBm. All evaluation processes were conducted in a MATLAB (R2013b, MathWorks Corporation, Natick, MA, USA) environment running on a Windows 7 machine with an Intel Core i7 central processing unit (CPU) at 3.07 GHz and 4 GB of random access memory (RAM).
4.1. Experimental Environment
The simulated experimental environment is illustrated in Figure 8. Its size is approximately 11 m × 6 m, and a total of 60 reference tags and four UHF RFID antennas were used. The distance between every two adjacent reference tags was 1.2 m, and the four antennas were connected to a reader and fixed in the four corners of the localization area. When the reader is working, the four antennas receive different information (including the RSSI values between the tags and the antennas) from the tag feedback. In order to obtain an accurate RSSI value from a reference tag, the RSSI value of the tag is repeatedly measured 100 times $(t = 100)$ by the antenna at a fixed point, and Gaussian filtering is used to process these RSSI values as described in Section 3.1. Finally, the experiment yielded 240 groups of sample data, including the RSSI values of the tags and the corresponding positions (60 RSSI data elements for each antenna), and the sample data are regarded as the input values for the ELM network as shown in Figure 9, where
$$x = [x_1, x_2, x_3, x_4]^T = \left[DQ'_1, DQ'_2, DQ'_3, DQ'_4\right]^T. \qquad (16)$$
AsshowninFigure9,thearchitectureofELMnetworkhasnineinputs,eachrepresentingaset
ofRSSIvectors,andtwooutputs,representingthegeographycoordinateofthetagposition.
Figure8.Thesimulatedexperimentalenvironment.
Figure9.ThearchitectureofELMnetworkbasedonpositioningsystem.
Figure 8. The simulated experimental environment.
Figure 9. The architecture of ELM network based on positioning system.
As shown in Figure 9, the architecture of the ELM network has nine inputs, each representing a set of RSSI vectors, and two outputs, representing the geographic coordinates of the tag position.
4.2. Selection of Parameters for the ELM Model
For the proposed ELM based positioning model, there are two important parameters that need to be selected, namely, the type of activation function G(a, b, x) and the number of hidden nodes L. The sine activation function G(a, b, x) = sin(a·x + b), the hardlim activation function G(a, b, x) = hardlim(a·x + b) and the sigmoid activation function G(a, b, x) = 1/(1 + exp(−(a·x + b))) have been used to evaluate
the performance of the ELM, with different numbers of hidden nodes. The results of the parameter
selection of the optimal number of hidden nodes for ELM with sigmoid, sine and hardlim activation
functions are shown in Figure 10.
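The three candidate activation functions can be written directly from their definitions. One assumption here: `hardlim` is taken in the MATLAB convention (output 1 when the argument is non-negative, 0 otherwise), which matches the environment the evaluation ran in.

```python
import numpy as np

def sigmoid(a, b, x):
    """G(a, b, x) = 1 / (1 + exp(-(a*x + b)))"""
    return 1.0 / (1.0 + np.exp(-(np.dot(x, a) + b)))

def sine(a, b, x):
    """G(a, b, x) = sin(a*x + b)"""
    return np.sin(np.dot(x, a) + b)

def hardlim(a, b, x):
    """G(a, b, x) = 1 if a*x + b >= 0, else 0 (MATLAB hardlim convention)."""
    return (np.dot(x, a) + b >= 0).astype(float)
```

Because hardlim is a step function, its hidden layer output is insensitive to small input changes, which is consistent with its stable but less accurate behavior reported in Figure 10.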
Figure 10. Average positioning error regarding different activation functions and different numbers of hidden nodes.
In Figure 10, the performance of ELM with the sigmoid activation function does not differ significantly from that with the sine activation function. For the sigmoid function, the increase in average error is relatively small when the number of hidden nodes exceeds 60. In addition, the hardlim function is the most stable of the three activation functions, but its average error is the worst. Thus, in the following simulations, we selected the sigmoid function as the activation function of ELM, with the number of hidden nodes L set to 28, as shown in Table 1.
The distribution of RSSI indication from four readers in the experiment is illustrated in Figure 11.
Figure 11. The distribution of RSSI indication from four readers: (a) the 3D RSSI values with Reader 1; (b) the 3D RSSI values with Reader 2; (c) the 3D RSSI values with Reader 3; (d) the 3D RSSI values with Reader 4.
Table 1. Parameter settings of ELM algorithm.
Algorithm Activation Function Number of Hidden Nodes (L) Type
ELM Sigmoid 28 0 (Regression)
As shown in Figure 11, the signals collected by one reader can be quite different even at the same location due to noise and RSSI fluctuations in the indoor environment.
4.3. Comparison of Data Quality
As described in Section 3.1, the Gaussian data filtering method improves the quality of the training data in the offline phase and of the model input data in the online phase, respectively. In order to analyze the influence of data quality on the precision of the ELM based positioning model, this study uses four different scenes, and the algorithm is repeated 10 times in each scenario. The details are as follows, and the results of location estimation showing the impact of data quality on the positioning model are presented in Figure 12. Table 2 summarizes the positioning results in the different scenes.
Case 1. In the offline phase, the training samples are not filtered; in the online phase, the test samples containing the RSSI information of each tag are not filtered, and the raw RSSI values are used as the input of the model.
Case 2. In the offline phase, the training samples are not filtered; however, in the online phase, for the RSSI information of every tag, the reader repeats the measurement 100 times at a fixed point, the Gaussian filter is applied to the collected RSSI values, and the results are used as the input of the model.
Case 3. In the offline phase, the training samples are filtered according to the corresponding rules in this study; in the online phase, the test samples containing the RSSI information of every tag are not filtered, and the raw RSSI values are used as the input of the model.
Case 4. In the offline phase, the training samples are filtered according to the corresponding rules in this study; in addition, in the online phase, for the RSSI information of every tag, the reader repeats the measurement 100 times at a fixed point, the Gaussian filter is applied to the collected RSSI values, and the results are used as the input of the model.
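The four cases differ only in whether the Gaussian filter is applied in the offline phase, the online phase, both, or neither. A small harness makes the 2 x 2 design explicit; the function names, array shapes, and the one-standard-deviation filtering band are illustrative assumptions, not the paper's code.

```python
import numpy as np

def gaussian_refine(samples, k=1.0):
    """Keep readings within k standard deviations of the mean; average the rest.
    (Illustrative rule; the paper's exact band is defined in Section 3.1.)"""
    s = np.asarray(samples, dtype=float)
    mu, sigma = s.mean(), s.std()
    if sigma == 0.0:
        return mu
    return s[np.abs(s - mu) <= k * sigma].mean()

def prepare_case(raw_train, raw_test, filter_offline, filter_online):
    """Build (train, test) model inputs for one of the four evaluation cases.

    raw_train, raw_test: arrays of shape (n_points, n_repeats) holding the
    100 repeated RSSI readings per point; unfiltered cases use the plain mean.
    """
    off = gaussian_refine if filter_offline else np.mean
    on = gaussian_refine if filter_online else np.mean
    train = np.array([off(row) for row in raw_train])
    test = np.array([on(row) for row in raw_test])
    return train, test

# Case 1: (False, False), Case 2: (False, True),
# Case 3: (True, False),  Case 4: (True, True)
```

Running the same raw readings through each flag combination shows how a single outlier reading shifts the unfiltered input while the filtered input stays clean.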
Figure 12. Impact of data quality on the positioning model: (a) Case 1; (b) Case 2; (c) Case 3; (d) Case 4.
Table 2. The positioning error results in different scenes (m).
Positioning Scenes Min Error Max Error Average Error Average MSE
Case 1 0.1003 4.5780 1.3279 1.5079
Case 2 0.1125 6.3260 1.8074 2.2991
Case 3 0.3181 9.2672 2.4641 3.3730
Case 4 1.1505 1.5264 0.7114 1.0562
As observed from Table 2, Case 4 has the smallest average error and MSE values (0.7114 and 1.0562, respectively), while Case 3 has the largest (2.4641 and 3.3730, respectively). Therefore, data filtering in both the offline training phase and the online positioning phase can effectively improve the positioning accuracy of the model. However, we found that, when the positioning model is built from high quality training samples, the input data for the online positioning phase also need to be of high quality.
4.4. Comparison with Other Algorithms
This section aims to further evaluate the performance of the proposed algorithm. The GA-BP and PSO-BP positioning methods [30,31], without filtering of the training and testing samples, are implemented and compared, and the evaluation criteria include the positioning error and the algorithm running time. Since the same ANN structure and the same training set might produce different results, each positioning algorithm was run 10 times. The results of location estimation for the different positioning methods are presented in Figure 13, and the positioning error results are summarized
in Table 3.
Table 3. The positioning error results for different positioning methods (m).
Positioning Method Min Error Max Error Average Error Average MSE
ELM-Gauss filtering 0.0820 1.6060 0.7009 0.7910
ELM 0.2642 4.5819 1.3237 1.6804
GA-BP-Gauss filtering 0.1470 2.0779 0.9007 0.9366
GA-BP 0.3723 6.9965 1.3343 2.0387
PSO-BP-Gauss filtering 0.1317 1.6027 0.8027 0.8126
PSO-BP 0.2854 8.8549 1.5821 1.9898
4.4.1. Positioning Error
It can be seen from Figure 13 and Table 3 that the positioning accuracy of the proposed ELM-Gaussian filtering method is clearly superior to that of the other five methods: ELM (without Gaussian filtering), GA-BP-Gaussian filtering (including filtering of training and testing samples), GA-BP (without Gaussian filtering), PSO-BP-Gaussian filtering (including filtering of training and testing samples) and PSO-BP (without Gaussian filtering). ELM-Gaussian filtering reduces the average positioning error by 47.05% relative to ELM, 22.18% relative to GA-BP-Gaussian filtering, 47.47% relative to GA-BP, 12.68% relative to PSO-BP-Gaussian filtering and 55.69% relative to PSO-BP; GA-BP-Gaussian filtering reduces the average positioning error by 32.50% relative to GA-BP, and PSO-BP-Gaussian filtering reduces it by 49.26% relative to PSO-BP. However, ELM (without Gaussian filtering) and GA-BP (without Gaussian filtering) provide similar average positioning errors of 1.3237 m and 1.3343 m, respectively.
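The quoted percentages follow directly from the average errors in Table 3 as relative reductions, (baseline − filtered) / baseline; computing them reproduces the reported values to within rounding.

```python
# Average positioning errors from Table 3 (m)
avg_err = {
    "ELM-Gauss": 0.7009, "ELM": 1.3237,
    "GA-BP-Gauss": 0.9007, "GA-BP": 1.3343,
    "PSO-BP-Gauss": 0.8027, "PSO-BP": 1.5821,
}

def reduction(baseline, improved):
    """Relative reduction of the average positioning error, in percent."""
    return 100.0 * (baseline - improved) / baseline

# e.g. ELM-Gauss filtering vs. plain ELM: about 47.05%
elm_gain = reduction(avg_err["ELM"], avg_err["ELM-Gauss"])
```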
Figure 13. Comparison with the other positioning methods: (a) ELM-Gaussian filtering (including filtering of training and testing samples); (b) ELM (without Gaussian filtering); (c) GA-BP-Gaussian filtering (including filtering of training and testing samples); (d) GA-BP (without Gaussian filtering); (e) PSO-BP-Gaussian filtering (including filtering of training and testing samples); (f) PSO-BP (without Gaussian filtering).
Therefore, effective data filtering (of both training and test samples) can greatly improve the positioning accuracy of machine learning based methods, and the average positioning accuracy of the ELM method is better than that of the GA-BP and PSO-BP methods.
4.4.2. Computational Time
In order to further evaluate the performance of the proposed algorithm, the efficiency of the three machine learning algorithms (ELM, GA-BP and PSO-BP) in the positioning task was compared, and each positioning algorithm was run three times consecutively. Tables 4 and 5 list the parameters of the GA-BP and PSO-BP positioning algorithms, respectively, and Table 6 lists the computational time of the three algorithms. In terms of computational time, as observed from Table 6, in the first run ELM requires 0.5156 s in the training phase, roughly 8 times less than GA-BP (4.2998 s) and 7 times less than PSO-BP (3.9891 s), and it requires 0.1078 s in the testing phase, roughly 14 times less than both GA-BP (1.5326 s) and PSO-BP (1.5196 s). This demonstrates that the ELM algorithm is superior to the GA-BP and PSO-BP learning algorithms in RFID-enabled positioning systems, since the core of ELM is to replace the complex iterative training process with the random generation of the hidden layer parameters.
Symmetry 2017,9, 30 14 of 16
Table 4. Parameter settings of the GA-BP algorithm.
Algorithm Maxgen Popsize Crossover Rate Mutation Rate
GA-BP 100 50 0.75 0.01
Table 5. Parameter settings of the PSO-BP algorithm.
Algorithm Maxgen Popsize Learning Factor Weight
PSO-BP 100 50 0.5 0.5
Table 6. The performance comparison of training and testing time.
Run Times Positioning Method Training Time (s) Testing Time (s)
1
ELM 0.5156 0.1078
GA-BP 4.2998 1.5326
PSO-BP 3.9891 1.5196
2
ELM 0.5945 0.1916
GA-BP 4.0157 1.2564
PSO-BP 4.2803 1.4267
3
ELM 0.6416 0.1277
GA-BP 4.4682 1.6831
PSO-BP 4.5002 1.4652
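The speedup factors can be checked directly against the run-1 entries of Table 6; truncating the timing ratios to whole numbers reproduces the training-phase factors of 8 and 7, and shows that the testing-phase ratio works out to roughly 14 for both baselines.

```python
# Run-1 timings from Table 6 (s)
train_t = {"ELM": 0.5156, "GA-BP": 4.2998, "PSO-BP": 3.9891}
test_t = {"ELM": 0.1078, "GA-BP": 1.5326, "PSO-BP": 1.5196}

def speedup(slow, fast):
    """Whole-number speedup factor (truncated toward zero)."""
    return int(slow / fast)

train_gain_ga = speedup(train_t["GA-BP"], train_t["ELM"])    # ~8x
train_gain_pso = speedup(train_t["PSO-BP"], train_t["ELM"])  # ~7x
test_gain_ga = speedup(test_t["GA-BP"], test_t["ELM"])       # ~14x
test_gain_pso = speedup(test_t["PSO-BP"], test_t["ELM"])     # ~14x
```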
5. Discussion
Based on the obtained results, it can be concluded that the ELM regression model achieves a higher computational efficiency in the positioning system, since only the number of hidden nodes needs to be set, and the algorithm does not need to adjust the input weights or the biases of the network. Additionally, indoor positioning based on RFID suffers from signal reflection and fading due to the space layout and multipath effects of the indoor environment, which lead to large fluctuations of signal strength in the acquired data samples. Therefore, Gaussian filtering is proposed to reduce the fluctuation of the RSSI, and our experiments verify that ELM-Gaussian filtering (including filtering of training and testing samples) provides higher positioning accuracy than the other five methods. We conclude that the combination of Gaussian filtering (of both training and testing samples) and ELM can effectively solve the positioning problem in indoor environments.
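The efficiency property credited to ELM above (random, untrained hidden layer; output weights solved in closed form) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the weight range and the toy 1-D target (standing in for the RSSI-to-coordinate mapping) are illustrative choices.

```python
import numpy as np

class SimpleELM:
    """Minimal extreme learning machine for regression.

    The hidden layer parameters (input weights `a`, biases `b`) are drawn
    at random and never adjusted; only the output weights `beta` are
    computed, in closed form, via the Moore-Penrose pseudoinverse of the
    hidden layer output matrix H. No iterative training is performed.
    """

    def __init__(self, n_hidden=30, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Sigmoid activation G(a, b, x), as selected in Section 4.2
        return 1.0 / (1.0 + np.exp(-(X @ self.a + self.b)))

    def fit(self, X, T):
        n_features = X.shape[1]
        # Random, untrained hidden layer (range is an illustrative choice)
        self.a = self.rng.uniform(-10.0, 10.0, (n_features, self.n_hidden))
        self.b = self.rng.uniform(-10.0, 10.0, self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T  # beta = H^+ T: one matrix solve
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy usage: fit a smooth 1-D mapping standing in for RSSI -> coordinate
X = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
T = np.sin(2.0 * np.pi * X).ravel()
pred = SimpleELM(n_hidden=30, seed=0).fit(X, T).predict(X)
```

Because training reduces to one pseudoinverse computation, the cost is dominated by a single linear algebra call, which is why ELM's training time in Table 6 is an order of magnitude below the iterative GA-BP and PSO-BP schemes.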
6. Conclusions
The data quality of the online and offline phases is analyzed, respectively, which shows that a high quality positioning model can markedly improve the accuracy of indoor positioning. In addition, the time consumption of indoor positioning is analyzed and compared, and the proposed algorithm estimates the current coordinates of the tag by building an ELM positioning model from the RSSI measurement values, achieving rapid positioning.
In this study, to solve the inaccurate localization and efficiency problems in a complex positioning system, an ELM algorithm based on Gaussian filtering is proposed, which introduces a Gaussian filtering rule that can effectively filter out the significant signal fluctuations caused by environmental effects and provide high quality sample data. The proposed ELM-Gaussian filtering method can quickly establish a high quality positioning model and achieve fast position prediction. The experimental results demonstrate that the proposed positioning system effectively avoids environmental interference and achieves the highest accuracy and efficiency among all of the compared methods.