ADAPTIVE PERIOD ESTIMATION FOR SPARSE POINT PROCESSES
Hans-Peter Bernhard, Andreas Springer
Johannes Kepler University Linz,
Institute for Communications Engineering and RF-Systems
Altenbergerstr. 69, 4040 Linz, Austria, Email: h.p.bernhard@ieee.org
ABSTRACT
Adaptive period estimation for time-varying sparse point processes is addressed in this paper. Sparsity results from signal loss, which reduces the number of samples available for period estimation. We discuss bounds and minima of the mean square error of fundamental period estimation suitable for these situations. A ruleset is derived to determine the optimum memory length which achieves the minimum estimation error. The employed low-complexity adaptive algorithm operates on a variable memory length N to optimally fit the recorded time-varying process. The algorithm is of complexity 3O(N); if a recursive implementation is applied, the overall complexity is reduced to 3O(1). This algorithm is the optimal implementation candidate to maintain synchronicity in industrial wireless sensor networks operating in harsh and varying environments.
Index Terms—Frequency estimation, low complexity,
sparse process, synchronisation, industrial sensor networks
1. INTRODUCTION AND RELATED WORK
Many applications rely on period or frequency estimation, such as carrier frequency recovery in communication systems, vital sign monitoring, or synchronization in wireless sensor networks (WSNs) [1, 2, 3, 4]. Within such networks, beacon signals are sent out periodically by a master and received by many communication partners. Time stamping the arrival times allows the receivers to estimate the beacon period and to synchronize their local clocks. Sparsity is introduced by unavoidable occasional loss of communication links in harsh environments, so that beacons are lost. Clocks varying over time due to environmental influences introduce an additional challenge. To the best of our knowledge, period estimation of such time-varying processes has not been considered in the literature.

(This work has been supported in part by the Austrian Research Promotion Agency (FFG) under grant number 853456 (FASAN: Flexible Autonome Sensorik in industriellen ANwendungen) and by research from the SCOTT project. SCOTT (www.scott-project.eu) has received funding from the Electronic Component Systems for European Leadership Joint Undertaking under grant agreement No 737422. This Joint Undertaking receives support from the European Union's Horizon 2020 research and innovation programme and Austria, Spain, Finland, Ireland, Sweden, Germany, Poland, Portugal, Netherlands, Belgium, Norway.)

If N is the memory size of the estimator, the estimation
mean square error scales with O(N⁻³) in situations where the signal is stationary [5] or where the time variation of the amplitude is much smaller than the measurement noise. We model the measurements as a sparse periodic point process with additive phase noise in Sec. 2. The period of a sparse point process is mostly assessed by spectral estimation techniques [6, 7, 8]. One of the common methods is periodogram estimation [9, 10, 11], which considers stationary processes; its computational complexity is of the order O(N log(N)) [12]. In [13, 14], fundamental frequency estimation of cyclostationary processes was introduced, which led to similar results. To the best of our knowledge, no method exists that addresses the optimization of period estimation for sparse time-varying point processes. The optimization of the memory usage is discussed in Sec. 3, which leads to a design rule depending on straightforwardly measurable signal parameters. In Sec. 4.1 an adaptive period estimator is presented with a computational complexity of 3O(N). The proposed estimator is extremely simple and easy to implement in digital hardware with limited computational capabilities. Simulations in Sec. 3.3 and Sec. 4.2 support the theoretical considerations and conclude the work.
2. TIME-VARYING SPARSE POINT PROCESSES
[Figure: time axis n, t with event marks, timestamps y[n], noise e[n], and periods p[3], p[9]]
Fig. 1. Time-varying sparse periodic point process
Events like periodic beacons in a sensor network should be detected by observing the signal y[n]. The time series
$$y[n] = nP + e[n] + \phi, \qquad (1)$$
with random phase φ and measurement noise e[n] ∼ N(0, σ_e), assumed to be Gaussian, represents a monotonically increasing time series which can be used to estimate the fundamental period of the generating periodic events. This process is called a non-sparse point process. A process is sparse if some events are missing, as depicted in Fig. 1. If, additionally, the periods change over time, the process is also time variant and we have to replace P by p[n]. Hence, to model time variance and sparsity, the variable n in (1) is replaced by a recursive description, resulting in
$$y[n] = y[n-1] + d[n]\,p[n] + e[n] + \phi. \qquad (2)$$
A discrete-time random variable d[n] with mean 1 ≤ µ_d < ∞ is used to model this behavior. The time variation of the period is modeled by
$$p[n] = P_0 + p_t[n] \qquad (3)$$
with n ∈ ℕ. We assume a periodic time variation as a starting point before we extend the discussion to more complex signals in Section 3.2. The time variation is
$$p[n] = P_0 + p_T \sin(\theta n) \qquad (4)$$
with θ < π, to fulfill the sampling theorem, and p_T the peak amplitude of the time variation.
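For illustration, the following minimal Python sketch draws timestamps from this model. It assumes a geometric distribution for d[n] (the paper only fixes its mean µ_d ≥ 1) and treats φ as a constant offset as in (1); all names and parameter values are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sparse_process(n_events, P0=1.0, p_T=1e-5, theta=2*np.pi/250,
                            mu_d=2.0, sigma_e=1e-6, phi=0.0):
    """Timestamps y[n] of a sparse, time-varying point process, Eqs. (2)-(4)."""
    y = np.empty(n_events)
    t = 0.0
    for n in range(n_events):
        # d[n] >= 1 periods elapse between received events; a geometric
        # variable with mean mu_d models the random event loss (assumption).
        d = rng.geometric(1.0 / mu_d)
        p = P0 + p_T * np.sin(theta * n)       # Eq. (4): sinusoidal period variation
        t += d * p + rng.normal(0.0, sigma_e)  # Eq. (2): recursive timestamp update
        y[n] = t + phi                         # constant phase offset as in Eq. (1)
    return y

y = simulate_sparse_process(1000)
```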
3. MEMORY IN PERIOD ESTIMATION
3.1. Sinusoidal time-varying period
The MSE of the period estimator for a sparse point process with a sinusoidal time variation of frequency θ is, referring to [15],
$$\mathrm{MSE}[N] \approx \frac{p_T^2}{2}\left(1-\frac{W(\theta,\mu_d N)\,W(\theta,N)}{\lfloor\mu_d N\rfloor N}\right)^{2} + \frac{4\mu_d^3 N p_T^2\theta^2}{3(1+\mu_d N)} + \frac{\sigma_d^2}{\lfloor N\mu_d\rfloor N^2}, \qquad (5)$$
if N samples are available. On average, the process has µ_d − 1 lost events per received event and additive white Gaussian phase noise with variance σ_d²/2.
According to [15], all three terms in (5) have an identifiable source. Firstly, the frequency estimation error of the stationary process with phase noise is
$$\mathrm{MSE}_n[N] \approx \frac{\sigma_d^2}{\lfloor N\mu_d\rfloor N^2}. \qquad (6)$$
Secondly, the error introduced by the sinusoidal time variance of the frequency is
$$\mathrm{MSE}_\theta[N] \approx \frac{p_T^2}{2}\left(1-\frac{W(\theta,\mu_d N)\,W(\theta,N)}{\lfloor\mu_d N\rfloor N}\right)^{2} \qquad (7)$$
and, finally, there exists an additive, upper-bounded interpolation error due to the samples which are inserted in place of the lost samples of the time-varying process,
$$\mathrm{MSE}_i[N] \lesssim \frac{4\mu_d^3 N p_T^2\theta^2}{3(1+\mu_d N)}. \qquad (8)$$
All three MSE parts depend on the process parameters µ_d, p_T, θ and on one parameter of the estimator, N. Therefore only N can be used to improve the MSE. Hence, we minimise the MSE over N,
$$\min_{N\in\mathbb{N}^{+}} \mathrm{MSE}[N], \qquad (9)$$
which is solved by finding a solution of
$$\frac{\partial}{\partial N}\,\mathrm{MSE}[N]\Big|_{N_0} = 0. \qquad (10)$$
Obviously N ∈ ℕ⁺, whereas the first derivative exists only if N ∈ ℝ. Without loss of generality, we can treat (5) as a continuous function on ℝ and remap N_0 to ℕ⁺ once the minima are found. With W(θ, ⌊µ_dN⌋) = sin(θ⌊µ_dN⌋/2)/sin(θ/2), the derivative of MSE_θ is rather complicated and it is not possible to find a closed-form solution to (10). To overcome this problem we use approximations of the trigonometric functions, assuming Nθ ≪ 1. As the first- and second-order approximations vanish, at least a fourth-order approximation has to be used. As a first step we use 2 sin(θµ_dN/2) sin(θN/2) = cos(θ(µ_d−1)N/2) − cos(θ(µ_d+1)N/2) to avoid the trigonometric product:
$$\left(1-\frac{\sin(\theta\mu_d N/2)\,\sin(\theta N/2)}{\sin^2(\theta/2)\,\lfloor\mu_d N\rfloor N}\right)^{2} = \left(1-\frac{\cos(\theta(\mu_d-1)N/2)-\cos(\theta(\mu_d+1)N/2)}{2\sin^2(\theta/2)\,\lfloor\mu_d N\rfloor N}\right)^{2}. \qquad (11)$$
Consequently, we apply a fourth-order approximation of the cosine function, cos(x) = 1 − x²/2 + x⁴/24. After some manipulations we obtain
$$\frac{p_T^2}{2}\left(1-\frac{W(\theta,\mu_d N)\,W(\theta,N)}{\lfloor\mu_d N\rfloor N}\right)^{2} \approx \frac{p_T^2}{2}\left(1-\left(\frac{1}{2}\left(\Big(\frac{\theta(\mu_d+1)N}{2}\Big)^{2}-\Big(\frac{\theta(\mu_d-1)N}{2}\Big)^{2}\right)-\frac{1}{24}\left(\Big(\frac{\theta(\mu_d+1)N}{2}\Big)^{4}-\Big(\frac{\theta(\mu_d-1)N}{2}\Big)^{4}\right)\right)\frac{2}{\theta^2\mu_d N^2}\right)^{2} \approx \frac{p_T^2}{1152}\,N^4\theta^4(1+\mu_d^2)^{2} \qquad (12)$$
and with (5) we obtain the approximation
$$\mathrm{MSE}[N] \approx \frac{p_T^2}{1152}\,N^4\theta^4(1+\mu_d^2)^{2} + \frac{4\mu_d^3 N p_T^2\theta^2}{3(1+\mu_d N)} + \frac{\sigma_d^2}{\mu_d N^3}. \qquad (13)$$
With this expression it is possible to solve (10). Obviously, MSE_i is approximately constant in N, so its derivative vanishes. Moreover, we can bound the MSE from both sides by
$$\mathrm{MSE}_\theta[N]+\mathrm{MSE}_i[N]+\mathrm{MSE}_n[N] \;\gtrsim\; \mathrm{MSE}[N] \;\gtrsim\; \mathrm{MSE}_{\theta,n}[N] = \mathrm{MSE}_\theta[N]+\mathrm{MSE}_n[N]. \qquad (14)$$
Thus, with the lower bound in (14), (10) leads to
$$\frac{\partial}{\partial N}\,\mathrm{MSE}[N]\Big|_{N_0} = \frac{1}{288}\,\theta^4(\mu_d^2+1)^{2}N_0^{3}\,p_T^2 - \frac{3\sigma_d^2}{\mu_d N_0^4} = 0 \qquad (15)$$
[Figure: log-log plot of MSE over N for sinusoidal and random (white band-limited) time variation, with upper and lower bounds, for θ, B = 2π/250 and θ, B = 2π/2000; µ_d = 2, σ_d² = 10⁻¹¹, p_T = 10⁻⁵]
Fig. 2. Period estimation MSE of time-varying processes.
which yields
$$N_0 = \sqrt[7]{\frac{864\,\sigma_d^2}{p_T^2\,\theta^4\,(\mu_d^2+1)^{2}\,\mu_d}}. \qquad (16)$$
N_0 has to be remapped by rounding, N_θ = ⌊N_0⌉. This result is confirmed by the simulations depicted in Fig. 2 and described in Section 3.3. Since N_0 is inversely proportional to θ^{4/7}, it is important to adapt N when the period of the process is varying, in order to improve the estimation quality. Consequently, this parameter is used to improve the estimation algorithm described in Section 4.1.
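As a quick numerical check, (16) can be evaluated directly; the following Python sketch (function name ours) reproduces the minima N_θ = 9 and N_θ = 29 reported below for the parameter set of Fig. 2.

```python
import numpy as np

def optimal_memory_sinusoidal(sigma_d2, p_T, theta, mu_d):
    """Optimum memory length for a sinusoidal period variation, Eq. (16),
    remapped to an integer by rounding."""
    N0 = (864.0 * sigma_d2 /
          (p_T**2 * theta**4 * (mu_d**2 + 1)**2 * mu_d)) ** (1.0 / 7.0)
    return int(round(N0))

# Parameter set of Fig. 2: sigma_d^2 = 1e-11, p_T = 1e-5, mu_d = 2
print(optimal_memory_sinusoidal(1e-11, 1e-5, 2 * np.pi / 250, 2))   # -> 9
print(optimal_memory_sinusoidal(1e-11, 1e-5, 2 * np.pi / 2000, 2))  # -> 29
```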
3.2. White band-limited time-varying period

According to (3), a random time variation is introduced with p_t[n] ∼ N(0, σ_p) and a band limitation to B = [−B, B). To preserve power equivalence we set σ_p = p_T/√2. Within B, p_t[n] is white and therefore contains infinitely many frequency components θ, each with an individual MSE_θ[N]. To emphasize this we write MSE[N, θ], using (13). For Nθ ≪ 1 its frequency dependence is proportional to θ⁴, and for Nθ > 1 the MSE is limited by p_T²/2. The interpolation error is proportional to θ² and therefore
$$\mathrm{MSE}[N,\theta] \le \mathrm{MSE}[N,B] \quad\text{if}\quad \theta \le B \;\wedge\; B < \infty. \qquad (17)$$
Hence it is evident that MSE[N, B] is the worst case for the estimation error. Consequently, (5) is also an upper bound for the white band-limited time-varying sparse process. MSE[N, θ] was derived based on a linear approximation; therefore we proceed with the superposition assumption for Nθ ≪ 1 and use for the approximated mean MSE within B
$$\mathrm{MSE}_\theta^{B}[N] = \frac{1}{2B}\int_{-B}^{B}\mathrm{MSE}[N,\theta]\,d\theta = \frac{1}{B}\int_{0}^{B}\frac{p_T^2}{1152}\,N^4\theta^4(1+\mu_d^2)^{2}\,d\theta + \frac{\sigma_d^2}{\mu_d N^3} = \frac{3.2\,p_T^2}{1152}\,N^4\Big(\frac{B}{2}\Big)^{4}(1+\mu_d^2)^{2} + \frac{\sigma_d^2}{\mu_d N^3}. \qquad (18)$$
According to (15), (16) and the signal power equivalence, we find for the optimal N_B of the band-limited time-varying period
$$N_B = \sqrt[7]{\frac{864\,\sigma_d^2}{6.4\,\sigma_P^2\,(B/2)^{4}\,(\mu_d^2+1)^{2}\,\mu_d}}. \qquad (19)$$
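A direct transcription of (19), under the same conventions as the sketch for (16) above (function name ours):

```python
def optimal_memory_bandlimited(sigma_d2, sigma_P2, B, mu_d):
    """Optimum memory length for a white band-limited period variation, Eq. (19)."""
    N_B = (864.0 * sigma_d2 /
           (6.4 * sigma_P2 * (B / 2.0)**4 * (mu_d**2 + 1)**2 * mu_d)) ** (1.0 / 7.0)
    return int(round(N_B))
```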
3.3. Simulation results
Throughout this paper, all simulations were done with 100 Monte Carlo runs covering a timespan of two periods of the time-varying process realizations. In Fig. 2, MSE[N, θ] is depicted with θ as a parameter. Let us consider the first simulation, with θ = 2π/250. The simulation results lie within the derived upper and lower bounds given by (14). With these parameter settings, all three terms of the bound are clearly identifiable. The first part, for N = 3 … N_θ, is dominated by the decay of the error proportional to O(N⁻³), which is represented by MSE_n[N, θ]. The MSE decay reaches its minimum at N_θ = 9, in accordance with the theoretically derived minimum; the value was observed in simulation and calculated with (16). The MSE minimum is raised by the interpolation error MSE_i[N, θ], which is noticeable as a flattening of the minimum. Beyond N_θ, MSE_θ[N, θ] dominates with its increase proportional to O(N⁴), and finally it saturates at p_T²/2. The second simulation in Fig. 2, with θ = 2π/2000, shows the same behaviour and a minimum at N_{2π/2000} = 29.
In a further simulation we consider periodic point processes whose time-varying period is modeled by a white band-limited stochastic process. The MSE of the period estimation is shown in Fig. 2, where the upper bound of MSE[N, 2π/250] is depicted as a dashed curve. It represents the worst case, as if the time variation were a single sinusoid with power equivalent to the stochastic time variance. As expected, the simulated MSE[N, θ] lies well below this worst case of the estimation error. Moreover, there also exists a minimum of the estimation error at N_B, which is confirmed by the theoretical result of (19); with B = 2π/250 the value is N_B = 17. These results are supported by further simulations, e.g. with frequency band B = 2π/2000, depicted as red curves in Fig. 2.
4. ESTIMATOR DESIGN
The time variance of the observed signal is determined by system parameters of the underlying process, p_T, σ_d, µ_d and B; hence the algorithm can be tuned using only the estimator memory N. The parameter µ_d can easily be measured as the ratio between the total number of samples and the received events of the sparse process. The parameter σ_d is derived from the maximum prediction gain [16]. If the prediction horizon is near zero, the prediction error converges to the measurement noise variance. If we consider the period estimation as period prediction, the results of [16] can be applied straightforwardly.

[Figure: block diagram of the estimator with a delay line of length ⌊µ_dN⌋, squaring, moving average over N samples, square root and scaling, missing-event detection (t > y[n] + Δt_max) with feedback insertion, and the estimators for µ̂_d, σ̂_d² and σ_P² feeding the computation of N_B]
Fig. 3. Adaptive period estimation

[Figure: log-log plot of MSE over N, estimates and bounds for σ_d² = 10⁻⁵, 10⁻¹⁰, 10⁻¹⁵, 10⁻²⁰, 10⁻²⁵; B = 2π/250, p_T = 10⁻⁵, µ_d = 2]
Fig. 4. MSE for white time variance and different σ_d²
To estimate σ_d, we use an estimator with a single memory element and compare its output with the following period measurement. This is equivalent to the shortest possible period prediction horizon, and therefore the mean estimation error with one memory tap is the best guess we can get for the measurement noise σ̂_d²/2 used in (5). The variance of the time-varying process, σ_P, is calculated from the recorded samples y_d[n]. Finally, the bandwidth B has to be assumed from the underlying physics or from a band-limiting input device.
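A minimal sketch of this one-tap noise estimate, under our reading of the scheme (the halving of the mean squared one-step prediction error is our assumption, motivated by the difference of two independent noise samples):

```python
import numpy as np

def estimate_noise_variance(y):
    """One-memory-tap period prediction: each inter-event interval is
    predicted by its predecessor; half the mean squared prediction error
    serves as the estimate of the measurement noise variance."""
    intervals = np.diff(y)        # successive period measurements y_d[n]
    err = np.diff(intervals)      # prediction error of the one-tap predictor
    return 0.5 * np.mean(err**2)  # noise variance estimate
```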
4.1. The estimator
According to [17], the estimator is designed as depicted in Fig. 3. It estimates the period P̂[n] of repeating events as shown in Fig. 1. Time stamps are stored in y[n] whenever a trigger event occurs. The last ⌊µ_dN⌋ timestamps are stored to calculate the difference y_d[n] between the current timestamp and the timestamp ⌊µ_dN⌋ samples earlier. The difference is squared, y_d²[n], and averaged over N samples. The scaled square root of the moving average is the estimate of the process period. Missing events are detected if the time counter t exceeds y[n] + Δt_max. In case of one or more missing events, new events are created by adding an averaged sample difference y_d[n]/⌊µ_dN⌋ to the previous sample via a feedback loop. The ratio between the total sample number (inserted and detected) and the number of detected samples yields the parameter µ_d used for the calculation of N_B. σ_d² is calculated using the first partial sum of the moving average filter. Finally, σ_P² is the variance of the differences y_d[n]. The optimized N_B can be calculated with (19) for each estimated period, and the parameter N is updated according to the result for N_B. The computational complexity of the non-adaptive algorithm was derived in [17] as O(N). The adaptation adds two times O(N) to the total complexity, owing to the two variance estimation tasks. In subsequent estimates the algorithm can be altered to reuse the last estimates; we therefore reach an extremely efficient algorithm having a constant complexity 3O(1) after the first initial calculation of 3O(N).
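The following Python sketch illustrates the core of this loop under our reading of Fig. 3; it is a simplified illustration (missing-event insertion and adaptation are omitted, and all names are ours), not the authors' implementation.

```python
import numpy as np
from collections import deque

class AdaptivePeriodEstimator:
    """Sketch: period estimate as the scaled RMS of timestamp differences
    taken floor(mu_d * N) samples apart, averaged over N samples."""

    def __init__(self, N=8, mu_d=1.0, max_buf=1024):
        self.N = N                      # moving-average memory length
        self.mu_d = mu_d                # sparsity factor estimate
        self.buf = deque(maxlen=max_buf)

    def update(self, y_n):
        """Feed one timestamp y[n]; return the current period estimate."""
        self.buf.append(y_n)
        L = max(1, int(np.floor(self.mu_d * self.N)))
        if len(self.buf) <= L:
            return None                 # memory not yet filled
        ts = np.asarray(self.buf)
        y_d = ts[L:] - ts[:-L]          # differences over floor(mu_d*N) samples
        v = np.mean(y_d[-self.N:]**2)   # moving average of squared differences
        return np.sqrt(v) / L           # scaled square root = period estimate
```

In a full implementation, the moving average and the variance estimates would be computed recursively, the counter t would trigger the insertion of y_d[n]/⌊µ_dN⌋ for missing events, and N would be re-tuned from (19) after each estimate, yielding the 3O(1) behaviour described above.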
4.2. The estimation results
In Fig. 4 the period estimation error of sparse periodic processes with time-varying period is depicted. The figure shows the MSE of the presented estimator for five different measurement noise levels. For high measurement noise power, the MSE decays with N down to the level of the time-varying disturbance, which cannot be reduced further by increasing N. For lower measurement noise, a minimum of the estimation error exists at the memory size N_B, which coincides with theory. Finally, if the measurement noise is below the interpolation error, no minimum exists. In all three cases the adaptive estimator is able to achieve the optimum MSE through its adaptation of N.
5. CONCLUSION
We presented an adaptive period estimator for sparse periodic time-varying point processes. The main result of this contribution is an algorithm designed to follow the changes of a time-varying process by adapting its memory length N to achieve a minimum MSE in period estimation. The algorithm is of extremely low complexity, 3O(N), and can be implemented in a recursive form which reduces the complexity to 3O(1). The estimator is applicable, e.g., in low-power sensing devices which are exposed to temperature variations. Such environmental changes introduce time-varying behaviour of the crystal oscillators, and the presented algorithm is able to follow these changes with a minimum MSE.
6. REFERENCES
[1] H. P. Bernhard, A. Berger, and A. Springer, “Timing
synchronization of low power wireless sensor nodes
with largely differing clock frequencies and variable
synchronization intervals,” in 2015 IEEE 20th Conference on Emerging Technologies and Factory Automation (ETFA), Luxembourg, Luxembourg, Sept. 2015, pp. 1–7.
[2] E. Conte, A. Filippi, and S. Tomasin, “ML period esti-
mation with application to vital sign monitoring,” IEEE
Signal Processing Letters, vol. 17, no. 11, pp. 905–908, 2010.
[3] L. Tavares Bruscato, T. Heimfarth, and E. Pignaton de
Freitas, “Enhancing time synchronization support in
wireless sensor networks,” Sensors, vol. 17, no. 12,
2017. [Online]. Available: http://www.mdpi.com/1424-
8220/17/12/2956
[4] M. Xu, W. Xu, T. Han, and Z. Lin, “Energy-efficient
time synchronization in wireless sensor networks via
temperature-aware compensation,” ACM Trans. Sen.
Netw., vol. 12, no. 2, pp. 12:1–12:29, Apr. 2016. [On-
line]. Available: http://doi.acm.org/10.1145/2876508
[5] I. V. L. Clarkson, “Approximate maximum-likelihood
period estimation from sparse, noisy timing data,” IEEE
Transactions on Signal Processing, vol. 56, no. 5, pp.
1779–1787, May 2008.
[6] H. Ye, Z. Liu, and W. Jiang, “Efficient maximum-
likelihood period estimation from incomplete timing
data,” in International Conference on Automatic Control
and Artificial Intelligence (ACAI 2012), March 2012,
pp. 959–962.
[7] P. Stoica, J. Li, and H. He, “Spectral analysis of nonuni-
formly sampled data: A new approach versus the pe-
riodogram,” IEEE Transactions on Signal Processing,
vol. 57, no. 3, pp. 843–858, March 2009.
[8] J. K. Nielsen, M. G. Christensen, and S. H. Jensen, “De-
fault bayesian estimation of the fundamental frequency,”
IEEE Transactions on Audio, Speech, and Language
Processing, vol. 21, no. 3, pp. 598–610, March 2013.
[9] E. J. Hannan, “The estimation of frequency,” Journal of
Applied Probability, vol. 10, no. 3, pp. 510–519, 1973.
[10] B. G. Quinn, “Recent advances in rapid frequency esti-
mation,” Digital Signal Processing, vol. 19, no. 6, pp.
942–948, 2009 (DASP’06: Defense Applications of Signal Processing).
[11] B. G. Quinn, R. G. McKilliam, and I. V. L. Clark-
son, “Maximizing the periodogram,” in IEEE Global
Telecommunications Conference, Nov 2008, pp. 1–5.
[12] R. G. McKilliam, I. V. L. Clarkson, and B. G. Quinn,
“Fast sparse period estimation,” IEEE Signal Processing
Letters, vol. 22, no. 1, pp. 62–66, Jan 2015.
[13] A. Napolitano, “Asymptotic normality of cyclic auto-
correlation estimate with estimated cycle frequency,”
in 23rd European Signal Processing Conference (EU-
SIPCO 2015), Aug 2015, pp. 1481–1485.
[14] ——, “On cyclic spectrum estimation with estimated
cycle frequency,” in 24th European Signal Processing
Conference (EUSIPCO 2016), Budapest, Hungary, Aug
2016, pp. 160–164.
[15] H.-P. Bernhard, B. Etzlinger, and A. Springer, “Period
estimation with linear complexity of sparse time vary-
ing point processes,” in 2017 51st Asilomar Conference
on Signals, Systems and Computers, Asilomar, Pacific
Grove, Nov 2017, pp. 1–5.
[16] H. P. Bernhard, “A tight upper bound on the gain of
linear and nonlinear predictors for stationary stochas-
tic processes,” IEEE Transactions on Signal Processing,
vol. 46, no. 11, pp. 2909–2917, Nov 1998.
[17] H.-P. Bernhard and A. Springer, “Linear complex itera-
tive frequency estimation of sparse and non-sparse pulse
and point processes,” in 2017 25th European Signal
Processing Conference (EUSIPCO 2017), Kos, Greece,
Aug. 2017, pp. 1150–1154.