ADAPTIVE PERIOD ESTIMATION FOR SPARSE POINT PROCESSES
Hans-Peter Bernhard, Andreas Springer
Johannes Kepler University Linz,
Institute for Communications Engineering and RF-Systems
Altenbergerstr. 69, 4040 Linz, Austria, Email: h.p.bernhard@ieee.org
ABSTRACT
Adaptive period estimation for time-varying sparse point processes is addressed in this paper. Sparsity results from signal loss, which reduces the number of samples available for period estimation. We discuss bounds and minima of the mean square error of fundamental period estimation suitable for these situations. A rule set is derived to determine the optimum memory length which achieves the minimum estimation error. The low-complexity adaptive algorithm used here operates on a variable memory length $N$ to optimally fit the recorded time-varying process. The algorithm is of complexity $3\,O(N)$; if a recursive implementation is applied, the overall complexity is reduced to $3\,O(1)$. This algorithm is the optimal implementation candidate to keep synchronicity in industrial wireless sensor networks operating in harsh and varying environments.
Index Terms: Frequency estimation, low complexity, sparse process, synchronisation, industrial sensor networks
1. INTRODUCTION AND RELATED WORK
Many applications rely on period or frequency estimation, such as carrier frequency recovery in communication systems, vital sign monitoring, or synchronization in wireless sensor networks (WSNs) [1, 2, 3, 4]. Within such networks, beacon signals are sent out periodically by a master and received by many communication partners. Time stamping the arrival times allows the receivers to estimate the beacon period and to synchronize their local clocks. Sparsity is introduced by unavoidable occasional loss of communication links in harsh environments. Hence, beacons are lost, and clocks that vary over time due to environmental influences introduce an additional challenge. To the best of our knowledge, period estimation of such time-varying processes has not been considered in the literature.
This work has been supported in part by the Austrian Research Promotion Agency (FFG) under grant number 853456 (FASAN: Flexible Autonome Sensorik in industriellen ANwendungen) and by research from the SCOTT project. SCOTT (www.scott-project.eu) has received funding from the Electronic Component Systems for European Leadership Joint Undertaking under grant agreement No 737422. This Joint Undertaking receives support from the European Union's Horizon 2020 research and innovation programme and Austria, Spain, Finland, Ireland, Sweden, Germany, Poland, Portugal, Netherlands, Belgium, Norway.
If $N$ is the memory size of the estimator, the estimation mean square error scales with $O(N^{-3})$ in situations where the signal is stationary [5] or where the time variation of the amplitude is much smaller than the measurement noise. We model the measurements as a sparse periodic point process with additive phase noise in Sec. 2. The period of a sparse point process is mostly assessed by spectral estimation techniques [6, 7, 8]. One of the common methods is periodogram estimation [9, 10, 11], which considers stationary processes; its computational complexity is in the order of $O(N\log(N))$ [12]. In [13, 14], fundamental frequency estimation of cyclostationary processes was introduced, which led to similar results. To the best of our knowledge, no method exists that addresses the optimization of period estimation for sparse time-varying point processes. The optimization of the memory usage is discussed in Sec. 3, which leads to a design rule depending on straightforwardly measurable signal parameters. In Sec. 4.1 an adaptive period estimator is presented with a computational complexity of $3\,O(N)$. The proposed estimator is extremely simple and easy to implement in digital hardware with limited computational capabilities. Simulations in Sec. 3.3 and Sec. 4.2 support the theoretical considerations and conclude the work.
2. TIME VARYING SPARSE POINT PROCESSES
Fig. 1. Time varying sparse periodic point process (event indices $n$ over time $t$, events $e[n]$, time stamps $y[n]$, and time-varying periods such as $p[3]$ and $p[9]$).
Events such as periodic beacons in a sensor network should be detected by observing the signal $y[n]$. The time series

y[n] = nP + e[n] + \varphi,   (1)

with random phase $\varphi$ and Gaussian measurement noise $e[n] \sim \mathcal{N}(0,\sigma_e)$, represents a monotonically increasing sequence which can be used to estimate the fundamental period of the generating periodic events. This process is called a non-sparse point process. A process is sparse if some events are missing, as depicted in Fig. 1. Additionally, if the periods change over time, the process is also time variant and we have to replace $P$ with $p[n]$. Hence, to model time variance and sparsity, the variable $n$ in (1) is replaced by a recursive description, resulting in

y[n] = y[n-1] + d[n]\,p[n] + e[n] + \varphi.   (2)
A discrete-time random variable $d[n]$ with mean $1 \le \mu_d < \infty$ is used to model this behavior. The time variation of the period is modeled by

p[n] = P_0 + p_t[n]   (3)

with $n \in \mathbb{N}$. We assume a periodic time variation as a starting point before we extend the discussion to more complex signals in Section 3.2. The time variation is

p[n] = P_0 + p_T \sin(\theta n)   (4)

with $\theta < \pi$ to fulfill the sampling theorem and $p_T$ the peak amplitude of the time variation.
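To make the signal model (2)-(4) concrete, the following minimal Python sketch generates time stamps of such a sparse, time-varying point process. The paper only fixes the mean $1 \le \mu_d < \infty$ of $d[n]$; the geometric draw for $d[n]$, the treatment of the phase as an initial offset, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def generate_sparse_process(n_samples=200, P0=1.0, p_T=1e-2, theta=2 * np.pi / 250,
                            sigma_e=1e-3, mu_d=2.0, phi=0.0, seed=0):
    """Time stamps y[n] of the received events of a sparse, time-varying point
    process following (2)-(4).

    d[n] >= 1 counts the elapsed periods between two received events; it is
    drawn here from a geometric law with mean mu_d (an assumption -- the paper
    only fixes the mean of d[n]).
    """
    rng = np.random.default_rng(seed)
    y = np.empty(n_samples)
    y[0] = phi + rng.normal(0.0, sigma_e)            # random phase plus noise, cf. (1)
    for n in range(1, n_samples):
        d_n = rng.geometric(1.0 / mu_d)              # sparsity: d[n] periods elapsed
        p_n = P0 + p_T * np.sin(theta * n)           # sinusoidal period variation (4)
        y[n] = y[n - 1] + d_n * p_n + rng.normal(0.0, sigma_e)   # recursion (2)
    return y

timestamps = generate_sparse_process()
```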
3. MEMORY IN PERIOD ESTIMATION
3.1. Sinusoidal time varying period
The MSE of the period estimator for a sparse point process with a sinusoidal time variation of frequency $\theta$ is, referring to [15],

\mathrm{MSE}[N] \lessapprox \frac{p_T^2}{2}\left(1-\frac{W(\theta,\mu_d N)\,W(\theta,N)}{\lfloor\mu_d N\rfloor N}\right)^{2} + \frac{4\mu_d^3 N p_T^2\theta^2}{3(1+\mu_d N)} + \frac{\sigma_d^2}{\lfloor N\mu_d\rfloor N^2},   (5)

if $N$ samples are available. On average, the process has $\mu_d - 1$ lost events and additive white Gaussian phase noise of variance $\sigma_d^2/2$. According to [15], all three terms in (5) have an identifiable source. Firstly, the frequency estimation error of the stationary process with phase noise is

\mathrm{MSE}_n[N] \approx \frac{\sigma_d^2}{\lfloor N\mu_d\rfloor N^2}.   (6)
Secondly, the error introduced by the sinusoidal time variation of the frequency is

\mathrm{MSE}_\theta[N] \approx \frac{p_T^2}{2}\left(1-\frac{W(\theta,\mu_d N)\,W(\theta,N)}{\lfloor\mu_d N\rfloor N}\right)^{2}   (7)

and, finally, there is an additive, upper-bounded interpolation error caused by the samples that are inserted in place of the lost samples of the time-varying process,

\mathrm{MSE}_i[N] \lessapprox \frac{4\mu_d^3 N p_T^2\theta^2}{3(1+\mu_d N)}.   (8)
All three MSE parts depend on the process parameters $\mu_d$, $p_T$, $\theta$ and on a single parameter of the estimator, $N$. Therefore only $N$ can be used to improve the MSE. Hence, we minimise the MSE over $N$,

\min_{N\in\mathbb{N}^+} \mathrm{MSE}[N],   (9)

which is solved by finding a solution of

\frac{\partial}{\partial N}\,\mathrm{MSE}[N]\Big|_{N_0} = 0.   (10)
Obviously, $N\in\mathbb{N}^+$, whereas the first derivative exists only if $N\in\mathbb{R}$. Without loss of generality, we can treat (5) as a continuous function on $\mathbb{R}$ and remap $N_0$ to $\mathbb{N}^+$ once the minima are found. With $W(\theta,\lfloor\mu_d N\rfloor) = \sin(\theta\lfloor\mu_d N\rfloor/2)/\sin(\theta/2)$, the derivative of $\mathrm{MSE}_\theta$ is rather complicated and it is not possible to find a closed-form solution of (10). To overcome this problem we use approximations of the trigonometric functions, assuming $N\theta \ll 1$. As the first- and second-order approximations vanish, at least a fourth-order approximation has to be used. As a first step we use $2\sin(\theta\mu_d N/2)\sin(\theta N/2) = \cos(\theta(\mu_d-1)N/2) - \cos(\theta(\mu_d+1)N/2)$ to avoid the trigonometric product:

\left(1-\frac{\sin(\theta\mu_d N/2)\,\sin(\theta N/2)}{\sin^2(\theta/2)\,\lfloor\mu_d N\rfloor N}\right)^{2} = \left(1-\frac{\cos(\theta(\mu_d-1)N/2)-\cos(\theta(\mu_d+1)N/2)}{2\sin^2(\theta/2)\,\lfloor\mu_d N\rfloor N}\right)^{2}.   (11)
Consequently, we apply a fourth-order approximation of the cosine function, $\cos(x) \approx 1 - \tfrac{1}{2}x^2 + \tfrac{1}{24}x^4$. After some manipulations we obtain

\frac{p_T^2}{2}\left(1-\frac{W(\theta,\mu_d N)\,W(\theta,N)}{\lfloor\mu_d N\rfloor N}\right)^{2} \approx \frac{p_T^2}{2}\left(1-\left(\frac{1}{2}\left(\Big(\frac{\theta(\mu_d+1)N}{2}\Big)^{2}-\Big(\frac{\theta(\mu_d-1)N}{2}\Big)^{2}\right)-\frac{1}{24}\left(\Big(\frac{\theta(\mu_d+1)N}{2}\Big)^{4}-\Big(\frac{\theta(\mu_d-1)N}{2}\Big)^{4}\right)\right)\frac{2}{\theta^2\mu_d N^2}\right)^{2} \approx \frac{p_T^2}{1152}\,N^4\theta^4(1+\mu_d^2)^2   (12)

and with (5) we obtain the approximation

\mathrm{MSE}[N] \approx \frac{p_T^2}{1152}\,N^4\theta^4(1+\mu_d^2)^2 + \frac{4\mu_d^3 N p_T^2\theta^2}{3(1+\mu_d N)} + \frac{\sigma_d^2}{\mu_d N^3}.   (13)
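As a quick numerical sanity check of this fourth-order approximation, the short sketch below evaluates the exact sinusoidal term, in the form of (7) above, against the first term of (13) for a few memory lengths with $N\theta \ll 1$; the parameter values are only illustrative.

```python
import numpy as np

def mse_theta_exact(N, theta, mu_d, p_T):
    """Sinusoidal MSE term as in (7), with W(theta, M) = sin(theta*M/2)/sin(theta/2)."""
    W = lambda M: np.sin(theta * M / 2.0) / np.sin(theta / 2.0)
    return p_T**2 / 2.0 * (1.0 - W(mu_d * N) * W(N) / (np.floor(mu_d * N) * N))**2

def mse_theta_approx(N, theta, mu_d, p_T):
    """Fourth-order approximation, first term of (13)."""
    return p_T**2 / 1152.0 * N**4 * theta**4 * (1.0 + mu_d**2)**2

theta, mu_d, p_T = 2 * np.pi / 250, 2.0, 1e-5
for N in (5, 10, 20):                      # N*theta << 1 for all three values
    print(N, mse_theta_exact(N, theta, mu_d, p_T), mse_theta_approx(N, theta, mu_d, p_T))
```

For these values the exact and approximated terms agree closely, which is the regime in which the closed-form optimization below is used.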
With this expression it is possible to solve (10). It is obvious that $\mathrm{MSE}_i$ is approximately constant in $N$, so its derivative vanishes. Moreover, we can write a lower bound of the MSE as

\mathrm{MSE}_\theta[N] + \mathrm{MSE}_i[N] + \mathrm{MSE}_n[N] \gtrapprox \mathrm{MSE}[N] \gtrapprox \mathrm{MSE}_{\theta,n}[N] = \mathrm{MSE}_\theta[N] + \mathrm{MSE}_n[N].   (14)

Thus, with the lower bound (14), (10) leads to

\frac{\partial}{\partial N}\,\mathrm{MSE}[N]\Big|_{N_0} = \frac{1}{288}\,\theta^4(\mu_d^2+1)^2 N_0^3\, p_T^2 - \frac{3\sigma_d^2}{\mu_d N_0^4} = 0   (15)
Fig. 2. Period estimation MSE of time-varying processes (sinusoidal and random band-limited time variation with upper and lower bounds, for $B,\theta = 2\pi/250$ and $2\pi/2000$; $\mu_d = 2$, $\sigma_d^2 = 10^{-11}$, $p_T = 10^{-5}$).
which yields

N_0 = \sqrt[7]{\frac{864\,\sigma_d^2}{p_T^2\,\theta^4\,(\mu_d^2+1)^2\,\mu_d}}.   (16)
$N_0$ has to be remapped by rounding, $N_\theta = \lfloor N_0 \rceil$. This result is confirmed by the simulations depicted in Fig. 2 and described in Section 3.3. It is inversely proportional to $\theta^{4/7}$; therefore, if the period of the process is varying, it is important to adapt $N$ to improve the estimation quality. Consequently, this parameter is used to improve the estimation algorithm described in Section 4.1.
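For the parameter values used in the simulations of Sec. 3.3 ($\sigma_d^2 = 10^{-11}$, $p_T = 10^{-5}$, $\mu_d = 2$), a direct numerical evaluation of (16) reproduces the optima reported there; the snippet below is a minimal illustration.

```python
import numpy as np

def N_theta(theta, sigma_d2, p_T, mu_d):
    """Optimal memory length according to (16), rounded to the nearest integer."""
    N0 = (864.0 * sigma_d2 / (p_T**2 * theta**4 * (mu_d**2 + 1)**2 * mu_d))**(1.0 / 7.0)
    return int(round(N0))

sigma_d2, p_T, mu_d = 1e-11, 1e-5, 2.0
print(N_theta(2 * np.pi / 250, sigma_d2, p_T, mu_d))    # -> 9,  cf. Sec. 3.3
print(N_theta(2 * np.pi / 2000, sigma_d2, p_T, mu_d))   # -> 29, cf. Sec. 3.3
```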
3.2. White band-limited time varying period
According to (3), a random time variation is introduced with $p_t[n] \sim \mathcal{N}(0,\sigma_p)$ and a band limitation to $\mathcal{B} = [-B, B[$. To preserve power equivalence we set $\sigma_p = p_T/\sqrt{2}$. Within $\mathcal{B}$, $p_t[n]$ is white and therefore contains infinitely many frequency components $\theta$, each with an individual $\mathrm{MSE}_\theta[N]$. To emphasize this we write $\mathrm{MSE}[N,\theta]$, using (13). For $N\theta \ll 1$ its frequency dependence is proportional to $\theta^4$, and for $N\theta > 1$ the MSE is limited by $p_T^2/2$. The interpolation error is proportional to $\theta^2$, and therefore

\mathrm{MSE}[N,\theta] \le \mathrm{MSE}[N,B] \quad \text{if } \theta \le B,\ B < \infty.   (17)
Hence, it is evident that $\mathrm{MSE}[N,B]$ is the worst case of the estimation error. Consequently, (5) is also an upper bound for the white band-limited time-varying sparse process. Since $\mathrm{MSE}[N,\theta]$ was derived based on a linear approximation, we proceed with the superposition assumption for $N\theta \ll 1$ and use as the approximated mean MSE within $\mathcal{B}$

\mathrm{MSE}_\theta^{\mathcal{B}}[N] = \frac{1}{2B}\int_{-B}^{B} \mathrm{MSE}[N,\theta]\,\mathrm{d}\theta = \frac{1}{B}\int_{0}^{B} \left(\frac{p_T^2}{1152}N^4\theta^4(1+\mu_d^2)^2 + \frac{\sigma_d^2}{\mu_d N^3}\right)\mathrm{d}\theta = \frac{3.2\,p_T^2}{1152}\,N^4\Big(\frac{B}{2}\Big)^{4}(1+\mu_d^2)^2 + \frac{\sigma_d^2}{\mu_d N^3}.   (18)

According to (15), (16) and the signal power equivalence, we find for the optimal $N_B$ of the band-limited time-varying period

N_B = \sqrt[7]{\frac{864\,\sigma_d^2}{6.4\,\sigma_P^2\,\big(\frac{B}{2}\big)^{4}\,(\mu_d^2+1)^2\,\mu_d}}.   (19)
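The band-averaging step in (18) and the resulting optimum (19) can be checked symbolically. The sketch below (an illustrative check using sympy, not part of the estimator itself) performs the integral and confirms that the stationary point of the band-averaged MSE coincides with (19) once $\sigma_P^2 = p_T^2/2$ is substituted.

```python
import sympy as sp

N, B, p_T, sigma_d, mu_d = sp.symbols('N B p_T sigma_d mu_d', positive=True)
theta = sp.symbols('theta')

# theta-dependent part of (13) plus the noise term; the interpolation term is
# independent of N and is dropped, as in the lower bound (14)
mse = p_T**2 / 1152 * N**4 * theta**4 * (1 + mu_d**2)**2 + sigma_d**2 / (mu_d * N**3)

# average over the band [-B, B[, cf. (18)
mse_B = sp.integrate(mse, (theta, -B, B)) / (2 * B)

# coefficient form used in (18): 3.2 p_T^2/1152 * N^4 * (B/2)^4 * (1 + mu_d^2)^2
ref = sp.Rational(16, 5) * p_T**2 / 1152 * N**4 * (B / 2)**4 * (1 + mu_d**2)**2 \
      + sigma_d**2 / (mu_d * N**3)
print(sp.simplify(mse_B - ref))          # -> 0

# setting the N-derivative to zero and solving for N^7 reproduces (19)
stationary = sp.expand(sp.diff(mse_B, N) * N**4)     # = c1*N^7 - c2
N7_opt = sp.solve(stationary, N**7)[0]
N_B7 = 864 * sigma_d**2 / (sp.Rational(32, 5) * (p_T**2 / 2) * (B / 2)**4
                           * (mu_d**2 + 1)**2 * mu_d)
print(sp.simplify(N7_opt - N_B7))        # -> 0
```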
3.3. Simulation results
Throughout this paper, all simulations were done with 100 Monte Carlo runs covering a time span of two periods of the time-varying process realizations. In Fig. 2 the $\mathrm{MSE}[N,\theta]$ is depicted with $\theta$ as a parameter. Let us consider the first simulation, with $\theta = 2\pi/250$. The simulation results lie within the derived upper and lower bounds given by (14). With these parameter settings, all three terms of the bound are clearly identifiable. The first part, for $N = 3 \ldots N_\theta$, is dominated by the decay of the error proportional to $O(N^{-3})$, which is represented by $\mathrm{MSE}_n[N,\theta]$. The MSE decay reaches its minimum at $N_\theta = 9$, in accordance with the theoretically derived minimum; the value was both observed in the simulation and calculated with equation (16). The MSE minimum is raised by the interpolation error $\mathrm{MSE}_i[N,\theta]$, which is noticeable as a flattening of the peak around the minimum. Beyond $N_\theta$, $\mathrm{MSE}_\theta[N,\theta]$ dominates with its increase proportional to $O(N^4)$, and finally it saturates at $p_T^2/2$. The second simulation in Fig. 2, with $\theta = 2\pi/2000$, shows the same behaviour and a minimum at $N_{2\pi/2000} = 29$.
In a further simulation we consider periodic point processes whose time-varying period is modeled by a white band-limited stochastic process. The MSE of the period estimation is considered in Fig. 2. We depict the upper bound of $\mathrm{MSE}[N,2\pi/250]$ as a dashed curve. It represents the worst case, as if the time variation were a single sinusoid with power equivalent to the stochastic time variation. As expected, the simulated $\mathrm{MSE}[N,\theta]$ lies well below this worst case of the estimation error. Moreover, there also exists a minimum of the estimation error at $N_B$, which is confirmed by the theoretical result of (19). With $B = 2\pi/250$ the value is $N_B = 17$. These results are supported by further simulations, e.g. with the frequency band $B = 2\pi/2000$, as depicted by the red curves in Fig. 2.
4. ESTIMATOR DESIGN
The time variance of the observed signal is determined by system parameters of the underlying process, $p_T$, $\sigma_d$, $\mu_d$ and $B$; hence the algorithm can only be tuned via the estimator memory $N$. The parameter $\mu_d$ can easily be measured as the ratio between the total number of samples and the received events of the sparse process. The parameter $\sigma_d$ is derived from the maximum prediction gain [16]: if the prediction horizon is near zero, the prediction error converges to the measurement noise variance. If we consider the period estimation as period prediction, the results of [16] can be applied in a straightforward manner.
Fig. 3. Adaptive period estimation (block diagram: time stamps $y[n]$ are differenced over $\lfloor\mu_d N\rfloor$ samples, squared and averaged over $N$ to yield $\hat{P}[n]$; missing events are inserted via a feedback path when $t > y[n] + \Delta t_{\max}$, and $\hat{\mu}_d$, $\hat{\sigma}_d^2$, $\sigma_P^2$ are estimated to update $N$ via $N_B$).
Fig. 4. MSE for white time variance and different $\sigma_d^2$ (estimates and bounds for $\sigma_d^2 = 10^{-5}, 10^{-10}, 10^{-15}, 10^{-20}, 10^{-25}$; $B = 2\pi/250$, $p_T = 10^{-5}$, $\mu_d = 2$).
To estimate $\sigma_d$, we use an estimate with one memory element and compare it with the following period measurement. This is equivalent to the shortest possible period-prediction horizon, and therefore the mean estimation error with one memory tap is the best guess we can obtain for the measurement noise $\hat{\sigma}_d^2/2$ used in (5). The variance of the time-varying process, $\sigma_P^2$, is calculated from the recorded samples $y_d[n]$. Finally, the bandwidth $B$ has to be assumed from the underlying physics or from a band-limiting input device.
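As an illustration of how these quantities can be obtained from the recorded data, the short sketch below estimates $\mu_d$ as the ratio described above and $\sigma_d^2$ from the one-tap prediction error; the function names and the exact scaling factors are our own assumptions, not taken from [16] or [17].

```python
import numpy as np

def estimate_mu_d(n_total, n_received):
    """mu_d as the ratio of all samples (received plus inserted) to received events."""
    return n_total / n_received

def estimate_sigma_d2(periods):
    """One-tap prediction: each measured period is predicted by its predecessor.
    The mean squared prediction error approximates the measurement-noise level
    (the factor 1/2 accounts for the noise entering both samples of the
    difference; this scaling is our assumption)."""
    err = np.diff(periods)
    return 0.5 * np.mean(err ** 2)

def estimate_sigma_P2(periods):
    """Sample variance of the measured periods, used as sigma_P^2 in (19)."""
    return float(np.var(periods))
```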
4.1. The estimator
According to [17], the estimator is designed as depicted in Fig. 3. It estimates the period $\hat{P}[n]$ of repeating events as shown in Fig. 1. Time stamps are stored in $y[n]$ whenever a trigger event occurs. The last $\lfloor\mu_d N\rfloor$ time stamps are stored to calculate the difference $y_d[n]$ between the current time stamp and the time stamp $\lfloor\mu_d N\rfloor$ samples earlier. The difference is squared, $y_d^2[n]$, and averaged over $N$ samples. The scaled square root of the moving average is the estimate of the process period. Missing events are detected if the time counter $t$ becomes larger than $y[n] + t_{\max}$. In the case of one or more missing events, new events are created by adding an averaged sample difference $y_d[n]/\lfloor\mu_d N\rfloor$ to the previous sample via a feedback loop. The ratio between the total number of samples (inserted and detected) and the number of detected samples yields the parameter $\mu_d$ used for the calculation of $N_B$. $\sigma_d^2$ is calculated using the first partial sum of the moving-average filter. Finally, $\sigma_P^2$ is the variance of the differences $y_d[n]$. The optimized $N_B$ can be calculated with (19) for each estimated period, and the parameter $N$ is updated according to the result for $N_B$. The computational complexity of the non-adaptive algorithm was derived in [17] as $O(N)$. The adaptation adds two further $O(N)$ contributions to the total complexity, owing to the two variance-estimation tasks. In subsequent estimates the algorithm can be altered to reuse the previous estimates. We therefore obtain an extremely efficient algorithm with a constant complexity of $3\,O(1)$ after the initial calculation of $3\,O(N)$.
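The following Python sketch mirrors the structure of Fig. 3 at a high level: time stamps are differenced over $\lfloor\mu_d N\rfloor$ samples, squared and averaged over $N$ values, and the scaled root of this moving average yields $\hat{P}[n]$; missing events are filled in via the feedback path, and $N$ is re-tuned with (19). It is a simplified, block-wise (non-recursive) reading of the diagram, and details such as the buffer handling, the plug-in parameter estimates and the clipping of $N$ are our own choices for illustration.

```python
import numpy as np
from collections import deque

class AdaptivePeriodEstimator:
    """Simplified sketch of the adaptive estimator of Fig. 3 (block-wise, non-recursive)."""

    def __init__(self, N_init=8, B=2 * np.pi / 250):
        self.N = N_init                 # moving-average length, adapted via (19)
        self.B = B                      # assumed bandwidth of the period variation
        self.mu_d = 1.0                 # ratio of total to detected samples
        self.y = deque(maxlen=4096)     # stored time stamps (detected + inserted)
        self.n_total = 0
        self.n_detected = 0
        self.P_hat = None

    def _window(self):
        return max(1, int(np.floor(self.mu_d * self.N)))

    def insert_missing(self):
        """Insert a synthetic time stamp when no event arrived within the
        expected interval (feedback path in Fig. 3)."""
        if self.P_hat is not None and self.y:
            self.y.append(self.y[-1] + self.P_hat)
            self.n_total += 1

    def update(self, timestamp):
        """Process a detected event time stamp and return the current period estimate."""
        self.y.append(timestamp)
        self.n_total += 1
        self.n_detected += 1
        self.mu_d = self.n_total / self.n_detected

        w = self._window()
        if len(self.y) <= w:
            return self.P_hat
        ys = np.asarray(self.y)
        y_d = ys[w:] - ys[:-w]          # differences over w = floor(mu_d*N) samples
        y_d = y_d[-self.N:]             # keep the last N values for the moving average
        self.P_hat = np.sqrt(np.mean(y_d ** 2)) / w   # scaled root of the moving average

        # plug-in parameter estimates and re-tuning of N via (19); these choices
        # (per-period normalisation, clipping range) are our own assumptions
        periods = y_d / w
        if len(periods) > 3:
            sigma_P2 = np.var(periods)
            sigma_d2 = 0.5 * np.mean(np.diff(periods) ** 2)
            if sigma_P2 > 0.0 and sigma_d2 > 0.0:
                N_B = (864.0 * sigma_d2 /
                       (6.4 * sigma_P2 * (self.B / 2) ** 4
                        * (self.mu_d ** 2 + 1) ** 2 * self.mu_d)) ** (1.0 / 7.0)
                self.N = int(np.clip(round(N_B), 3, 1024))
        return self.P_hat
```

In a deployment, insert_missing() would be triggered by a timer exceeding the expected arrival time, as described above, and the recursive $O(1)$ variant of [17] would replace the block-wise averages by running sums.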
4.2. The estimation results
In Fig. 4 the period estimation error of sparse periodic processes with a time-varying period is depicted. The figure shows the MSE of the presented estimator for five different measurement noise levels. For high measurement noise power, the MSE decays with $N$ down to the level of the time-varying disturbance, which cannot be reduced further by increasing $N$. For lower measurement noise, a minimum of the estimation error exists at the memory size $N_B$, which coincides with the theory. Finally, if the measurement noise is below the interpolation error, no minimum exists. In all three cases the adaptive estimator is able to achieve the optimum MSE through its adaptation of $N$.
5. CONCLUSION
We presented an adaptive period estimator for sparse periodic time-varying point processes. The main result of this contribution is an algorithm designed to follow the changes of a time-varying process by adapting its memory length $N$ so as to achieve a minimum MSE in period estimation. The algorithm is of extremely low complexity, $3\,O(N)$, and can be implemented in a recursive form which reduces the complexity to $3\,O(1)$. The estimator is applicable, e.g., in low-power sensing devices which are exposed to temperature variations. These environmental changes introduce time-varying behaviour of the crystals, and the presented algorithm is able to follow the changes with a minimum MSE.
6. REFERENCES
[1] H. P. Bernhard, A. Berger, and A. Springer, “Timing synchronization of low power wireless sensor nodes with largely differing clock frequencies and variable synchronization intervals,” in 2015 IEEE 20th Conference on Emerging Technologies and Factory Automation (ETFA), Luxembourg, Luxembourg, Sept 2015, pp. 1–7.
[2] E. Conte, A. Filippi, and S. Tomasin, “ML period esti-
mation with application to vital sign monitoring,” IEEE
Sig. Process. Letters, vol. 17, no. 11, pp. 905–908, 2010.
[3] L. Tavares Bruscato, T. Heimfarth, and E. Pignaton de Freitas, “Enhancing time synchronization support in wireless sensor networks,” Sensors, vol. 17, no. 12, 2017. [Online]. Available: http://www.mdpi.com/1424-8220/17/12/2956
[4] M. Xu, W. Xu, T. Han, and Z. Lin, “Energy-efficient time synchronization in wireless sensor networks via temperature-aware compensation,” ACM Trans. Sen. Netw., vol. 12, no. 2, pp. 12:1–12:29, Apr. 2016. [Online]. Available: http://doi.acm.org/10.1145/2876508
[5] I. V. L. Clarkson, “Approximate maximum-likelihood period estimation from sparse, noisy timing data,” IEEE Transactions on Signal Processing, vol. 56, no. 5, pp. 1779–1787, May 2008.
[6] H. Ye, Z. Liu, and W. Jiang, “Efficient maximum-
likelihood period estimation from incomplete timing
data,” in International Conference on Automatic Control
and Artificial Intelligence (ACAI 2012), March 2012,
pp. 959–962.
[7] P. Stoica, J. Li, and H. He, “Spectral analysis of nonuni-
formly sampled data: A new approach versus the pe-
riodogram,” IEEE Transactions on Signal Processing,
vol. 57, no. 3, pp. 843–858, March 2009.
[8] J. K. Nielsen, M. G. Christensen, and S. H. Jensen, “De-
fault bayesian estimation of the fundamental frequency,”
IEEE Transactions on Audio, Speech, and Language
Processing, vol. 21, no. 3, pp. 598–610, March 2013.
[9] E. J. Hannan, “The estimation of frequency,” Journal of Applied Probability, vol. 10, no. 3, pp. 510–519, 1973.
[10] B. G. Quinn, “Recent advances in rapid frequency estimation,” Digital Signal Processing, vol. 19, no. 6, pp. 942–948, 2009, DASP'06 - Defense Applications of Signal Processing.
[11] B. G. Quinn, R. G. McKilliam, and I. V. L. Clark-
son, “Maximizing the periodogram,” in IEEE Global
Telecommunications Conference, Nov 2008, pp. 1–5.
[12] R. G. McKilliam, I. V. L. Clarkson, and B. G. Quinn, “Fast sparse period estimation,” IEEE Signal Processing Letters, vol. 22, no. 1, pp. 62–66, Jan 2015.
[13] A. Napolitano, “Asymptotic normality of cyclic auto-
correlation estimate with estimated cycle frequency,”
in 23rd European Signal Processing Conference (EU-
SIPCO 2015), Aug 2015, pp. 1481–1485.
[14] ——, “On cyclic spectrum estimation with estimated cycle frequency,” in 24th European Signal Processing Conference (EUSIPCO 2016), Budapest, Hungary, Aug 2016, pp. 160–164.
[15] H.-P. Bernhard, B. Etzlinger, and A. Springer, “Period estimation with linear complexity of sparse time varying point processes,” in 2017 51st Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, Nov 2017, pp. 1–5.
[16] H. P. Bernhard, “A tight upper bound on the gain of linear and nonlinear predictors for stationary stochastic processes,” IEEE Transactions on Signal Processing, vol. 46, no. 11, pp. 2909–2917, Nov 1998.
[17] H.-P. Bernhard and A. Springer, “Linear complex itera-
tive frequency estimation of sparse and non-sparse pulse
and point processes,” in 2017 25th European Signal
Processing Conference (EUSIPCO 2017), Kos, Greece,
Aug. 2017, pp. 1150–1154.
... Hence, if beacons are lost and time-varying clocks occur due to environmental influences, frequency estimation becomes more complicated. Period estimation of such time-varying processes has been considered in [5]. If N is the memory size of the estimator, the estimation mean square error scales with O(N −3 ) in situations where the signal is stationary [6] or time variation of the amplitude is much smaller than measurement noise. ...
... The MSE of the period estimator for sparse point process with a sinusoidal time variation with frequency θ is, referring to [5], ...
... On average, the process has µ d − 1 lost events and additive white Gaussian phase noise σ 2 d /2. According to [5], all three terms in (5) have an identifiable source. Firstly, the frequency estimation error of the stationary process with phase noise is ...
... If the process is stationary for at least N considered samples, the estimation error can be deviated in the same way as the fundamental frequency f P = 1 P of the point process presented in [18]. Considering (27), it was proven in [20] that, the mean square error (MSE) of the period estimator for stationary sparse point processes is ...
... A detailed prove is given in [20]. ...
... Different sources will show a distinct period or pattern in their transmit behaviour and with an accurate period estimation future collisions with the own communication can be predicted. Authors in [23], [24] presented methods to estimate the period of sparse point processes that can also be used for interference tracking. However, without additions, the presented algorithms are not able to track multiple sources of interference and fail in distinguishing sources with the same interference pattern, e.g., multiple WLAN sources. ...
Article
Full-text available
Nowadays Internet-of-Things and Industry 4.0 devices are often connected wirelessly. Current wireless sensor network (WSN) deployments are relying in most cases on the industrial, scientific and medical (ISM) bands without centralized resource scheduling. Thus, each device is a potential source of interference to other devices, both within its own WSN but also to devices in other collocated WSNs. If the transmission behaviour of devices from other WSNs is not random, we are able to find patterns in the time domain in their channel access. This is for example possible for periodic channel access, which is quite common for WSNs with demanding low-power and reliability requirements. The main goal of this work is to detect multiple sources of periodic interference in time slotted signal level measurements and estimate the time windows of future transmissions. This gives a WSN a certain understanding of the radio surrounding and can be used to adapt the transmission behaviour to thus avoid collisions. For this, the Multi Hypothesis Tracking algorithm is adapted and used together with timeslot-based interference measurements on low-cost sensor nodes. The applicability of the algorithm is shown with extensive simulations and the performance is demonstrated with measurements on a time division multiple access based WSN built upon the Bluetooth Low Energy physical layer.
Article
Full-text available
With the emerging Internet of Things (IoT) technology becoming reality, a number of applications are being proposed. Several of these applications are highly dependent on wireless sensor networks (WSN) to acquire data from the surrounding environment. In order to be really useful for most of applications, the acquired data must be coherent in terms of the time in which they are acquired, which implies that the entire sensor network presents a certain level of time synchronization. Moreover, to efficiently exchange and forward data, many communication protocols used in WSN rely also on time synchronization among the sensor nodes. Observing the importance in complying with this need for time synchronization, this work focuses on the second synchronization problem, proposing, implementing and testing a time synchronization service for low-power WSN using low frequency real-time clocks in each node. To implement this service, three algorithms based on different strategies are proposed: one based on an auto-correction approach, the second based on a prediction mechanism, while the third uses an analytical correction mechanism. Their goal is the same, i.e., to make the clocks of the sensor nodes converge as quickly as possible and then to keep them most similar as possible. This goal comes along with the requirement to keep low energy consumption. Differently from other works in the literature, the proposal here is independent of any specific protocol, i.e., it may be adapted to be used in different protocols. Moreover, it explores the minimum number of synchronization messages by means of a smart clock update strategy, allowing the trade-off between the desired level of synchronization and the associated energy consumption. Experimental results, which includes data acquired from simulations and testbed deployments, provide evidence of the success in meeting this goal, as well as providing means to compare these three approaches considering the best synchronization results and their costs in terms of energy consumption.
Article
Full-text available
Time synchronization is critical for wireless sensor networks (WSNs) because data fusion and duty cycling schemes all rely on synchronized schedules. Traditional synchronization protocols assume that wireless channels are available around the clock.However, this assumption is not true for WSNs deployed in intertidal zones. In this article, we present TACO, a synchronization scheme for WSNs with intermittent wireless channels and volatile environmental temperatures. TACO estimates the correlation of clock skews and temperatures by solving a constrained least squares problem and continuously adjusts the local time with the predicted clock skews according to temperatures. Our experiment conducted in an intertidal zone shows that TACO can greatly reduce the clock drift and prolong the resynchronization intervals.
Conference Paper
The problem of cyclic spectrum estimation for almost-cyclostationary processes with unknown cycle frequencies is addressed. This problem arises in spectrum sensing and source location algorithms in the presence of relative motion between transmitter and receiver. Sufficient conditions on the process and the cycle frequency estimator are derived such that frequency-smoothed cyclic periodograms with estimated cycle frequencies are mean-square consistent and asymptotically jointly complex normal. Under the same conditions, the asymptotic complex normal law is shown to coincide with the normal law of the case of known cycle frequencies. Monte Carlo simulations corroborate the effectiveness of the theoretical results.
Article
Very general forms of the strong law of large numbers and the central limit theorem are proved for estimates of the unknown parameters in a sinusoidal oscillation observed subject to error. In particular when the unknown frequency θ 0 , is in fact 0 or <it is shown that the estimate, , satisfies for N ≧ N 0 ( ω ) where N 0 ( ω ) is an integer, determined by the realisation, ω , of the process, that is almost surely finite.
Conference Paper
For an almost-cyclostationary signal, mean-square consistent and asymptotically complex normal estimators of the cyclic statistics exist, provided that the signal has finite or practically finite memory and the cycle frequency is perfectly known. In the paper, conditions are derived to obtain a mean-square consistent and asymptotically complex normal estimator of the cyclic autocorrelation function with estimated cycle frequency. For this purpose, a new lemma on conditioned cumulants of complex-valued random variables is derived. As an example of application, the problem of detecting a rapidly moving source emitting a cyclostationary signal is addressed and the case of a low Earth orbit satellite considered.
Conference Paper
The problem of estimation the period of a periodic event from a sequence of noisy, incomplete time-of-arrival (TOA) observations is studied. A novel grid spacing determination strategy is proposed for a class of numerical-search estimators, whose performances is determined by grid spacing selection. A modified algorithm based on the Fogel's Periodogram estimator is proposed by employing that strategy. It is shown that our algorithm is very likely to yield the maximum likelihood estimate (MLE), with a low complexity of O(n2). Simulation results demonstrate the superior performance of our estimator comparing with the other existing estimators.
Article
The problem considered is that of estimating the frequencies of sinusoidal components of a signal received along with noise, which is taken to be stationary. Of course the amplitudes and phases also need to be estimated as well, in some cases, as the number of components. The more important problem is that of on-line frequency tracking, where the frequency is changing with time and where the time available for computation may be small. This, very large, topic is also discussed.
Article
The problem of estimating the period of a point process from observations that are both sparse and noisy is considered. By sparse it is meant that only a potentially small unknown subset of the process is observed. By noisy it is meant that the subset that is observed, is observed with error, or noise. Existing accurate algorithms for estimating the period require O(N2)O({N^2}) operations where N is the number of observations. By quantizing the observations we produce an estimator that requires only O(NlogN)O(Nlog N) operations by use of the chirp z-transform or the fast Fourier transform. The quantization has the adverse effect of decreasing the accuracy of the estimator. This is investigated by Monte-Carlo simulation. The simulations indicate that significant computational savings are possible with negligible loss in statistical accuracy.