ADAPTIVE PERIOD ESTIMATION FOR SPARSE POINT PROCESSES
Hans-Peter Bernhard, Andreas Springer
Johannes Kepler University Linz,
Institute for Communications Engineering and RF-Systems
Altenbergerstr. 69, 4040 Linz, Austria, Email: h.p.bernhard@ieee.org
ABSTRACT
In this paper, adaptive period estimation for time-varying sparse point processes is addressed. Sparsity results from signal loss, which reduces the number of samples available for period estimation. We discuss bounds and minima of the mean square error of fundamental period estimation suitable for these situations. A ruleset is derived to determine the optimum memory length which achieves the minimum estimation error. The low-complexity adaptive algorithm operates with a variable memory length N to fit the recorded time-varying process optimally. The algorithm has complexity 3O(N); moreover, the overall complexity is reduced to 3O(1) if a recursive implementation is applied. This makes the algorithm an ideal candidate for keeping synchronicity in industrial wireless sensor networks operating in harsh and time-varying environments.
Index Terms: Frequency estimation, low complexity, sparse process, synchronisation, industrial sensor networks
1. INTRODUCTION AND RELATED WORK
Many applications rely on period or frequency estimation
such as carrier frequency recovery in communication sys-
tems, vital sign monitoring or synchronization in wireless
sensor networks (WSNs) [1, 2, 3, 4]. Within networks, beacon
signals are sent out periodically from a master and received
by many communication partners. Time stamping the arrival of each beacon enables the estimation of the beacon period and the synchronization of the local clock. Sparsity is caused by unavoidable occasional loss of communication links in harsh environments. Hence, if beacons are lost and the clocks vary over time due to environmental influences, frequency estimation becomes more complicated. Period estimation of such time-varying processes has been considered in [5]. If N is the memory size of the estimator, the estimation mean square error scales with $O(N^{-3})$ in situations where the signal is stationary [6] or the time variation of the amplitude is much smaller than the measurement noise. We model the measurements as a sparse periodic point process with additive phase noise in Sec. 2.

This work has been supported in part by the Austrian Research Promotion Agency (FFG) under grant number 853456 (FASAN: Flexible Autonome Sensorik in industriellen ANwendungen) and by research from the SCOTT project. SCOTT (www.scott-project.eu) has received funding from the Electronic Component Systems for European Leadership Joint Undertaking under grant agreement No 737422. This Joint Undertaking receives support from the European Union's Horizon 2020 research and innovation programme and Austria, Spain, Finland, Ireland, Sweden, Germany, Poland, Portugal, Netherlands, Belgium, Norway.
The period of a sparse point process is mostly assessed by spectral estimation techniques [7, 8, 9]. One of the common methods is periodogram estimation [10, 11, 12], which considers stationary processes; its computational complexity is of order $O(N \log(N))$ [13]. In [14, 15], fundamental frequency estimation of cyclo-stationary processes was introduced, leading to similar results. To the best of our knowledge, no method exists addressing the optimization of period estimation for sparse time-varying point processes. The optimization of the memory usage is discussed in Sec. 3, leading to a design rule that depends on straightforwardly measurable signal parameters. In Sec. 4.1, an adaptive period estimator is presented with a computational complexity of 3O(N). The proposed estimator is extremely simple and easy to implement in digital hardware with limited computational capabilities. Simulations in Sec. 3.3 and Sec. 4.2 support the theoretical considerations and conclude the work.
2. TIME VARYING SPARSE POINT PROCESSES
Fig. 1. Time varying sparse periodic point process (time stamps y[n], periods p[n], and beacon events on the time axis t).
Events like periodic beacons $\delta_{t_b}$ should be detected by the receiver of a node in a sensor network. Whenever a beacon is received, a time stamp $y[n] = t$ is taken from the time variable $t$. $y[n]$ represents a periodic time series with period $P$, written as

    y[n] = nP + e[n] + \varphi    (1)

with random phase $\varphi$ and measurement noise $e[n] \sim \mathcal{N}(0, \sigma_e)$ assumed to be Gaussian. This process is called a non-sparse point process. A process is sparse if some events are missing, as depicted in Fig. 1. Additionally, if the period changes over time, the process is also time variant and we have to replace $P$ with $p[n]$. In order to model time variance and sparsity, the variable $y[n]$ in (1) is replaced by the recursive formulation

    y[n] = y[n-1] + d[n]\, p[n] + e[n] + \varphi .    (2)

A time-discrete random variable $d[n]$ with mean $1 \le \mu_d < \infty$ is used to model this behavior. The time variation of the period, $p_t[n]$, and the average period $P_0$ define the individual period

    p[n] = P_0 + p_t[n]    (3)

with $n \in \mathbb{N}$. We assume a periodic time variation as a starting point before we extend the discussion to more complex signals in section 3.2. The time variation is

    p[n] = P_0 + p_T \sin(\theta n)    (4)

with $\theta < \pi$ to fulfill the sampling theorem, where $p_T$ is the peak amplitude of the time variation.
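To make the model in (2)-(4) concrete, the following Python sketch generates one realization of a sparse, time-varying point process. The parameter values and the geometric model for the loss variable d[n] are illustrative assumptions of ours, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper)
P0, p_T, theta = 1.0e-2, 1.0e-5, 2 * np.pi / 250  # mean period, variation amplitude, variation frequency
sigma_e = 1.0e-6                                   # measurement-noise standard deviation
loss_prob = 0.5                                    # probability that a single beacon is lost
n_events = 200                                     # number of received (detected) events

y = np.zeros(n_events)
y[0] = rng.uniform(0.0, P0)      # random phase enters as an initial offset
n_abs = 0                        # absolute beacon index, also counting lost beacons
for n in range(1, n_events):
    d = rng.geometric(1.0 - loss_prob)           # d[n] >= 1, mean mu_d = 1 / (1 - loss_prob)
    n_abs += d
    p = P0 + p_T * np.sin(theta * n_abs)         # time-varying period, Eq. (4)
    y[n] = y[n - 1] + d * p + rng.normal(0.0, sigma_e)  # recursion, Eq. (2)
```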
3. MEMORY IN PERIOD ESTIMATION
3.1. Sinusoidal time varying period
The MSE of the period estimator for a sparse point process with a sinusoidal time variation with frequency $\theta$ is, referring to [5],

    MSE[N] \lessapprox \frac{p_T^2}{2}\left(1 - \frac{W(\theta, \mu_d N)\, W(\theta, N)}{\lfloor \mu_d N \rfloor N}\right)^{2} + \frac{4\mu_d^3 N p_T^2 \theta^2}{3(1 + \mu_d N)} + \frac{\sigma_d^2}{\lfloor N \mu_d \rfloor N^2} ,    (5)

if $N$ samples are available. As an abbreviation, $W(\theta, \lfloor \mu_d N \rfloor) = \sin(\theta \lfloor \mu_d N \rfloor / 2) / \sin(\theta / 2)$ is used. On average, the process has $\mu_d - 1$ lost events and additive white Gaussian phase noise of power $\sigma_d^2 / 2$. According to [5], all three terms in (5) have an identifiable source. Firstly, the frequency estimation error of the stationary process with phase noise is

    MSE_n[N] \approx \frac{\sigma_d^2}{\lfloor N \mu_d \rfloor N^2} .    (6)

Secondly, the error introduced by the sinusoidal time variation of the frequency is

    MSE_\theta[N] \approx \frac{p_T^2}{2}\left(1 - \frac{W(\theta, \mu_d N)\, W(\theta, N)}{\lfloor \mu_d N \rfloor N}\right)^{2}    (7)

and finally, there is an additive, upper-bounded interpolation error introduced by the samples that are generated and inserted in place of lost samples,

    MSE_i[N] \lessapprox \frac{4\mu_d^3 N p_T^2 \theta^2}{3(1 + \mu_d N)} .    (8)
All three MSE parts depend on the process parameters $\mu_d$, $p_T$, $\theta$ and on one parameter of the estimator, $N$. Therefore, only $N$ can be used to improve the MSE. Hence, we minimise the MSE over $N$ by

    \min_{N \in \mathbb{N}^+} MSE[N] ,    (9)

which is solved by finding a solution for

    \frac{\partial}{\partial N} MSE[N] \Big|_{N_0} = 0 .    (10)

Obviously, $N \in \mathbb{N}^+$, whereas the first derivative exists only if $N \in \mathbb{R}$. Without loss of generality, we can treat (5) as a continuous function over $\mathbb{R}$ and remap $N_0$ to $\mathbb{N}^+$ once the minimum is found. The derivative of $MSE_\theta$ is rather complicated and it is not possible to find a closed-form solution for (10). To overcome this problem, we use approximations of the trigonometric functions by assuming $N\theta \ll 1$. Owing to the vanishing of the first- and second-order approximations, at least a fourth-order approximation has to be used. As a first step we use $2 \sin(\theta \mu_d N / 2) \sin(\theta N / 2) = \cos(\theta(\mu_d - 1)N/2) - \cos(\theta(\mu_d + 1)N/2)$ to circumvent the trigonometric product and write

    \left(1 - \frac{\sin(\theta \mu_d N / 2)\, \sin(\theta N / 2)}{\sin^2(\theta/2)\, \lfloor \mu_d N \rfloor N}\right)^{2} = \left(1 - \frac{\cos(\theta(\mu_d - 1)N/2) - \cos(\theta(\mu_d + 1)N/2)}{2 \sin^2(\theta/2)\, \lfloor \mu_d N \rfloor N}\right)^{2} .    (11)
Consequently, we apply the fourth-order approximation of the cosine function, $\cos(x) \approx 1 - \frac{1}{2}x^2 + \frac{1}{24}x^4$. After some transformations we obtain

    \frac{p_T^2}{2}\left(1 - \frac{W(\theta, \mu_d N)\, W(\theta, N)}{\lfloor \mu_d N \rfloor N}\right)^{2}
    \approx \frac{p_T^2}{2}\left(1 - \left(\frac{1}{2}\left(\left(\frac{\theta(\mu_d + 1)N}{2}\right)^{2} - \left(\frac{\theta(\mu_d - 1)N}{2}\right)^{2}\right) - \frac{1}{24}\left(\left(\frac{\theta(\mu_d + 1)N}{2}\right)^{4} - \left(\frac{\theta(\mu_d - 1)N}{2}\right)^{4}\right)\right) \frac{2}{\theta^{2} \mu_d N^{2}}\right)^{2}
    \approx p_T^2\, \frac{1}{1152}\, N^{4} \theta^{4} (1 + \mu_d^{2})^{2}    (12)

and with (5), the approximation results in

    MSE[N] \approx \frac{p_T^2}{1152}\, N^{4} \theta^{4} (1 + \mu_d^{2})^{2} + \frac{4\mu_d^3 N p_T^2 \theta^2}{3(1 + \mu_d N)} + \frac{\sigma_d^2}{\mu_d N^3} .    (13)

Finally, it is possible to solve (10) with this expression. It is obvious that $MSE_i$ is approximately constant in $N$, so its derivative vanishes. Moreover, we can write a lower bound of the MSE as

    MSE_\theta[N] + MSE_i[N] + MSE_n[N] \gtrapprox MSE[N] \gtrapprox MSE_{\theta,n}[N] = MSE_\theta[N] + MSE_n[N] .    (14)
Thus, with the lower bound (14), (10) leads to

    \frac{\partial}{\partial N} MSE[N] \Big|_{N_0} = \frac{1}{288}\, \theta^{4} (\mu_d^{2} + 1)^{2} N_0^{3}\, p_T^2 - \frac{3\sigma_d^2}{\mu_d N_0^{4}} = 0 .    (15)
Fig. 2. Period estimation MSE of time-varying processes (sinusoidal and random time variation, each with upper and lower bounds, for θ = 2π/250 and θ = 2π/2000; µ_d = 2, σ_d² = 10⁻¹¹, p_T = 10⁻⁵).
An explicit expression for $N_0$ can be found by solving (15), which yields

    N_0 = \sqrt[7]{\frac{864\, \sigma_d^2}{p_T^2\, \theta^{4}\, (\mu_d^{2} + 1)^{2}\, \mu_d}} .    (16)

$N_0$ has to be remapped by rounding, $N_\theta = \lfloor N_0 \rceil$. This result is confirmed by the simulations depicted in Fig. 2 and described in section 3.2. $N_\theta$ is inversely proportional to $\theta^{4/7}$; therefore, $N_\theta$ is adapted to the time variance to achieve better estimation quality. Consequently, the estimation algorithm uses the optimized $N_\theta$ to acquire the results presented in Fig. 2. The algorithm itself is described in section 4.1.
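As a quick numerical check of the design rule, the following Python snippet evaluates the approximation (13) on a grid of N and compares the grid minimizer with the closed-form N_0 of (16), using the parameter values annotated in Fig. 2. This is only an illustrative sanity check of ours, not the Monte Carlo simulation of Sec. 3.3.

```python
import numpy as np

# Parameters as annotated in Fig. 2
sigma_d2, p_T, mu_d, theta = 1e-11, 1e-5, 2.0, 2 * np.pi / 250

N = np.arange(3, 1000, dtype=float)
mse = (p_T**2 / 1152 * N**4 * theta**4 * (1 + mu_d**2) ** 2          # MSE_theta term of Eq. (13)
       + 4 * mu_d**3 * N * p_T**2 * theta**2 / (3 * (1 + mu_d * N))  # MSE_i term
       + sigma_d2 / (mu_d * N**3))                                   # MSE_n term

N_grid = int(N[np.argmin(mse)])                                      # numerical minimizer
N_0 = (864 * sigma_d2 / (p_T**2 * theta**4 * (mu_d**2 + 1) ** 2 * mu_d)) ** (1 / 7)  # Eq. (16)
print(f"grid minimum at N = {N_grid}, closed-form N_0 = {N_0:.1f}")  # both close to N_theta = 9
```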
3.2. White band-limited time varying period
According to (3), a random time variation is introduced with $p_t[n] \sim \mathcal{N}(0, \sigma_p)$ and a band limitation to $\mathcal{B} = [-B, B[$. To preserve power equivalence we set $\sigma_p = p_T/\sqrt{2}$. Within $\mathcal{B}$, $p_t[n]$ is white and therefore comprises an infinite number of frequency components $\theta$, each with an individual $MSE_\theta[N]$. To emphasize this we write $MSE[N, \theta]$ using (13). With $N\theta \ll 1$, its frequency dependence is proportional to $\theta^{4}$, and for $N\theta > 1$ the MSE is limited by $p_T^2/2$. The interpolation error is proportional to $\theta^{2}$ and therefore

    MSE[N, \theta] \le MSE[N, B] \quad \text{if } \theta \in \mathcal{B},\; B < \infty .    (17)

Hence, it is evident that $MSE[N, B]$ is the worst case for the estimation error. Consequently, (5) is also an upper bound for the white band-limited time-varying sparse process. $MSE[N, \theta]$ was derived based on a linear approximation; therefore, we proceed with the superposition assumption for $N\theta \ll 1$ and use for the approximated mean MSE within $\mathcal{B}$
    MSE_\theta^{\mathcal{B}}[N] = \frac{1}{2B} \int_{-B}^{B} MSE[N, \theta]\, d\theta
    = \frac{1}{B} \int_{0}^{B} \left( \frac{p_T^2}{1152}\, N^{4} \theta^{4} (1 + \mu_d^{2})^{2} + \frac{\sigma_d^2}{\mu_d N^3} \right) d\theta
    = \frac{3.2\, p_T^2}{1152}\, N^{4} \left(\frac{B}{2}\right)^{4} (1 + \mu_d^{2})^{2} + \frac{\sigma_d^2}{\mu_d N^3} .    (18)

According to (15), (16) and the signal power equivalence, we find for the optimal $N_B$ of the band-limited time-varying period

    N_B = \sqrt[7]{\frac{864\, \sigma_d^2}{6.4\, \sigma_P^2 \left(\frac{B}{2}\right)^{4} (\mu_d^{2} + 1)^{2}\, \mu_d}} .    (19)
3.3. Simulation results
Throughout this paper, all simulations were done with 100 Monte Carlo runs considering a time span of two periods of the time-varying process realizations. In Fig. 2, $MSE[N, \theta]$ is depicted with $\theta$ as parameter. Let us consider the first simulation with $\theta = 2\pi/250$. The simulation results lie within the derived upper and lower bounds given by (14). With these parameter settings, all three terms of the bound are clearly identifiable. The first part, for $N = 3 \ldots N_\theta$, is dominated by the decay of the error proportional to $O(N^{-3})$, which is represented by $MSE_n[N, \theta]$. The MSE decay reaches its minimum at $N_\theta = 9$, in accordance with the theoretically derived minimum; the value was observed by simulation and calculated with equation (16). The MSE minimum is increased by the interpolation error $MSE_i[N, \theta]$, which is noticeable as a flattening around the minimum. Beyond $N_\theta$, $MSE_\theta[N, \theta]$ dominates with an increase proportional to $O(N^{4})$, and finally it saturates at $p_T^2/2$. The second simulation in Fig. 2, with $\theta = 2\pi/2000$, shows the same behavior and a minimum at $N_{2\pi/2000} = 29$.
Furthermore, we consider periodic point processes with a time-varying period modeled by a white band-limited stochastic process. The MSE of period estimation is considered in Fig. 4. We depict the upper bound of $MSE[N, 2\pi/250]$ as a dashed curve. It represents the worst case, as if the time variation were a single sinusoid with power equivalent to the stochastic time variance. As expected, the simulated $MSE[N, \theta]$ lies well below this worst-case estimation error. Moreover, there also exists a minimum of the estimation error at $N_B$, which is confirmed by the theoretical result of (19). With $B = 2\pi/250$, the value is $N_B = 17$. These results are supported by further simulations, e.g. with frequency band $B = 2\pi/2000$, as depicted by the red curves in Fig. 2.
4. ESTIMATOR DESIGN
The time variance of the observed signal is determined by system parameters of the underlying process, $p_T$, $\sigma_d$, $\mu_d$ and $B$; hence the algorithm can be tuned using only the estimator memory $N$. The parameter $\mu_d$ can easily be measured as the ratio between the total number of samples and the number of received events of the sparse process. The parameter $\sigma_d$ is derived from the maximum prediction gain [16]: if the prediction horizon is near zero, the prediction error converges to the measurement noise variance. If we consider the period estimation as period prediction, the results of [16] can be applied straightforwardly.
Fig. 3. Adaptive period estimation (block diagram: time stamps y[n], differences y_d[n] and their squares, moving average over N, missing-event detection via t > y[n] + Δt_max, and estimates of µ_d, σ_d² and σ_P² feeding the computation of N_B).
Fig. 4. MSE for white time variance and different σ_d² (estimates and bounds for σ_d² = 10⁻⁵, 10⁻¹⁰, 10⁻¹⁵, 10⁻²⁰, 10⁻²⁵; B = 2π/250, p_T = 10⁻⁵, µ_d = 2).
To estimate $\sigma_d$, we use an estimator with one memory element and compare its output with the following period measurement. This is equivalent to the shortest possible period prediction horizon, and therefore the mean estimation error with one memory tap is the best guess we can obtain for the measurement noise $\hat{\sigma}_d^2/2$ used in (5). The variance of the time-varying process, $\sigma_P^2$, is calculated from the recorded samples $y_d[n]$. Finally, the bandwidth $B$ has to be assumed from the underlying physics or a band-limiting input device.
4.1. The estimator
According to [17], the estimator is designed as depicted in Fig. 3. It estimates the period $\hat{P}[n]$ of repeating events as shown in Fig. 1. Time stamps are stored in $y[n]$ whenever a trigger event occurs. The last $\lfloor \mu_d N \rfloor$ time stamps are kept to calculate the difference $y_d[n]$ between the current time stamp and the time stamp $\lfloor \mu_d N \rfloor$ samples earlier. The difference is squared, $y_d^2[n]$, and averaged over $N$ samples. The scaled square root of this moving average is the estimate of the process period. Missing events are detected if the time counter $t$ exceeds $y[n] + \Delta t_{max}$. In the case of one or more missing events, new events are created by adding an averaged sample difference $y_d[n]/\lfloor \mu_d N \rfloor$ to the previous sample via a feedback loop. The ratio between the total number of samples (inserted and detected) and the number of detected samples yields the parameter $\mu_d$ used for the calculation of $N_B$. $\sigma_d^2$ is estimated using the first partial sum of the moving-average filter. Finally, $\sigma_P^2$ is the variance of the differences $y_d[n]$. The optimized $N_B$ is calculated with (19) for each estimated period, and $N$ is updated according to $N_B$. The computational complexity of the non-adaptive algorithm was derived in [17] as $O(N)$; the adaptation adds twice $O(N)$ to the total complexity due to the two variance estimation tasks. For subsequent estimates, the algorithm can be altered to reuse the previous estimates. Therefore, we obtain an extremely efficient algorithm with a constant complexity of 3O(1) after an initial calculation of 3O(N).
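A minimal Python sketch of this adaptive loop is given below. The class name, the parameter defaults, and the simple plug-in estimators for µ_d and σ_P² (with σ_d² supplied externally) are our own illustrative choices; they only mirror the structure of Fig. 3, not the exact implementation of [17].

```python
import math
from collections import deque


class AdaptivePeriodEstimator:
    """Illustrative sketch of the adaptive period estimator (structure of Fig. 3)."""

    def __init__(self, N_init=8, sigma_d2=1e-11, B=2 * math.pi / 250):
        self.N = N_init            # moving-average memory length
        self.mu_d = 1.0            # sparsity factor estimate
        self.sigma_d2 = sigma_d2   # measurement-noise variance (assumed known here)
        self.B = B                 # assumed bandwidth of the period variation
        self.stamps = deque()      # recent time stamps y[n]
        self.periods = deque()     # recent per-event period samples y_d[n] / lag
        self.n_total = 0           # detected + inserted events
        self.n_detected = 0

    def _lag(self):
        return max(1, int(self.mu_d * self.N))

    def insert_missing(self, period_estimate):
        """Insert a synthetic event; to be called when t exceeds y[n] + dt_max."""
        self.n_total += 1
        self.stamps.append(self.stamps[-1] + period_estimate)
        self.mu_d = self.n_total / max(1, self.n_detected)

    def update(self, t):
        """Process one detected time stamp t and return the current period estimate."""
        self.n_detected += 1
        self.n_total += 1
        self.stamps.append(t)
        lag = self._lag()
        while len(self.stamps) > lag + 1:
            self.stamps.popleft()
        if len(self.stamps) <= lag:
            return None                              # not enough history yet
        y_d = self.stamps[-1] - self.stamps[0]       # difference over 'lag' events
        self.periods.append(y_d / lag)
        while len(self.periods) > self.N:
            self.periods.popleft()
        # scaled square root of the moving average of the squared differences
        P_hat = math.sqrt(sum(p * p for p in self.periods) / len(self.periods))
        self._adapt()
        return P_hat

    def _adapt(self):
        """Recompute N_B according to Eq. (19) and update the memory length N."""
        mean_p = sum(self.periods) / len(self.periods)
        sigma_P2 = sum((p - mean_p) ** 2 for p in self.periods) / len(self.periods)
        if sigma_P2 <= 0.0:
            return
        denom = 6.4 * sigma_P2 * (self.B / 2) ** 4 * (self.mu_d**2 + 1) ** 2 * self.mu_d
        self.N = max(3, round((864 * self.sigma_d2 / denom) ** (1 / 7)))
```

In a deployment, the caller would poll the time counter and invoke insert_missing whenever no beacon arrives within Δt_max of the previous time stamp, as the detection branch in Fig. 3 prescribes.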
4.2. The estimation results
The period estimation error of sparse periodic processes with a time-varying period is depicted in Fig. 4. The figure shows the MSE of the presented estimator for five different measurement noise levels. For high measurement noise power, the MSE decays proportionally to $N^{-3}$ down to the MSE level caused by the time-varying disturbance; this estimation error cannot be reduced by increasing $N$. For lower measurement noise, a minimum of the estimation error exists at the memory size $N_B$, which coincides with the theory. Finally, if the measurement noise is below the interpolation error, no minimum exists. In all three cases the adaptive estimator is able to achieve the optimum MSE based on its adaptation of $N$.
5. CONCLUSION
We presented an adaptive period estimator for sparse, periodic, time-varying point processes. The main result of this contribution is an algorithm designed to follow the changes of a time-varying process by adapting its memory length $N$ to achieve a minimum MSE in period estimation. The algorithm is of extremely low complexity, 3O(N), and can be implemented recursively, which reduces the complexity to 3O(1). The estimator is applicable, e.g., in low-power sensing devices which are exposed to temperature variations. These environmental changes introduce a time-varying behavior of the crystals, and the presented algorithm is able to follow the frequency changes with a minimum MSE.
6. REFERENCES
[1] H.-P. Bernhard, A. Berger, and A. Springer, "Timing synchronization of low power wireless sensor nodes with largely differing clock frequencies and variable synchronization intervals," in 2015 IEEE 20th Conference on Emerging Technologies and Factory Automation (ETFA), Luxembourg, Sept 2015, pp. 1-7.
[2] E. Conte, A. Filippi, and S. Tomasin, "ML period estimation with application to vital sign monitoring," IEEE Signal Processing Letters, vol. 17, no. 11, pp. 905-908, 2010.
[3] L. Tavares Bruscato, T. Heimfarth, and E. Pignaton de Freitas, "Enhancing time synchronization support in wireless sensor networks," Sensors, vol. 17, no. 12, 2017.
[4] M. Xu, W. Xu, T. Han, and Z. Lin, "Energy-efficient time synchronization in wireless sensor networks via temperature-aware compensation," ACM Transactions on Sensor Networks, vol. 12, no. 2, pp. 12:1-12:29, Apr. 2016.
[5] H.-P. Bernhard, B. Etzlinger, and A. Springer, "Period estimation with linear complexity of sparse time varying point processes," in 2017 51st Asilomar Conference on Signals, Systems and Computers, Pacific Grove, Nov 2017, pp. 1-5.
[6] I. Vaughan L. Clarkson, "Approximate maximum-likelihood period estimation from sparse, noisy timing data," IEEE Transactions on Signal Processing, vol. 56, no. 5, pp. 1779-1787, May 2008.
[7] H. Ye, Z. Liu, and W. Jiang, "Efficient maximum-likelihood period estimation from incomplete timing data," in International Conference on Automatic Control and Artificial Intelligence (ACAI 2012), March 2012, pp. 959-962.
[8] P. Stoica, J. Li, and H. He, "Spectral analysis of nonuniformly sampled data: A new approach versus the periodogram," IEEE Transactions on Signal Processing, vol. 57, no. 3, pp. 843-858, March 2009.
[9] J. K. Nielsen, M. G. Christensen, and S. H. Jensen, "Default Bayesian estimation of the fundamental frequency," IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 3, pp. 598-610, March 2013.
[10] E. J. Hannan, "The estimation of frequency," Journal of Applied Probability, vol. 10, no. 3, pp. 510-519, 1973.
[11] B. G. Quinn, "Recent advances in rapid frequency estimation," Digital Signal Processing, vol. 19, no. 6, pp. 942-948, 2009, DASP'06 - Defense Applications of Signal Processing.
[12] B. G. Quinn, R. G. McKilliam, and I. V. L. Clarkson, "Maximizing the periodogram," in IEEE Global Telecommunications Conference, Nov 2008, pp. 1-5.
[13] R. G. McKilliam, I. V. L. Clarkson, and B. G. Quinn, "Fast sparse period estimation," IEEE Signal Processing Letters, vol. 22, no. 1, pp. 62-66, Jan 2015.
[14] A. Napolitano, "Asymptotic normality of cyclic autocorrelation estimate with estimated cycle frequency," in 23rd European Signal Processing Conference (EUSIPCO 2015), Aug 2015, pp. 1481-1485.
[15] A. Napolitano, "On cyclic spectrum estimation with estimated cycle frequency," in 24th European Signal Processing Conference (EUSIPCO 2016), Budapest, Hungary, Aug 2016, pp. 160-164.
[16] H.-P. Bernhard, "A tight upper bound on the gain of linear and nonlinear predictors for stationary stochastic processes," IEEE Transactions on Signal Processing, vol. 46, no. 11, pp. 2909-2917, Nov 1998.
[17] H.-P. Bernhard and A. Springer, "Linear complex iterative frequency estimation of sparse and non-sparse pulse and point processes," in 2017 25th European Signal Processing Conference (EUSIPCO 2017), Kos, Greece, Aug. 2017, pp. 1150-1154.
The problem of estimating the period of a point process from observations that are both sparse and noisy is considered. By sparse it is meant that only a potentially small unknown subset of the process is observed. By noisy it is meant that the subset that is observed, is observed with error, or noise. Existing accurate algorithms for estimating the period require $O({N^2})$ operations where $N$ is the number of observations. By quantizing the observations we produce an estimator that requires only $O(Nlog N)$ operations by use of the chirp z-transform or the fast Fourier transform. The quantization has the adverse effect of decreasing the accuracy of the estimator. This is investigated by Monte-Carlo simulation. The simulations indicate that significant computational savings are possible with negligible loss in statistical accuracy.