Non-parametric Change Point Detection for Spike Trains

Copyright IEEE – to appear in CISS 2016 at Princeton
Thiago Mosqueiro
BioCircuits Institute
University of California San Diego
Martin Strube-Bloss
Theodor-Boveri-Institute of Bioscience
Biocenter University of Wuerzburg
Rafael Tuma
Institute of Physics of São Carlos
University of São Paulo
Reynaldo Pinto
Institute of Physics of São Carlos
University of São Paulo
Brian H. Smith
School of Life Sciences
Arizona State University
Ramon Huerta
BioCircuits Institute
University of California San Diego
Abstract—Two techniques of non-parametric change point
detection are applied to two different neuroscience datasets. In
the first dataset, we show how the multivariate non-parametric
change point detection can precisely estimate reaction times to
input stimulation in the olfactory system using joint information
of spike trains from several neurons. In the second example, we
propose to analyze communication and sequence coding using
change point formalism as a time segmentation of homogeneous
pieces of information, revealing cues to elucidate directionality
of the communication in electric fish. We also share our
software implementation, Chapolins, on GitHub.
Index Terms—Non-parametric, Olfaction, Electric fish, Communication

I. INTRODUCTION
In a wide range of disciplines – finance, medicine, molecular
biology, neuroscience, geology, etc. – it is often critical
to detect inhomogeneities and changes in time series. By
changes, we may refer to a sudden drop or spike in a daily
stock price, a rise in neural activity due to the onset of an
external stimulus, or changes in geological core samples [1].
In biology, all important processes operate based on sensing
and signaling pathways that detect changes (both external
and internal) and trigger response mechanisms [2], [3]. As
an example, reflex motor reactions to fast stimuli (such as
a burning sensation or a sudden loud noise) are usually
nimble responses triggered by local interneurons that are very
sensitive to certain changes in their sensory input. In the field
of statistics, determining whether there is a change in a time
series and estimating the most likely points of change is called
the change point problem [1], [4], [5]. Change point analysis
can be traced back to the 1950s, first appearing in the context
of industrial quality control.
As an illustration, let us consider figures 1(a-b), where we
shift the average and increase the variance of a Wiener process
at time t = 0. The goal of change point analysis is to find
the most probable time τ at which either of these changes
takes place, as in figure 1(c). More interestingly, such methods
can detect events in real-world time series: for instance, figure
1(d) shows detection of CO presence based on signals recorded
from electronic noses [6].
In neuroscience, timing is a central topic; however, advanced
change point techniques remain underutilized. Even though in
many cases it is possible to come up with alternative measures
or proxies [9], it would be hard to consistently generalize
them to spike trains or other neural measures. In this paper,
we revisit some of the state-of-the-art techniques available for
change point detection [10] and show how to apply them to
neuroscience data. We use two very recent datasets to analyze
two different problem types: (i) assessing the reaction time
of neural populations based on their spike trains and (ii)
constructing indicators for sequence segmentation to elucidate
animal communication.
In the first case, we use the spike trains to detect change
points in the activity of a neural population, based on electrophysiological
recordings of neurons in the first olfactory processing
stage of insects. Although in several cases the firing rate accurately
represents information, this is usually only true to some extent.
Individual dynamics may also enclose important patterns and
information, as is known to be the case in insect olfaction [11].
We then use a multivariate non-parametric method
(introduced in section II [10]) to jointly examine trains of
spikes. In the second case, we examine a signal recorded
from two freely swimming electric fish (Gymnotus sp.). Since
these fish are territorial, whenever they are placed in the
same aquarium they tend to interact intensely, both physically
and through their electrical pulses, until a relation of dominance
is established. Although electrocommunication is widely accepted,
little is known about how information is conveyed
in sequences of electrical pulses. Using Inter-Pulse-Intervals,
we use change point detection as an indicator for sequence
segmentation, which can potentially aid the detection of
important symbols in the electric fish vocabulary.
This paper is intended to give a minimal introduction to
non-parametric change point techniques for neuroscientists,
while presenting applications that might be appealing to a
wider audience. With these two examples, we cover two of the
mainstream classes of analyses performed on neural data,
giving a glimpse of how powerful such techniques are and
what their benefits are. To the best of our knowledge, such
techniques have not been used as described here, and to help
change this scenario we have made our own implementations
and a few examples available [12], accessible using Python
and (eventually) Matlab.
Fig. 1. Examples of time series where change point analysis is required. (a-b): Artificial data with simple changes in either average or variance. (c):
Results using a simple classical non-parametric approach – no hypotheses are made regarding the underlying model. (d): Real data from an array of electronic
noses at the onset of detection of CO volatiles [6]. Each line represents one sensor in one of the noses. (e): Example of neural activity where a stimulus is
presented (at t = 0) and the Instantaneous Firing Rate (IFR) of the population suddenly changes. These recordings are from honeybees' Projection Neurons
(PN) and Mushroom Body Output Neurons (MBON) [7]. (f): Recordings of freely swimming, pulse-type electric fish [8]. Inter-Pulse-Intervals (IPI) are
introduced in section IV.
II. CHANGE POINT DETECTION

Let X = {Xt : t = 1, 2, . . . , n} be a stochastic process in
discrete time. Detection of change events can be reduced to
the problem of contrasting two hypotheses. For instance, for
a single change point,

H1 : Xt ∼ F0, t = 1, 2, . . . , τ − 1,
     Xt ∼ F1, t = τ, τ + 1, . . . , n,   (1)

is the candidate hypothesis, contrasted with the null hypothesis
H0 under which Xt ∼ F0 for all t. F0 and F1 are the underlying
distributions during each of the regimes of the stochastic
process. As anticipated, in neuroscience the time series X
may represent, for instance, the instantaneous firing rate of
a given neural population, while τ is the time when an
external stimulus is perceived. F0 then describes the stimulus-free
behavior of that population, while F1 indicates the new
behavior after stimulation. In this sense, the difference between
τ and the time when the stimulation started is regarded as the
perception reaction time. Next, we discuss methodologies that
require as little prior knowledge about F0 and F1 as possible.
The goal is to answer the following two questions:
(i) How likely is it to find a change point?
(ii) What is the best estimate of the change point τ?
Naturally, (i) is answered with a p-value that indicates how
certain we are of rejecting H0 in favor of H1, whereas the
estimation of the change point comes out as the result of
some optimization – explained in the following subsections.
Figure 1(c) shows examples of estimated change points. In
general, for a stochastic process with n random variables, most
methods are based on solving n + 1 statistical tests [4]. The
generalization to several change points is fairly direct, and
can be found in the literature [1], [4].
A. Single change point using non-parametric test statistics
We first approach this problem using a very simple, but
useful, methodology first proposed by Pettitt [13]. It is based
on a chosen test statistic (not necessarily parametric), say, D,
and its associated two-sample hypothesis test. For instance,
the (Mann-Whitney) U statistic can easily solve the example in
figure 1(a), while failing at even strong differences in variance.
In general, the U statistic is a good candidate when the average
shifts, while for scaling changes as in figure 1(b) the Median
test statistic is more suitable.
In general, Kolmogorov-Smirnov or Cramér-von Mises tend
to capture a wider range of features, although possibly without
the same precision or accuracy as more specific tests. Several
analytical results and asymptotic bounds have been calculated
in the past for most test statistics [13], [14]. Finally, in case
of high confidence in the nature of F0 or F1, parametric test
statistics may be used to improve the precision of the change
point estimation.
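These recommendations can be checked quickly with synthetic data and SciPy's two-sample tests: the U statistic flags a mean shift but is nearly blind to a pure variance change, which Kolmogorov-Smirnov still catches. (The sample sizes and seed below are arbitrary choices of ours.)

```python
import numpy as np
from scipy.stats import ks_2samp, mannwhitneyu

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 500)   # baseline regime
b = rng.normal(1.0, 1.0, 500)   # mean shift, as in figure 1(a)
c = rng.normal(0.0, 3.0, 500)   # variance change, as in figure 1(b)

# Mean shift: both tests reject the null hypothesis.
print(mannwhitneyu(a, b).pvalue, ks_2samp(a, b).pvalue)

# Variance change: U is typically inconclusive here,
# while Kolmogorov-Smirnov still rejects.
print(mannwhitneyu(a, c).pvalue, ks_2samp(a, c).pvalue)
```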
Once D is chosen, we draw two sample fractions from the
time series and compare them. There is more than one way
to approach this problem, and we briefly introduce two in the
following paragraphs: the Sliding Window and the Moving Bar
strategies. For simplicity of notation, let [a, b]_Z = [a, b] ∩ Z
denote the discrete interval between two integers a and b.
Sliding Window p-value. Consider two time ranges R0 =
[t0, t0 + ∆0]_Z and R(t1) = [t1, t1 + ∆1]_Z, with t1 ∈
[t0 + ∆0, n − ∆1]_Z. Set R0 to be located at the beginning of the
time series (t0 is not necessarily 1). Then, we perform a two-sample
statistical test using D and evaluate p(t1) as the
p-value of {Xt : t ∈ R0} against {Xt : t ∈ R(t1)}. We set a
threshold θ to indicate that the test rejects its null hypothesis
and both windows R0 and R(t1) are in different regimes. For
a single change point, the estimation becomes

τ̂ = inf { t1 : I(p(t1) < θ) = 1 },   (2)

with I(·) being the indicator function.
This algorithm is especially suitable for long time series,
given its linear complexity and low computational cost –
the statistical tests are only computed over two
time windows. We show simple applications in figure 2(a-b).
Whenever R(t1) goes through a change point, p(t1)
decreases significantly – from our experience, most of the time
by three or more orders of magnitude; see the example in figure
2(a-b). Finally, notice that as ∆0 and ∆1 both increase, D is
more likely to capture slight changes in distribution. Yet, the
bigger ∆1, the lower the precision of the estimated τ̂.
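A minimal sketch of this strategy in Python (the function name and defaults are ours, not from Chapolins; for simplicity we take equal window widths ∆0 = ∆1 and a two-sample Kolmogorov-Smirnov test):

```python
import numpy as np
from scipy.stats import ks_2samp

def sliding_window_cp(x, width=100, theta=1e-3, t0=0):
    """Return the start of the first window whose KS p-value against
    the reference window R0 falls below theta, or None if none does."""
    ref = x[t0:t0 + width]                       # reference window R0
    for t1 in range(t0 + width, len(x) - width):
        if ks_2samp(ref, x[t1:t1 + width]).pvalue < theta:
            return t1                            # null rejected: new regime
    return None

rng = np.random.default_rng(1)
# Synthetic mean shift at index 500, as in figure 1(a).
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(1.5, 1.0, 500)])
print(sliding_window_cp(x))  # start of a window overlapping the change
```

Note the trade-off discussed above: a wider window makes the test more sensitive but blurs the estimate of τ̂.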
Moving Bar. Let Dk be the two-sample statistic evaluated
for {Xt : t ∈ [1, k − 1]_Z} against {Xt : t ∈ [k, n]_Z}. To
avoid dealing with the sign of D and to standardize the method,
define

D̂k = |Dk − ⟨Dk⟩| / σ,   (3)

with σ² = Var(Dk) as its variance. Finally,

τ̂ = argmax_k D̂k.   (4)

Under the hypothesis of a single change point, D̂k likely
presents a single maximum.
The Kolmogorov-Smirnov statistic can be evaluated in linear time, which
means this approach has complexity O(n²). It is worth noting
that larger data series might be treated in overlapping sections,
since otherwise computing the two-sample test statistics may
become expensive. In figure 2, we show an example of
this procedure.
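The Moving Bar strategy fits in a few lines as well (again with the two-sample Kolmogorov-Smirnov statistic; since the KS statistic is non-negative, plain standardization does not change the argmax here, but the absolute value in equation 3 matters for signed statistics):

```python
import numpy as np
from scipy.stats import ks_2samp

def moving_bar_cp(x, margin=50):
    """Estimate a single change point as the split k maximizing the
    standardized two-sample KS statistic between x[:k] and x[k:]."""
    dk = np.array([ks_2samp(x[:k], x[k:]).statistic
                   for k in range(margin, len(x) - margin)])
    dk_hat = (dk - dk.mean()) / dk.std()         # standardized statistic
    return margin + int(np.argmax(dk_hat))       # tau_hat = argmax_k

rng = np.random.default_rng(2)
# Synthetic mean shift at index 300; the full scan costs O(n^2).
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(2.0, 1.0, 300)])
print(moving_bar_cp(x))  # close to the true change point at 300
```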
Notice that the estimates of both methods (Sliding Window and
Moving Bar) have skewed distributions, i.e., they are statistically
biased; the bias we estimated is less than 0.8% of the whole
interval used. As expected, the bias is often positive, except
when the change point is artificially abrupt.
B. Multiple change points using a non-parametric divergence
Next, we review a recent approach proposed by James &
Matteson, namely, E-Divisive, based on a divergence measure
and a bisection search for change points [10]. No fixed number
is assumed a priori, and the only strong assumption made is
the existence of the α-th momentum. For our numerical tests,
we set α= 1.
Performing the same division as in the Moving Bar method,
let Xk = {Xt : t ∈ [1, k − 1]_Z} and Yk = {Xt : t ∈ [k, n]_Z}.
James & Matteson proposed the use of the following divergence
measure:

Eα(X, Y) = 2 E|X − Y|^α − E|X − X′|^α − E|Y − Y′|^α,   (5)
Fig. 2. Examples using non-parametric test statistics. For generality, we have
used the two-sample Kolmogorov-Smirnov statistic for both techniques. For each case,
we examined the confidence interval of the estimated change point. The right
panel represents the deviation from the actual change point (τ = 0) over 10³ trials.
(a-b): Sliding window strategy, with threshold p_thr = 10⁻³. (c): Moving
bar strategy. (d): E-Divisive strategy proposed by James & Matteson [10].
where X′ is an independent copy of X and | · | denotes the
Euclidean norm. Estimating Êα(X, Y) is not particularly
expensive. Then, for a single change-point estimation we could
take

τ̂ = argmax_k Q̂α(Xk, Yk),   with Q̂α(Xk, Yk) = (mn / (m + n)) Êα(Xk, Yk),

where m and n are the sizes of Xk and Yk. Q̂α(Xk, Yk) is
known to converge in distribution, as m and n grow, to a
non-degenerate random variable if Xk and Yk share the same
distribution; if not, Q̂α(Xk, Yk) → ∞ [10]. Finally, James &
Matteson proposed combining this divergence with a strategy
similar to the bisection root-finding method to locate multiple
change points [10]. Given the simple form of equation 5, the
extension to multivariate distributions is direct. The overall
complexity of the E-Divisive algorithm is O(Ncp n²), with
Ncp being the number of (a priori unknown) change points.
As we point out in section III, this is especially important
for neuroscience (and biology in general): it opens the
possibility of identifying change points using the joint
information of spike trains. Figure 2(d) shows the detection
of change points using the E-Divisive algorithm. Note that it
does detect more than one point, highlighting several different
regimes. All p-values are at most 0.025.
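Equation 5 translates almost verbatim into code. The sketch below (our own naming; a full multiple-change-point search with permutation p-values is what the E-Divisive implementation of [10] and our library [12] provide) scores every split of a univariate series by the scaled divergence Q̂α and takes the argmax:

```python
import numpy as np

def e_divergence(x, y, alpha=1.0):
    """Empirical estimate of eq. 5:
    2 E|X - Y|^a - E|X - X'|^a - E|Y - Y'|^a."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dxy = np.abs(x[:, None] - y[None, :]) ** alpha
    dxx = np.abs(x[:, None] - x[None, :]) ** alpha
    dyy = np.abs(y[:, None] - y[None, :]) ** alpha
    return 2.0 * dxy.mean() - dxx.mean() - dyy.mean()

def single_change_point(x, margin=30, alpha=1.0):
    """Argmax over splits k of the scaled divergence Q_hat."""
    n = len(x)
    q = [(k * (n - k) / n) * e_divergence(x[:k], x[k:], alpha)
         for k in range(margin, n - margin)]
    return margin + int(np.argmax(q))

rng = np.random.default_rng(3)
# Synthetic mean shift at index 150.
x = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(2.0, 1.0, 150)])
print(single_change_point(x))  # close to the true change point at 150
```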
Fig. 3. Activity of Projection Neurons (PNs) in the Antennal Lobe of
honeybees. In all cases, stimulation starts at t = 0. (a): Raster plot and
Instantaneous Firing Rate (IFR) of PNs. (b): Each color represents a different
exposure to the same odorant. (c): Change point analysis using the population
IFR. (d): We show different reaction times and second order statistics
depending on which odor is used. (e): The 3 principal components obtained by
PCA are shown, presenting hidden activity patterns in the spike train invisible
to IFRs. (f): Change point detection using jointly the five most important
components found by PCA.
III. REACTION TIMES IN THE OLFACTORY SYSTEM

Olfaction is one of the sensory modalities used to capture
important cues for survival, such as in mating and finding
resources. In insects, the main pathway for olfactory information
is known in some detail [15], [16], although much is yet to be
explained. Timing and time integration, for instance, are key
to elucidating the mechanisms of decision making [11], [17].
Upon odor stimulation, the Antennal Lobes (ALs) constitute
the first stage of olfactory processing in the brain. The
top panel of figure 3 showcases an experimental setup for
electrophysiological experiments involving odor stimulation
and conditioning. As one of the mainstream techniques for
studying population activity, we start with the Instantaneous
Firing Rate (IFR). Figure 3(a) shows recordings of Projection
Neurons (PNs) from the ALs of honeybees [7]. Unless
mentioned otherwise, stimulation starts at t = 0. Due to
the intrinsic out-of-equilibrium and noisy nature of olfactory
information, responses are slightly different on each repetition,
as shown in figure 3(b).
In this context, a natural question is how to estimate
the reaction time of that population to different odorants,
exploring its sensitivity and selectivity [9], [18], [19]. PNs can
discriminate different odorants on a time scale of hundreds of
milliseconds. Behaviorally, it is known that honeybees may
learn to react fast when trained, taking less than 0.4 s for a
motor response. Since these recordings come from untrained
bees, the observed reaction in this dataset reflects the honeybees'
response to novelty.
As a first approach, we can use E-Divisive change point
detection on the IFRs of the PNs to estimate their reaction
time. We show in figure 3(c) the detected reaction times for
a few trials. Since we have recordings of multiple odorants
(1-hexanol, 2-octanone and their mixture), we may even
assess how PNs react to each odorant individually: using 10
recordings for 3 different stimuli, we show in figure 3(d) the PNs'
response. We can clearly see differences that are statistically
significant across odorants, and maybe the most interesting
result is that detecting the mixture – which would be the hardest
task – shows larger variance with a lower median of reaction
times.
In many cases, neural populations do encode changes in
their IFRs [20]; nonetheless, IFRs only partially represent the
actual information. Indeed, this is the case for olfaction [21]:
although the activity in figure 3(a) bounces back to its original
firing rate, different spatio-temporal patterns emerge. This can be
visualized using the 3 most important components captured
by PCA over all spike trains, as shown in figure 3(e).
The black line represents the stimulus-free condition, while
the blue line is the activity of the stimulated PNs.
Thus, to include these second order features we can employ
the multivariate version of E-Divisive on the spike trains, using
each train as a different dimension. However, there are caveats:
as the dimension of the analyzed time series grows, not only
does the computational effort explode, but precise estimation
also requires many more samples. A possible solution is to use
the first few PCA components of the spike trains. In figure 3(f)
we show the result of the multivariate change point detection
using the first five PCA components (the ratio between the fifth
and first eigenvalues is 0.09). We show both the detected
reaction time, with a p-value of 0.002, and the profile of each
spike train over time.
Therefore, this is a general approach that can be applied
when it is necessary to go beyond firing-rate coding, including
information from individual dynamics in the spike trains.
For instance, if the firing rates remain the same while the
subpopulation changes – which is the case for odor identity
coding in the mammalian olfactory bulb [22] – this methodology
would successfully identify it. Also, changes in the correlations
among neurons, such as a change in synchrony, can be
easily captured by this multivariate analysis.
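A sketch of this pipeline on synthetic data standing in for spike trains: the toy population below keeps every neuron's mean and variance fixed while only the correlation (synchrony) jumps at t = 150, so the population IFR is flat, yet the divergence of equation 5 applied to the leading PCA components localizes the change. (All sizes and the generative model are our own toy choices.)

```python
import numpy as np

def e_div_mv(x, y, alpha=1.0):
    """Multivariate version of eq. 5, with |.| the Euclidean norm."""
    dxy = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1) ** alpha
    dxx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1) ** alpha
    dyy = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1) ** alpha
    return 2.0 * dxy.mean() - dxx.mean() - dyy.mean()

def pca_reduce(data, n_comp=5):
    """Project T x N population activity onto its first components."""
    c = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(c, full_matrices=False)
    return c @ vt[:n_comp].T

rng = np.random.default_rng(4)
n_t, n_neu = 300, 10
pre = rng.normal(0.0, 1.0, (150, n_neu))       # independent neurons
shared = rng.normal(0.0, 1.0, (150, 1))        # common drive -> synchrony
post = 0.8 * shared + 0.6 * rng.normal(0.0, 1.0, (150, n_neu))
data = np.vstack([pre, post])                  # same marginals throughout

z = pca_reduce(data, n_comp=5)
q = [(k * (n_t - k) / n_t) * e_div_mv(z[:k], z[k:])
     for k in range(50, n_t - 50)]
print(50 + int(np.argmax(q)))  # close to the synchrony change at 150
```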
IV. SEQUENCE SEGMENTATION IN ELECTRIC FISH

Some electric fish generate pulses of electric field with
a stereotypical waveform, similar to neurons' action potentials
[23], [24]. These pulses (or spikes) partially bounce back,
and the deformations are perceived by the electrosensory system,
working as echoes for electrolocation [25]. Additionally, recent
work has shown that pulse-type electric fish may also use
these pulses to communicate with their conspecifics, possibly
establishing dominance [8], [26]. If that is the case, little is known
about their language: do they present a grammar-structured
language? Which kind of sequence coding do these fish use?
[8], [27]. In the following, we propose the use of change point
detection to split the time series into "homogeneous" segments
of data, enabling one to identify hidden regularities in the fish's
behavior and communication. This time series segmentation
may reveal cues to elucidate the discretization and directionality
of communication in electric fish. Figure 4 (top) summarizes
the methodology.
Notice that communication studies often use a fixed time
discretization to define symbols and words, which is not
necessary here: segments have variable lengths instead. In
[26] it was shown that abundant information is encoded in the
Inter-Pulse-Interval (IPI) time series. Let tk be the time of the
k-th pulse of a fish; its Inter-Pulse-Interval time series is then
defined as ∆(tk) = tk+1 − tk, in close analogy to the Inter-Spike-Interval
(ISI). In figure 4 (top) we show the IPI time series of two electric
fish (Gymnotus sp.) freely swimming in the same environment
(an aquarium of 1×1×1 m³). To split the signals from the two
freely behaving fish, we have used a very recent suggestion [28].
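Computing the IPI series itself is a one-liner; the pulse times below are hypothetical numbers for illustration only:

```python
import numpy as np

def ipi_series(pulse_times):
    """Inter-Pulse-Interval series: Delta(t_k) = t_{k+1} - t_k."""
    t = np.sort(np.asarray(pulse_times, dtype=float))
    return np.diff(t)

pulses = [0.00, 0.02, 0.05, 0.09, 0.14]  # hypothetical pulse times (s)
print(ipi_series(pulses))                # [0.02 0.03 0.04 0.05]
```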
Depending on the time scale considered, each segment may
be assigned to either different symbols/words in the fish's
vocabulary, or whole sentences/messages. After the segmentation,
an ad hoc similarity analysis, ranging from comparing
pairs of correlations to an unsupervised learning strategy (such
as in [5], [29]), can be used to identify clusters of symbols.
In figure 4 (top), we picture yk and xk as the series of
symbols identified per segment, per fish. Then, complementary
information-theoretic tools and statistical learning methods
can assess the information content and flow in such symbols
(Pearson's correlation, mutual information, transfer entropy, etc.).
We start at a coarse time scale in figure 4(a), evaluating
the Instantaneous Firing Rate (IFR) associated with the pulse
time series, which aggregates hundreds of spikes and captures
the overall trend of the IPIs. Notice that this is done over the
course of hours, and may be easily linked to behavioral time
scales. Interestingly enough, figure 4(b) shows this analysis for
two fish interacting. The number of coincidences in the
detected change points is somewhat remarkable, with more than
half being minutes apart. This reflects some level of expected
synchrony between the fish for several reasons, including an
effect similar to the "Jamming Avoidance Response" [8].
We show in figure 4(c) segmentation of only part of the
Fig. 4. An indicator for sequence segmentation, both on a coarser time scale
and using Inter-Pulse-Intervals (IPIs). (a): Detection on a coarse-grained time
scale. (b): Comparing the detections using recordings of two fish freely
swimming. (c): Detection on a smaller time scale: change point detection likely
reveals information units in terms of communication between both fish. (d):
Small scale example of identifying communication motifs using unsupervised learning.
IPIs of a fish on a smaller time scale, over the course of a
few seconds. Probably the most interesting feature is that the
detected change points intercalate in the vast majority of cases:
for each change point of fish 2, fish 1 has a change point
closely following. Moreover, when the same fish presents two
change points in a row (predominantly observed in fish 1),
they are separated by a larger time period – possibly reflecting
silent lapses in the communication.
Finally, we have constructed a vector of statistics for each
segment (IPI variance, average slope, area under the curve
and time interval). Using PCA combined with a clustering
algorithm – K-Means or Affinity Propagation – reveals three
main clusters. Labeling each cluster from 0 to 2 and labeling
the xk and yk segments accordingly, we get a discretized version
of the IPI series. In figure 4(d) we show the detected cluster
per fish and some of the symbols.
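The feature-and-cluster step can be sketched as follows (our own toy IPI trace with three segments and pre-computed change points; the real analysis would feed the detected segments of both fish and use three clusters):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def segment_features(ipi, change_points):
    """One feature vector per segment: IPI variance, average slope,
    area under the curve (unit spacing) and segment length."""
    bounds = [0, *change_points, len(ipi)]
    feats = []
    for a, b in zip(bounds[:-1], bounds[1:]):
        seg = ipi[a:b]
        slope = np.polyfit(np.arange(len(seg)), seg, 1)[0]
        feats.append([seg.var(), slope, seg.sum(), len(seg)])
    return np.array(feats)

rng = np.random.default_rng(5)
ipi = np.concatenate([rng.normal(30, 1, 100),    # regime A
                      rng.normal(45, 5, 100),    # regime B
                      rng.normal(30, 1, 100)])   # back to regime A
feats = segment_features(ipi, [100, 200])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    PCA(n_components=2).fit_transform(feats))
print(labels)  # the middle segment lands in its own cluster
```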
Interestingly, we estimated the Mutual Information (MI)
between the discretized series as 0.16 bits/s. To test the
significance of this number, we propose the use of bootstrap/surrogate
techniques to test for data snooping. Reshuffling
the IPIs obliterates all patterns and no change point is detected
whatsoever (repeated more than 100 times). This simply
shows that the segmentation is likely not an artifact of chance,
although it does not guarantee any kind of information flow
or correlation between the two segmented IPI series. Based on
White's Reality Check [30], we then shuffled entire segments
of IPIs based on the larger scale detection: in the 100 shuffles
tested, the MI dropped by a factor of 4 or more.
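A sketch of the plug-in MI estimate together with the shuffle surrogate test (synthetic symbol series in which one fish copies the other 70% of the time; this yields MI per symbol, which would be divided by time to obtain bits/s):

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in mutual information (bits) between discrete symbol series."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log2(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

rng = np.random.default_rng(6)
x = rng.integers(0, 3, 2000)                     # symbols of fish 1
copy = rng.random(2000) < 0.7                    # fish 2 copies 70% of the time
y = np.where(copy, x, rng.integers(0, 3, 2000))  # symbols of fish 2

mi = mutual_information(x, y)
# Surrogate test: shuffling destroys the temporal pairing of symbols.
surrogates = [mutual_information(x, rng.permutation(y)) for _ in range(100)]
print(mi, max(surrogates))  # the real MI far exceeds every surrogate
```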
V. CONCLUSION

We revisited three nonparametric methods to solve change
point problems using two different test cases. The E-Divisive
approach reviewed in section II-B was recently proposed as
an efficient and fast method to detect multiple change points
in sequences of multivariate distributions [10].
In the first case, we analyzed the timing of the neural
reaction to (novel) stimuli in the presence of an extremely noisy
environment and out-of-equilibrium signals. By studying firing
rates or sequences of spike trains, we determined, for instance,
that the reaction time of honeybees to different odors is
roughly of the order of 100 ms, and observed that their response
to a mixture of odors has a lower median with higher variance.
In our second example, we have used change point analysis
as an indicator for sequence segmentation that may be
especially suitable for elucidating questions on animal
communication. Since change points split the series into chunks
with homogeneous statistics, these likely qualify as bits of
information being communicated. We also suggest how to
perform segmentation on different time scales and how to test
for data snooping.
Several other applications can be proposed, especially
involving subtle behavioral transitions controlled altogether by
several time scales [3]. Also, the literature on change point
problems is extensive and makes use of several different
topics (control theory, classical/Bayesian estimation, hypothesis
testing, etc.). Thus, this paper only scratches the surface of
possibilities for computational neuroscience and biology.
ACKNOWLEDGMENTS

The authors thank P. Matias for useful discussions. T. S.
Mosqueiro acknowledges support from CNPq 234817/2014-3.
R. Huerta and B. Smith acknowledge partial support from
NIDCD R01DC011422 and NIH/NIGMS R01GM113967.
REFERENCES

[1] J. Chen and A. K. Gupta, Parametric Statistical Change Point Analysis.
Birkhäuser, 2012.
[2] R. G. Endres, Physical Principles in Sensing and Signaling. Oxford:
Oxford University Press, 2013.
[3] T. Mosqueiro, L. de Lecea, and R. Huerta, “Control of sleep-to-wake
transitions via fast amino acid and slow neuropeptide transmission,” New
Journal of Physics, vol. 16, no. 11, p. 115010, 2014.
[4] B. Brodsky and B. Darkhovsky, Nonparametric Methods in Change-Point
Problems. Kluwer Academic Publishers, 1993.
[5] M. Basseville and I. Nikiforov, Detection of Abrupt Changes: Theory and
Application. Prentice Hall, 1993.
[6] J. Fonollosa, I. Rodriguez-Lujan, A. V. Shevade, M. L. Homer, M. A.
Ryan, and R. Huerta, “Human activity monitoring using gas sensor
arrays,” Sensors and Actuators B: Chemical, vol. 199, no. 0, pp. 398–
402, 2014.
[7] M. F. Strube-Bloss, M. P. Nawrot, and R. Menzel, “Mushroom body
output neurons encode odor-reward associations.” The Journal of neu-
roscience : the official journal of the Society for Neuroscience, vol. 31,
no. 8, pp. 3129–40, mar 2011.
[8] R. T. Guariento, T. S. Mosqueiro, A. A. Caputi, and R. D. Pinto,
“A simple model for eletrocommunication: “refractoriness avoidance
response”?” BMC Neuroscience, vol. 15, no. Suppl 1, p. P68, 2014.
[9] M. F. Strube-Bloss, M. A. Herrera-Valdez, and B. H. Smith, “Ensemble
response in mushroom body output neurons of the honey bee outpaces
spatiotemporal odor processing two synapses earlier in the antennal
lobe.” PloS one, vol. 7, no. 11, p. e50322, jan 2012.
[10] D. S. Matteson and N. A. James, “A Nonparametric Approach for
Multiple Change Point Analysis of Multivariate Data,” Journal of the
American Statistical Association, vol. 109, no. 505, pp. 334–345, 2014.
[11] T. Nowotny and R. Huerta, “On the equivalence of hebbian learning and
the svm formalism,” in Information Sciences and Systems (CISS), 2012
46th Annual Conference on. IEEE, 2012, pp. 1–4.
[12] T. S. Mosqueiro. (2016, March) Change point library for non-parametric
statistics. [Online]. Available:
[13] A. Pettitt, “A non-parametric approach to the change-point problem,”
Applied Statistics, pp. 126–135, 1979.
[14] K. Worsley, “An improved bonferroni inequality and applications,”
Biometrika, vol. 69, no. 2, pp. 297–302, 1982.
[15] G. Laurent, “Olfactory network dynamics and the coding of multidi-
mensional signals,” Nature Reviews Neuroscience, vol. 3, pp. 884–895,
[16] T. S. Mosqueiro and R. Huerta, “Computational models to understand
decision making and pattern recognition in the insect brain,” Current
Opinion in Insect Science, vol. 6, no. i, pp. 80–85, dec 2014.
[17] A. Resulaj and D. Rinberg, “Novel behavioral paradigm reveals lower
temporal limits on mouse olfactory decisions,” The Journal of Neuro-
science, vol. 35, no. 33, pp. 11 667–11 673, 2015.
[18] F. B. Rodríguez, R. Huerta, and M. Aylwin, “Neural sensitivity to
odorants in deprived and normal olfactory bulbs,” PloS one, vol. 8, no. 4,
[19] F. B. Rodríguez and R. Huerta, “Techniques for temporal detection of
neural sensitivity to external stimulation,” Biological Cybernetics, vol.
100, no. 4, pp. 289–297, 2009.
[20] H. R. Wilson, Spikes, Decisions, and Actions: The Dynamical Founda-
tions of Neuroscience. Oxford University Press, USA, 1999.
[21] T. Nowotny, R. Huerta, H. D. Abarbanel, and M. I. Rabinovich, “Self-
organization in the olfactory system: one shot odor recognition in
insects,” Biological cybernetics, vol. 93, no. 6, pp. 436–446, 2005.
[22] M. Wachowiak and L. B. Cohen, “Representation of Odorants by
Receptor Neuron Input to the Mouse Olfactory Bulb,” Neuron, vol. 32,
no. 4, pp. 723–735, nov 2001.
[23] G. W. Westby, “Comparative studies of the aggressive behaviour of
two gymnotid electric fish (Gymnotus carapo and Hypopomus artedi).”
Animal behaviour, vol. 23, no. 1, pp. 192–213, feb 1975.
[24] J. J. Jun, A. Longtin, and L. Maler, “Precision measurement of electric
organ discharge timing from freely moving weakly electric fish,Journal
of neurophysiology, vol. 107, no. 7, pp. 1996–2007, 2012.
[25] A. C. Pereira and A. A. Caputi, “Imaging in electrosensory systems,”
Interdisciplinary Sciences: Computational Life Sciences, vol. 2, no. 4,
pp. 291–307, 2010.
[26] C. G. Forlim, R. D. Pinto, P. Varona, and F. B. Rodríguez, “Delay-Dependent
Response in Weakly Electric Fish under Closed-Loop Pulse
Stimulation,” PLOS ONE, vol. 10, no. 10, p. e0141007, oct 2015.
[27] A. A. Caputi, “Timing Self-generated Actions for Sensory Streaming,”
in ICANN 2012. Springer Berlin / Heidelberg, 2012.
[28] P. Matias, J. Frans Willem Slaets, and R. Daniel Pinto, “Individual
discrimination of freely swimming pulse-type electric fish from electrode
array recordings,” Neurocomputing, vol. 153, pp. 191–198, apr 2015.
[29] C. R. Shalizi, “Causal architecture, complexity and self-organization
in time series and cellular automata,” Ph.D. dissertation,
University of Wisconsin–Madison, 2001.
[30] H. White, “A reality check for data snooping,” Econometrica, pp.
1097–1126, 2000.
... Accordingly, Cribben, Wager and Lindquist (2013); Schröder and Ombao (2015); Kirch, Muhsal and Ombao (2015); Gibberd and Nelson (2014); Avanesov et al. (2018); Dai, Zhang and Srivastava (2019); Anastasiou, Cribben and Fryzlewicz (2020) then proposed further methods for estimating FC change points. Mosqueiro et al. (2016); Koepcke, Ashida and Kretzberg (2016); Xiao et al. (2019) also considered change points in spike trains. While these methods are effective, they all are limited in the number of time series that can be considered. ...
Functional magnetic resonance imaging (fMRI) time series data presents a unique opportunity to understand temporal brain connectivity, and models that uncover the complex dynamic workings of this organ are of keen interest in neuroscience. Change point models can capture and reflect the dynamic nature of brain connectivity, however methods that translate well into a high-dimensional context (where $p>>n$) are scarce. To this end, we introduce $\textit{factorized binary search}$ (FaBiSearch), a novel change point detection method in the network structure of multivariate high-dimensional time series. FaBiSearch uses non-negative matrix factorization, an unsupervised dimension reduction technique, and a new binary search algorithm to identify multiple change points. In addition, we propose a new method for network estimation for data between change points. We show that FaBiSearch outperforms another state-of-the-art method on simulated data sets and we apply FaBiSearch to a resting-state and to a task-based fMRI data set. In particular, for the task-based data set, we explore network dynamics during the reading of Chapter 9 in $\textit{Harry Potter and the Sorcerer's Stone}$ and find that change points across subjects coincide with key plot twists. Further, we find that the density of networks was positively related to the frequency of speech between characters in the story. Finally, we make all the methods discussed available in the R package $\textbf{fabisearch}$ on CRAN.
... In addition to detecting periodicity of a time series, which could be correlated to regional or global climate variability [48], detecting in-homogeneities and changes in time series is critically important, as they can reveal the role of any external or internal stimulus that has triggered a shift in a phenomenon [49]. A change point (CP) is defined as the point in time with the highest likelihood that, from it onward, the statistical characteristics of the time series change. ...
This paper presents the development of an evenly spaced volume time series for Lakes Azuei and Enriquillo, both located on the Caribbean island of Hispaniola. The time series is derived from an unevenly spaced Landsat imagery data set, which is then subjected to several imputation methods to construct a gap-filled, uniformly spaced time series amenable to statistical analysis. The volume time series features both gradual and sudden changes, the latter of which are attributed to North Atlantic cyclone activity. Relevant cyclone activity is defined as an event passing within 80 km and having regional monthly rainfall averages higher than a threshold value of 87 mm, causing discontinuities in the lake responses. Discontinuities are accounted for in the imputation algorithm by dividing the time series into two sub-sections: before/after the event. Using leave-p-out cross-validation and computing the NRMSE index, the Stineman interpolation proves to be the best algorithm among the 15 different imputation alternatives that were tested. The final time series features 16-day intervals and is subsequently resampled into one with monthly time steps. Data analyses of the monthly volume change time series show Lake Enriquillo’s seasonal periodicity and its sensitivity to the occurrence of storm events. Response times feature a growth pattern lasting one to two years after an extreme event, followed by a shrinking pattern lasting 5–7 years that returns the lake to its original state. While both lakes show a remarkable long-term increase in size starting in 2005, Lake Azuei is much less sensitive to storm events and instead responds more strongly to changing seasonal rainfall patterns.
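The NRMSE index used above to rank the imputation alternatives can be computed in a few lines. The sketch below normalizes the RMSE by the observed range, one common convention; the abstract does not specify which normalization the authors used.

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Root-mean-square error normalized by the range of the observed
    values (one convention; others divide by the mean)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())
```

For example, `nrmse([0, 4], [1, 3])` gives 0.25: an RMSE of 1 over an observed range of 4.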
... Additionally, humidity sensors are extremely appealing due to the high correlation between humidity levels and human perception of air quality [45,46]. Thus, when combined with other techniques [18,27,35,47,48,49], our model is likely to significantly enhance the performance of chemical detection systems, as, for instance, in home monitoring tasks. Our contribution thus emphasizes the importance of simultaneous recordings of humidity and temperature, and shows that their use is computationally amenable in sensor boards using low-energy micro-controllers. ...
A method for online decorrelation of chemical sensor readings from the effects of environmental humidity and temperature variations is proposed. The goal is to improve the accuracy of electronic nose measurements for continuous monitoring by processing data from simultaneous readings of environmental humidity and temperature. The electronic nose setup built for this study included eight different metal-oxide sensors, plus temperature and humidity sensors, with a wireless communication link to a PC. This wireless electronic nose was used to monitor air for two years in the residence of one of the authors and collected data continuously during 510 full days at a sampling rate of 2 samples per second. To estimate the effects of variations in air humidity and temperature on the chemical sensor readings, we used a standard energy band model for an n-type metal-oxide sensor. The main assumption of the model is that variations in sensor conductivity can be expressed as a nonlinear function of changes in the semiconductor energy bands in the presence of external humidity and temperature variations. Fitting this model to the collected data, we confirmed that the most statistically significant factors are humidity changes and correlated changes of temperature and humidity. This simple model achieves excellent accuracy, with $R^2$ performance close to 1. To show how the humidity-temperature correction model works for gas discrimination, we also collected 100 samples of wine and banana, the goal being to distinguish between wine, banana, and baseline. We show that including the filtered chemical sensor signals improves the performance and reliability of pattern recognition algorithms.
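As a rough illustration of the decorrelation idea, the sketch below uses a linear least-squares stand-in for the paper's nonlinear energy-band model: regress the sensor signal on humidity, temperature, and their product (standing in for the correlated humidity-temperature term found significant above), then subtract the fitted environmental component. The data and coefficients are synthetic.

```python
import numpy as np

def decorrelate(sensor, humidity, temperature):
    """Remove the humidity/temperature-correlated component of a sensor
    signal by least squares; a linear stand-in for the energy-band model."""
    F = np.column_stack([np.ones_like(humidity), humidity, temperature,
                         humidity * temperature])  # incl. correlated H*T term
    coef, *_ = np.linalg.lstsq(F, sensor, rcond=None)
    return sensor - F @ coef + coef[0]  # keep the constant baseline

rng = np.random.default_rng(0)
H = rng.uniform(20.0, 80.0, 200)   # relative humidity (%), synthetic
T = rng.uniform(15.0, 30.0, 200)   # temperature (deg C), synthetic
raw = 2.0 + 0.05 * H + 0.10 * T    # synthetic sensor: baseline 2.0
clean = decorrelate(raw, H, T)     # environmental drift removed
```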
... Due to the generality of the chirp electric signature, it was possible to implement an efficient detector based on a few manually classified samples. This methodology allows the analysis of very long IPI sequences from interacting fish, which is of fundamental importance for investigating electrocommunication and complex hierarchical behaviors (Mosqueiro et al., 2016). ...
Pulse-type weakly electric fish present a rich repertoire of spatio-temporal electrical patterns used in electrolocation and electrocommunication. Common characteristic patterns, such as pulse rate changes, offs, and chirps, are often associated with important behavioral contexts, including aggression, hiding, and mating. However, these behaviors are only observed when at least two fish are freely interacting. Although their electrical pulses can be easily recorded by non-invasive techniques, discriminating the emitter of each pulse is challenging when physically similar fish are allowed to freely move and interact. Here we describe the statistical changes of some communication patterns during a dominance contest of freely moving \textit{Gymnotus carapo} dyads. Quantitative analysis was made possible by home-made software for automated pulse discrimination and chirp detection. In all freely interacting dyads, chirps were signatures of subsequent submission, even when they occurred early in the contest. Offs, however, were not exclusive to the submissive fish, but were more frequent and longer in it. Both results agree with previously reported manual analyses, validating our automated analysis. We show that, in all experiments performed, the submissive fish slows down its average pulse rate while the dominant fish keeps its rate almost unchanged during and after dominance is established. Additionally, we analyzed whether direct interference between electric organs could cause offs and chirps, but none were found when fish were forcibly kept touching each other, regardless of their relative position or interaction time.
Sequential change-point detection from time series data is a common problem in many neuroscience applications, such as seizure detection, anomaly detection, and pain detection. In our previous work (Chen Z, Zhang Q, Tong AP, Manders TR, Wang J. J Neural Eng 14: 036023, 2017), we developed a latent state-space model, known as the Poisson linear dynamical system, for detecting abrupt changes in neuronal ensemble spike activity. In online brain-machine interface (BMI) applications, a recursive filtering algorithm is used to track the changes in the latent variable. However, previous methods have been restricted to Gaussian dynamical noise and have used Gaussian approximation for the Poisson likelihood. To improve the detection speed, we introduce non-Gaussian dynamical noise for modeling a stochastic jump process in the latent state space. To efficiently estimate the state posterior that accommodates non-Gaussian noise and non-Gaussian likelihood, we propose particle filtering and smoothing algorithms for the change-point detection problem. To speed up the computation, we implement the proposed particle filtering algorithms using advanced graphics processing unit computing technology. We validate our algorithms, using both computer simulations and experimental data for acute pain detection. Finally, we discuss several important practical issues in the context of real-time closed-loop BMI applications. NEW & NOTEWORTHY Sequential change-point detection is an important problem in closed-loop neuroscience experiments. This study proposes novel sequential Monte Carlo methods to quickly detect the onset and offset of a stochastic jump process that drives the population spike activity. This new approach is robust with respect to spike sorting noise and varying levels of signal-to-noise ratio. The GPU implementation of the computational algorithm allows for parallel processing in real time.
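A much simpler sequential detector than the particle filter described above, a one-sided Poisson CUSUM, illustrates the change-point setting on spike counts. The rates mu0 and mu1 and the alarm threshold below are illustrative choices, not values from the paper.

```python
import numpy as np

def cusum_poisson(counts, mu0, mu1, threshold=5.0):
    """One-sided CUSUM on Poisson spike counts: accumulate the
    log-likelihood ratio for rate mu1 vs. mu0, clamp at zero, and raise
    an alarm when the statistic exceeds the threshold. A far simpler
    stand-in for the particle-filter detector described above."""
    s = 0.0
    for t, x in enumerate(counts):
        s = max(0.0, s + x * np.log(mu1 / mu0) - (mu1 - mu0))
        if s > threshold:
            return t  # first time step at which the change is declared
    return None       # no change detected

# Counts jump from rate 2 to rate 8 at t = 50; the alarm fires promptly.
alarm = cusum_poisson([2] * 50 + [8] * 50, mu0=2.0, mu1=8.0)
```

The clamping at zero restarts the statistic whenever the evidence favors the pre-change rate, which is what makes the detector sequential rather than retrospective.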
Bioinspired Neural Networks have in many instances paved the way for significant discoveries in Statistical and Machine Learning. Among the many mechanisms employed by biological systems to implement learning, gain control is a ubiquitous and essential component that guarantees a standard representation of patterns for improved performance in pattern recognition tasks. Gain control is particularly important for the identification of different odor molecules regardless of their concentration. In this paper, we explore the functional impact of a biologically plausible model of gain control on classification performance by representing the olfactory system of insects with a Single Hidden Layer Network (SHLN). Common to all insects, the primary olfactory pathway starts at the Antennal Lobes (ALs), and odor identity is then computed at the output of the Mushroom Bodies (MBs). We show that gain control based on lateral inhibition in the Antennal Lobe robustly solves the classification of highly concentrated odors. Furthermore, the proposed mechanism does not depend on learning at the AL level, in agreement with the biological literature. Due to its simplicity, this bioinspired mechanism may not only be present in other neural systems but can also be further explored for applications, for instance, involving electronic noses.
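A minimal sketch of gain control via divisive normalization, a common mathematical stand-in for lateral inhibition (this is not the paper's SHLN model, and sigma is a hypothetical saturation constant): each unit's response is divided by the summed population activity, so the relative odor pattern is preserved while the overall magnitude saturates across concentrations.

```python
import numpy as np

def gain_control(activity, sigma=1.0):
    """Divisive normalization: suppress each unit by the summed activity
    of the population. sigma (hypothetical) prevents division by zero
    and sets where the response begins to saturate."""
    a = np.asarray(activity, dtype=float)
    return a / (sigma + a.sum())

odor = np.array([1.0, 2.0, 3.0])          # glomerular response pattern (toy)
weak = gain_control(odor)                  # low concentration
strong = gain_control(100.0 * odor)        # same odor, 100x concentration
```

After normalization the two concentrations yield the same relative pattern, which is the concentration-invariant code the downstream classifier needs.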
In this paper, we apply a real-time activity-dependent protocol to study how freely swimming weakly electric fish produce and process the timing of their own electric signals. Specifically, we carry out this study in the elephant fish, Gnathonemus petersii, an animal that uses weak discharges to locate obstacles or food while navigating, as well as for electro-communication with conspecifics. To investigate how the inter-pulse intervals vary in response to external stimuli, we compare the response to a simple closed-loop stimulation protocol with the signals generated without electrical stimulation. The activity-dependent stimulation protocol explores different stimulus delivery delays relative to the fish's own electric discharges. We show that there is a critical time delay in this closed-loop interaction, as the largest changes in inter-pulse intervals occur when the stimulation delay is below 100 ms. We also discuss the implications of these findings in the context of information processing in weakly electric fish.
Pulse-type weakly electric fishes communicate through electrical discharges with a stereotyped waveform, varying solely the interval between pulses according to the information being transmitted. This simple codification mechanism is similar to the one found in various known neuronal circuits, which renders these animals good models for the study of natural communication systems, allowing experiments involving behavioral and neuroethological aspects. Analyzing data collected from more than one freely swimming fish is a challenge, since the detected electric organ discharge (EOD) patterns depend on each animal's position and orientation relative to the electrodes. However, since each fish emits a characteristic EOD waveform, computational tools can be employed to match each EOD to the respective fish. In this paper we describe a computational method able to recognize fish EODs from dyads using normalized feature vectors obtained by applying Fourier and dual-tree complex wavelet packet transforms. We employ support vector machines as classifiers, and a continuity constraint algorithm allows us to solve issues caused by overlapping EODs and signal saturation. Extensive validation procedures with Gymnotus sp. showed that EODs can be assigned correctly to each fish with only two errors per million discharges.
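A toy sketch of the waveform-assignment idea, using amplitude-normalized Fourier magnitude features and a nearest-template rule in place of the paper's dual-tree wavelet packet features and SVM classifier; the waveforms and fish labels below are synthetic.

```python
import numpy as np

def eod_features(waveform):
    """Amplitude-invariant Fourier magnitude features, a simplified
    stand-in for the Fourier + dual-tree complex wavelet packet
    feature vectors used in the paper."""
    mag = np.abs(np.fft.rfft(waveform))
    return mag / np.linalg.norm(mag)

def assign_eod(waveform, templates):
    """Assign a discharge to the fish with the nearest feature template
    (a nearest-centroid stand-in for the paper's SVM classifier)."""
    f = eod_features(waveform)
    return min(templates, key=lambda fish: np.linalg.norm(f - templates[fish]))

# Two toy EOD waveforms with distinct spectral content.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
fish_a = np.sin(2 * np.pi * 5 * t) * np.exp(-5 * t)
fish_b = np.sin(2 * np.pi * 12 * t) * np.exp(-5 * t)
templates = {"A": eod_features(fish_a), "B": eod_features(fish_b)}
```

Because the features are normalized, a rescaled copy of a discharge (e.g. recorded at a different distance from the electrodes) still maps to the correct fish.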
Neural correlates of learning and memory formation have been reported at different stages of the olfactory pathway in both vertebrates and invertebrates. However, the contribution of different neurons to the formation of a memory trace is little understood. Mushroom bodies (MBs) in the insect brain are higher-order structures involved in integration of olfactory, visual, and mechanosensory information and in memory formation. Here we focus on the ensemble spiking activity of single MB output neurons (ENs) when honeybees learned to associate an odor with reward. A large group of ENs (∼50%) changed their odor response spectra by losing or gaining sensitivity for specific odors. This response switching was dominated by the rewarded stimulus (CS+), which evoked exclusively recruitment. The remaining ENs did not change their qualitative odor spectrum but modulated their tuning strength, again dominated by increased responses to the CS+. While the bees showed a conditioned response (proboscis extension) after a few acquisition trials, no short-term effects were observed in the neuronal activity. In both EN types, associative plastic changes occurred only during retention 3 h after conditioning. Thus, long-term but not short-term memory was reflected by increased EN activity to the CS+. During retention, the EN ensemble separated the CS+ most differently from the CS- and control odors ∼140 ms after stimulus onset. The learned behavioral response appeared ∼330 ms later. It is concluded that after memory consolidation, the ensemble activity of the MB output neurons predicts the meaning of the stimulus (reward) and may provide the prerequisite for the expression of the learned behavior.
Neural representations of odors are subject to computations that involve sequentially convergent and divergent anatomical connections across different areas of the brain in both mammals and insects. Furthermore, in both mammals and insects higher order brain areas are connected via feedback connections. In order to understand the transformations and interactions that this connectivity makes possible, an ideal experiment would compare neural responses across different, sequential processing levels. Here we present results of recordings from a first-order olfactory neuropil - the antennal lobe (AL) - and a higher-order multimodal integration and learning center - the mushroom body (MB) - in the honey bee brain. We recorded projection neurons (PN) of the AL and extrinsic neurons (EN) of the MB, which provide the outputs from the two neuropils. Recordings at each level were made in different animals in some experiments and simultaneously in the same animal in others. We presented two odors and their mixture to compare odor response dynamics as well as classification speed and accuracy at each neural processing level. Surprisingly, the EN ensemble starts separating odor stimuli rapidly, before the PN ensemble has reached significant separation. Furthermore, the EN ensemble at the MB output reaches a maximum separation of odors between 84 and 120 ms after odor onset, which is 26 to 133 ms faster than the maximum separation at the AL output ensemble, two synapses earlier in processing. It is likely that a subset of very fast PNs, which respond before the ENs, may initiate the rapid EN ensemble response. We suggest therefore that the timing of the EN ensemble activity would allow retroactive integration of its signal into the ongoing computation of the AL via centrifugal feedback.
The Locus Coeruleus (LC) modulates cortical, subcortical, cerebellar, brainstem, and spinal cord circuits, and it expresses receptors for neuromodulators that operate on a time scale of several seconds. Evidence from anatomical, electrophysiological, and optogenetic experiments has shown that LC neurons receive input from a group of neurons called Hypocretins (HCRTs) that release a neuropeptide called hypocretin. It is less well known how these two groups of neurons can be coregulated by GABAergic neurons. Since the time scale of GABA-A inhibition is several orders of magnitude faster than the hypocretin neuropeptide effect, we investigate the limits of circuit activity regulation using a realistic model of neurons. Our investigation shows that GABA-A inhibition is insufficient to control the activity levels of the LC. Although slower forms of GABA-A inhibition can in principle work, they are implausible given the low probability of the presence of slow GABA-A receptors and the lack of robust stability at maximum firing frequencies. The best possible control mechanism predicted by our modeling analysis is the presence of inhibitory neuropeptides that exert effects on a time scale similar to that of hypocretin/orexin. Although the nature of these inhibitory neuropeptides has not been identified yet, they provide the most efficient mechanism in the modeling analysis. Finally, we present a reduced mean-field model that captures the dynamics and the phenomena generated by this circuit. This investigation shows that brain communication involving multiple time scales can be better controlled by employing orthogonal mechanisms of neural transmission to decrease interference between cognitive processes and hypothalamic functions.
Temporal limits on perceptual decisions set strict boundaries on the possible underlying neural computations. How odor information is encoded in the olfactory system is still poorly understood. Here, we sought to define the limit on the speed of olfactory processing. To achieve this, we trained mice to discriminate different odor concentrations in a novel behavioral setup with precise odor delivery synchronized to the sniffing cycle. Mice reported their choice by moving a horizontal treadmill with their front limbs. We found that mice reported discriminations with 75% accuracy 70-90 ms after odor inhalation. For a low concentration and a nontrigeminal odorant, this time was 90-140 ms, showing that mice process odor information rapidly even in the absence of trigeminal stimulation. These response times establish, after accounting for odor transduction and motor delays, that olfactory processing can take tens of milliseconds. This study puts a strong limit on the underlying neural computations and suggests that the action potentials forming the neural basis for these decisions are fired in a few tens of milliseconds. Understanding how sensory information is processed requires different approaches that span multiple levels of investigation, from genes to neurons to behavior. Limits on behavioral performance constrain the possible neural mechanisms responsible for specific computations. Using a novel behavioral paradigm, we established that mice can make decisions about odor intensity surprisingly fast. After accounting for sensory and motor delays, the limit on some olfactory neural computations can be as low as a few tens of milliseconds, which suggests that only the first action potentials across a population of neurons contribute to these computations.