ABSTRACT: An important question in neuroscience is understanding the relationship between high-dimensional electrophysiological data and complex, dynamic behavioral data. One general strategy to address this problem is to define a low-dimensional representation of essential cognitive features describing this relationship. Here we describe a general state-space method to model and fit a low-dimensional cognitive state process that allows us to relate behavioral outcomes of various tasks to simultaneously recorded neural activity across multiple brain areas. In particular, we apply this model to data recorded in the lateral prefrontal cortex (PFC) and caudate nucleus of non-human primates as they perform learning and adaptation in a rule-switching task. First, we define a model for a cognitive state process related to learning, and estimate the progression of this learning state through the experiments. Next, we formulate a point process generalized linear model to relate the spiking activity of each PFC and caudate neuron to the estimated learning state. Then, we compute the posterior densities of the cognitive state using a recursive Bayesian decoding algorithm. We demonstrate that accurate decoding of a learning state is possible with a simple point process model of population spiking. Our analyses also allow us to compare decoding accuracy across neural populations in the PFC and caudate nucleus.
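The pipeline outlined in the abstract above (latent state model, point process GLM observation model, recursive Bayesian decoder) can be sketched with a toy grid-based filter. Everything here — the random-walk state, the log-linear tuning curves, and all the numbers — is a hypothetical stand-in for the paper's actual learning-state model:

```python
import numpy as np

rng = np.random.default_rng(1)
T, dt = 400, 0.05                        # time bins, bin width (s)
x = np.cumsum(rng.normal(0, 0.05, T))    # latent "learning state" (random walk)

# point process GLM: each neuron's rate is log-linear in the state
beta0 = rng.uniform(1.0, 2.0, 8)         # baseline log rates, 8 neurons
beta1 = rng.uniform(-1.5, 1.5, 8)        # state modulation
spikes = rng.poisson(np.exp(beta0 + beta1 * x[:, None]) * dt)   # T x 8 counts

# recursive Bayesian decoder on a discretized state grid
grid = np.linspace(x.min() - 1, x.max() + 1, 200)
lam = np.exp(beta0 + beta1 * grid[:, None]) * dt                # grid x neurons
trans = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 0.05) ** 2)
post = np.full(grid.size, 1.0 / grid.size)
est = np.empty(T)
for t in range(T):
    pred = trans @ post                          # one-step prediction
    loglik = (spikes[t] * np.log(lam) - lam).sum(axis=1)
    post = pred * np.exp(loglik - loglik.max())  # Poisson observation update
    post /= post.sum()
    est[t] = grid @ post                         # posterior mean estimate
```

With even this small simulated population, the posterior mean tracks the latent state, which is the qualitative result the abstract reports for real PFC and caudate populations.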
ABSTRACT: Point process filters have been applied successfully to decode neural signals and track neural dynamics. Traditionally these methods assume that multiunit spiking activity has already been correctly spike-sorted. As a result, these methods are not appropriate for situations where sorting cannot be performed with high precision, such as real-time decoding for brain-computer interfaces. Because the unsupervised spike-sorting problem remains unsolved, we took an alternative approach that takes advantage of recent insights into clusterless decoding. Here we present a new point process decoding algorithm that does not require multiunit signals to be sorted into individual units. We use the theory of marked point processes to construct a function that characterizes the relationship between a covariate of interest (in this case, the location of a rat on a track) and features of the spike waveforms. In our example, we use tetrode recordings, and the marks represent a four-dimensional vector of the maximum amplitudes of the spike waveform on each of the four electrodes. In general, the marks may represent any features of the spike waveform. We then use Bayes's rule to estimate spatial location from hippocampal neural activity. We validate our approach with a simulation study and experimental data recorded in the hippocampus of a rat moving through a linear environment. Our decoding algorithm accurately reconstructs the rat's position from unsorted multiunit spiking activity. We then compare the quality of our decoding algorithm to that of a traditional spike-sorting and decoding algorithm. Our analyses show that the proposed decoding algorithm performs equivalently to or better than algorithms based on sorted single-unit activity.
These results provide a path toward accurate real-time decoding of spiking patterns that could be used to carry out content-specific manipulations of population activity in hippocampus or elsewhere in the brain.
Full-text · Article · May 2015 · Neural Computation
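A minimal sketch of the clusterless idea in the abstract above: decode position from spikes tagged with a waveform mark, with no sorting step. The two "units", their place fields, the scalar amplitude mark (the paper uses a four-dimensional tetrode amplitude vector), and the uniform-occupancy assumption are all illustrative simplifications:

```python
import numpy as np

rng = np.random.default_rng(2)
# ---- encoding data: spikes tagged with (position, waveform mark) ----
centers, width, amps = np.array([0.25, 0.75]), 0.08, np.array([50.0, 100.0])
enc_pos, enc_mark = [], []
for c, a in zip(centers, amps):
    enc_pos.append(rng.normal(c, width, 300))   # spike positions (place field)
    enc_mark.append(rng.normal(a, 5.0, 300))    # spike amplitudes for this unit
enc_pos = np.concatenate(enc_pos)
enc_mark = np.concatenate(enc_mark)

# ---- clusterless decoding: kernel estimate of the joint mark intensity ----
grid = np.linspace(0, 1, 200)

def decode(mark):
    """Posterior over position for one spike with the given mark, assuming
    uniform occupancy, so the intensity is proportional to the kernel
    density over all encoding spikes (no spike-sorting anywhere)."""
    kx = np.exp(-0.5 * ((grid[:, None] - enc_pos[None, :]) / 0.05) ** 2)
    km = np.exp(-0.5 * ((mark - enc_mark) / 5.0) ** 2)
    post = kx @ km
    return post / post.sum()

post_hi = decode(100.0)   # large-amplitude spike -> second unit's field
post_lo = decode(50.0)    # small-amplitude spike -> first unit's field
```

The mark does the work a spike-sorter would otherwise do: a 100 μV spike concentrates the posterior near 0.75, a 50 μV spike near 0.25.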
ABSTRACT: The signal-to-noise ratio (SNR), a commonly used measure of fidelity in physical systems, is defined as the ratio of the squared amplitude or variance of a signal relative to the variance of the noise. This definition is not appropriate for neural systems in which spiking activity is more accurately represented using point processes. We show that the SNR estimates a ratio of expected prediction errors and extend the standard definition to one appropriate for single neurons by representing neural spiking activity using point process generalized linear models (PP-GLM). We estimate the prediction errors using the residual deviances from the PP-GLM fits. Because the deviance is an approximate chi-squared random variable whose expected value is the number of degrees of freedom, we compute a bias-corrected SNR estimate appropriate for single neuron analysis and use the bootstrap to assess its uncertainty. In the analysis of four systems neuroscience experiments, we show that the SNRs are -10 to -3 dB for rat auditory cortex, -18 to -7 dB for rat thalamus, -28 to -14 dB for monkey hippocampus and -29 to -20 dB for human subthalamic neurons. The new SNR definition makes explicit, in the measure commonly used for physical systems, the often-quoted observation that single neurons have low SNRs. As a corollary of this observation, we use the SNR analysis to show that the neuron's spiking history is often a more informative covariate for predicting spiking propensity than the applied stimulus. Our new SNR definition can be extended to analyze the SNR of neuronal ensembles as well as of any system in which modulation of the system's response by distinct signal components can be expressed as separate components of a likelihood function.
Full-text · Article · Mar 2015 · Proceedings of the National Academy of Sciences
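The deviance-based estimator described in the abstract above can be sketched with a simulated neuron and a Poisson GLM fit by iteratively reweighted least squares. The stimulus, rates, and simple deviance ratio below are illustrative; the paper additionally applies a chi-squared bias correction based on the models' degrees of freedom, which this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
stim = np.sin(2 * np.pi * np.arange(n) / 250)     # a periodic stimulus covariate
y = rng.poisson(np.exp(-2.5 + 1.0 * stim))        # spike counts per bin

def poisson_deviance(y, X, iters=25):
    """Fit a Poisson GLM by iteratively reweighted least squares
    and return the residual deviance."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu              # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    mu = np.exp(X @ beta)
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(y > 0, y * np.log(y / mu), 0.0)
    return 2 * np.sum(term - (y - mu))

dev_null = poisson_deviance(y, np.ones((n, 1)))                     # no stimulus
dev_full = poisson_deviance(y, np.column_stack([np.ones(n), stim])) # with stimulus
# uncorrected deviance-ratio SNR in dB (prediction error explained by the
# stimulus relative to what remains unexplained)
snr_db = 10 * np.log10((dev_null - dev_full) / dev_full)
```

Even with a fairly strong simulated stimulus effect, the estimate lands well below 0 dB, consistent with the low single-neuron SNRs the abstract reports.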
ABSTRACT: We propose a time-frequency representation based on the ridges of the continuous chirplet transform to identify both fast transients and components with well-defined instantaneous frequency in noisy data. At each chirplet modulation rate, every ridge corresponds to a territory in the time-frequency plane such that the territories form a partition of the time-frequency plane. For many signals containing sparse signal components, ridge length and ridge stability are maximized when the analysis kernel is adapted to the signal content. These properties provide opportunities for enhancing signal in noise and for elementary stream segregation by combining information across multiple analysis chirp rates.
Full-text · Article · Oct 2014 · IEEE Transactions on Signal Processing
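The core claim in the abstract above — that the analysis is sharpest when the chirp rate of the kernel matches the signal content — can be illustrated with a simple demodulation experiment. The signal, rates, and concentration score below are hypothetical choices, not the paper's ridge extraction procedure:

```python
import numpy as np

fs, dur = 1000, 2.0
t = np.arange(0, dur, 1 / fs)
f0, beta = 40.0, 60.0                       # start frequency (Hz), chirp rate (Hz/s)
x = np.cos(2 * np.pi * (f0 * t + 0.5 * beta * t ** 2))   # linear chirp

def tone_concentration(x, c):
    """Demodulate by a chirp of rate c; if c matches the signal's chirp
    rate the positive-frequency component collapses to a pure tone, so
    the peak of its spectrum (our concentration score) is maximal."""
    z = x * np.exp(-1j * np.pi * c * t ** 2)
    return np.abs(np.fft.fft(z)).max()

rates = np.linspace(0, 120, 61)             # candidate analysis chirp rates
best = rates[np.argmax([tone_concentration(x, c) for c in rates])]
```

Sweeping the analysis rate and keeping the value that maximizes concentration is a one-dimensional analogue of adapting the chirplet kernel across multiple chirp rates.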
ABSTRACT: The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas. Although functional network inference is commonly employed to analyze neural time series data, techniques to assess the uncertainty, both in the functional network edges and in the corresponding aggregate measures of network topology, are lacking. To address this, we describe a statistically principled approach for computing uncertainty in functional networks and aggregate network measures in task-related data. The approach is based on a resampling procedure that utilizes the trial structure common in experimental recordings. We show in simulations that this approach successfully identifies functional networks and associated measures of confidence emergent during a task in a variety of scenarios, including dynamically evolving networks. In addition, we describe a principled technique for establishing functional networks based on predetermined regions of interest using canonical correlation. Doing so provides additional robustness to the functional network inference. Finally, we illustrate the use of these methods on example invasive brain voltage recordings collected during an overt speech task. The general strategy described here, appropriate for static and dynamic network inference and different statistical measures of coupling, permits the evaluation of confidence in network measures in a variety of settings common to neuroscience.
Full-text · Article · Mar 2014 · Frontiers in Computational Neuroscience
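The trial-resampling idea in the abstract above can be sketched in a few lines: resample trials with replacement and recompute the coupling statistic to get a confidence interval on a network edge. The two-channel simulation and the trial-averaged correlation statistic are illustrative stand-ins for the paper's coupling measures:

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_time = 60, 200
# simulated two-channel task data: a shared signal drives both channels
shared = rng.normal(size=(n_trials, n_time))
ch1 = shared + 0.8 * rng.normal(size=(n_trials, n_time))
ch2 = 0.7 * shared + 0.8 * rng.normal(size=(n_trials, n_time))

def coupling(a, b):
    """Trial-averaged correlation between two channels (one network edge)."""
    return np.mean([np.corrcoef(a[i], b[i])[0, 1] for i in range(a.shape[0])])

obs = coupling(ch1, ch2)
# resample trials with replacement to quantify uncertainty in the edge
boots = []
for _ in range(500):
    idx = rng.integers(0, n_trials, n_trials)
    boots.append(coupling(ch1[idx], ch2[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
```

Because resampling is done over whole trials, the within-trial temporal structure of each channel is preserved, which is what makes the procedure appropriate for task-structured recordings.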
ABSTRACT: A fundamental issue in neuroscience is how to identify the multiple biophysical mechanisms through which neurons generate observed patterns of spiking activity. In previous work, we proposed a method for linking observed patterns of spiking activity to specific biophysical mechanisms based on a state space modeling framework and a sequential Monte Carlo, or particle filter, estimation algorithm. We have shown, in simulation, that this approach is able to identify a space of simple biophysical models that were consistent with observed spiking data (and included the model that generated the data), but have yet to demonstrate the application of the method to identify realistic currents from real spike train data. Here, we apply the particle filter to spiking data recorded from rat layer V cortical neurons, and correctly identify the dynamics of a slow, intrinsic current. The underlying intrinsic current is successfully identified in four distinct neurons, even though the cells exhibit two distinct classes of spiking activity: regular spiking and bursting. This approach - linking statistical, computational, and experimental neuroscience - provides an effective technique to constrain detailed biophysical models to specific mechanisms consistent with observed spike train data.
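A minimal bootstrap particle filter of the kind the abstract above builds on, reduced to its skeleton: propagate, weight by the spike likelihood, resample. The hidden quantity here is a drifting log firing rate, a deliberately simple stand-in for the conductance-based model parameters the paper actually tracks:

```python
import numpy as np

rng = np.random.default_rng(5)
T, dt, n_part = 300, 0.02, 2000
# latent log firing rate: slow random walk (stand-in for a hidden
# biophysical parameter)
x = np.cumsum(rng.normal(0, 0.05, T)) + 3.0
spikes = rng.poisson(np.exp(x) * dt)

particles = rng.normal(3.0, 0.5, n_part)
est = np.empty(T)
for i in range(T):
    # propagate each particle through the state model
    particles = particles + rng.normal(0, 0.05, n_part)
    # weight by the Poisson spike likelihood
    lam = np.exp(particles) * dt
    logw = spikes[i] * np.log(lam) - lam
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[i] = particles @ w                       # posterior mean
    # multinomial resampling to combat weight degeneracy
    particles = particles[rng.choice(n_part, n_part, p=w)]
```

The same propagate/weight/resample loop carries over unchanged when the state is a vector of biophysical parameters and the likelihood comes from a mechanistic neuron model.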
ABSTRACT: Understanding the role of rhythmic dynamics in normal and diseased brain function is an important area of research in neural electrophysiology. Identifying and tracking changes in rhythms associated with spike trains present an additional challenge, because standard approaches for continuous-valued neural recordings, such as local field potential, magnetoencephalography, and electroencephalography data, require assumptions that do not typically hold for point process data. Additionally, subtle changes in the history-dependent structure of a spike train have been shown to lead to robust changes in rhythmic firing patterns. Here, we propose a point process modeling framework to characterize the rhythmic spiking dynamics in spike trains, test for statistically significant changes to those dynamics, and track the temporal evolution of such changes. We first construct a two-state point process model incorporating spiking history and develop a likelihood ratio test to detect changes in the firing structure. We then apply adaptive state-space filters and smoothers to track these changes through time. We illustrate our approach with a simulation study as well as with experimental data recorded in the subthalamic nucleus of Parkinson's patients performing an arm movement task. Our analyses show that during the arm movement task, neurons underwent a complex pattern of modulation of spiking intensity characterized initially by a release of inhibitory control at 20-40 ms after a spike, followed by a decrease in excitatory influence at 40-60 ms after a spike.
Full-text · Article · Dec 2013 · Chaos (Woodbury, N.Y.)
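The likelihood ratio test at the heart of the abstract above can be sketched in its simplest two-state form: one firing rate versus separate rates before and during a movement epoch. The history-dependent structure of the paper's model is omitted here; the rates and epoch lengths are made up for illustration:

```python
import math
import numpy as np

rng = np.random.default_rng(6)
# simulated spike counts before and during a movement epoch
pre = rng.poisson(2.0, 400)
move = rng.poisson(2.8, 400)

def pois_ll(y, mu):
    """Poisson log-likelihood up to the constant y! term."""
    return np.sum(y * np.log(mu) - mu)

y = np.concatenate([pre, move])
ll_null = pois_ll(y, y.mean())                        # one shared rate
ll_alt = pois_ll(pre, pre.mean()) + pois_ll(move, move.mean())
lr = 2 * (ll_alt - ll_null)                           # ~ chi2(1) under H0
p = math.erfc(math.sqrt(lr / 2))                      # chi2(1) tail probability
```

The chi-squared(1) tail probability uses the identity P(chi2(1) > x) = erfc(sqrt(x/2)); with a genuine rate change the statistic is large and the null is rejected.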
ABSTRACT: Grid cells in the medial entorhinal cortex fire in an array of locations falling on the vertices of tightly packed equilateral triangles as an animal explores an environment. This is thought to provide a dense neural representation of the environment. This unique, spatially tuned firing pattern has led to grid cells being implicated in spatial memory and navigation. With increasing emphasis being placed on understanding how the brain mediates path integration, understanding the firing properties of grid cells is a chief goal. Despite this importance, however, there is still a paucity of rigorous statistical techniques for analyzing the firing fields of grid cells and, therefore, it remains very challenging to test hypotheses about how the grid cell spatial firing pattern may change under different manipulations. In order to address this issue, we have developed a technique to determine multiple features of grid cells via automated likelihood fitting. Specifically, we constructed point process models that describe the spiking of each grid cell as a function of the positions of a rat in an open field environment. The statistical model incorporates a number of unknown parameters including parameters related to grid spacing, field size, position, and orientation, among others. Using both simulated data from mechanistic models of grid cells and real data recorded from the medial entorhinal cortex while a rat foraged for food, we computed multidimensional likelihood surfaces as functions of the statistical model parameters. We show that these likelihoods have a complicated, multi-modal structure. We show, in simulation, that with an appropriate initial guess of the model parameters based on simple visualization tools, a gradient ascent procedure will reliably attain the global maximum of the likelihood surface.
We use the likelihood to estimate the parameters that best define each grid cell’s firing field, and construct confidence intervals. Using these methods, we also define a maximum likelihood ratio test to determine significant differences between grid firing fields. We have developed MATLAB code to implement these methods, which can be adapted to most experimental preparations.
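A one-parameter slice of the likelihood fitting described in the abstract above: simulate spikes from an idealized grid rate map and recover the spacing by maximizing the Poisson log-likelihood. The parametric form (three plane waves 60 degrees apart, with orientation and phase fixed) and every numeric value are simplifying assumptions; the paper fits spacing, field size, position, and orientation jointly:

```python
import numpy as np

rng = np.random.default_rng(7)
n, dt = 20000, 0.02
pos = rng.uniform(0, 1, (n, 2))              # random foraging positions (m)

def grid_rate(pos, spacing, peak=20.0):
    """Idealized grid-cell rate map: sum of three plane waves 60 deg
    apart (orientation and phase fixed here for simplicity)."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)
    g = sum(np.cos(k * (pos[:, 0] * np.cos(th) + pos[:, 1] * np.sin(th)))
            for th in (0, np.pi / 3, 2 * np.pi / 3))
    return peak * np.exp(0.3 * (g - 1.5))

true_s = 0.4
spikes = rng.poisson(grid_rate(pos, true_s) * dt)

def loglik(spacing):
    lam = grid_rate(pos, spacing) * dt
    return np.sum(spikes * np.log(lam) - lam)

cand = np.linspace(0.25, 0.6, 36)            # candidate spacings
best = cand[np.argmax([loglik(s) for s in cand])]
```

Plotting `loglik` over `cand` shows secondary bumps at harmonically related spacings, a one-dimensional glimpse of the multi-modal likelihood structure the abstract describes.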
ABSTRACT: Brain voltage activity displays distinct neuronal rhythms spanning a wide frequency range. How rhythms of different frequency interact - and the function of these interactions - remains an active area of research. Many methods have been proposed to assess the interactions between different frequency rhythms, in particular measures that characterize the relationship between the phase of a low frequency rhythm and the amplitude envelope of a high frequency rhythm. However, an optimal analysis method to assess this cross-frequency coupling (CFC) does not yet exist.
Here we describe a new procedure to assess CFC that utilizes the generalized linear modeling (GLM) framework.
We illustrate the utility of this procedure in three synthetic examples. The proposed GLM-CFC procedure allows a rapid and principled assessment of CFC with confidence bounds, scales with the intensity of the CFC, and accurately detects biphasic coupling.
Compared to existing methods, the proposed GLM-CFC procedure is easily interpretable, possesses confidence intervals that are easy and efficient to compute, and accurately detects biphasic coupling.
The GLM-CFC statistic provides a method for accurate and statistically rigorous assessment of CFC.
No preview · Article · Sep 2013 · Journal of Neuroscience Methods
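The regression view of CFC described in the abstract above can be sketched with a synthetic coupled signal: regress the fast rhythm's amplitude envelope on sinusoids of the slow rhythm's phase. A Gaussian linear model with one harmonic stands in for the paper's GLM framework, and the known simulated phase is used directly (real data would require band-pass filtering first); all frequencies and the 0.4 modulation depth are arbitrary choices:

```python
import numpy as np

fs = 1000
t = np.arange(0, 20, 1 / fs)
phase = 2 * np.pi * 6 * t                      # 6 Hz "theta" phase
env = 1 + 0.4 * np.cos(phase)                  # amplitude tied to phase
x_fast = env * np.cos(2 * np.pi * 80 * t)      # 80 Hz "gamma" carrier

# amplitude envelope via the analytic signal (FFT-based Hilbert transform)
n = t.size
h = np.zeros(n)
h[0] = 1
h[1:(n + 1) // 2] = 2
if n % 2 == 0:
    h[n // 2] = 1
amp = np.abs(np.fft.ifft(np.fft.fft(x_fast) * h))

# regress the envelope on sinusoids of the low-frequency phase
X = np.column_stack([np.ones(n), np.cos(phase), np.sin(phase)])
coef, *_ = np.linalg.lstsq(X, amp, rcond=None)
modulation = np.hypot(coef[1], coef[2]) / coef[0]   # CFC strength estimate
```

The fitted modulation recovers the simulated coupling depth, and because it comes from a regression, confidence bounds follow from standard linear-model machinery — the property the abstract emphasizes.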
ABSTRACT: Electrical neurostimulation techniques, such as deep brain stimulation (DBS) and transcranial magnetic stimulation (TMS), are increasingly used in the neurosciences, e.g., for studying brain function, and for neurotherapeutics, e.g., for treating depression, epilepsy, and Parkinson's disease. The characterization of electrical properties of brain tissue has guided our fundamental understanding and application of these methods, from electrophysiologic theory to clinical dosing metrics. Nonetheless, prior computational models have primarily relied on ex-vivo impedance measurements. We recorded the in-vivo impedances of brain tissues during neurosurgical procedures and used these results to construct MRI-guided computational models of TMS and DBS neurostimulatory fields and conductance-based models of neurons exposed to stimulation. We demonstrated that tissues carry neurostimulation currents through frequency dependent resistive and capacitive properties not typically accounted for by past neurostimulation modeling work. We show that these fundamental brain tissue properties can have significant effects on the neurostimulatory fields (capacitive and resistive current composition and spatial/temporal dynamics) and neural responses (stimulation threshold, ionic currents, and membrane dynamics). These findings highlight the importance of tissue impedance properties on neurostimulation and impact our understanding of the biological mechanisms and technological potential of neurostimulatory methods.
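The frequency-dependent resistive/capacitive behavior the abstract above refers to can be illustrated with the simplest possible circuit analogue, a parallel RC element. The component values are purely hypothetical and this toy is far cruder than the paper's MRI-guided tissue models:

```python
import numpy as np

# toy parallel-RC tissue element: Z(w) = R / (1 + j*w*R*C)
R, C = 1.0e3, 1.0e-7                      # ohms, farads (hypothetical values)
freqs = np.array([10.0, 1e3, 1e5])        # Hz
w = 2 * np.pi * freqs
Z = R / (1 + 1j * w * R * C)
# share of the total current carried by the capacitive branch
# (|I_C| / |I_total| for a shared voltage across both branches)
cap_fraction = np.abs(1j * w * C * Z)
```

At low frequency the element is essentially resistive; at high frequency the capacitive branch dominates, which is the qualitative effect that changes the composition and dynamics of the stimulation field.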
ABSTRACT: We develop a particle filter algorithm to simultaneously estimate and track the instantaneous peak frequency, amplitude, and bandwidth of multiple concurrent non-stationary components of an EEG signal in the time-frequency domain. We use this method to characterize human EEG activity during anesthesia-induced unconsciousness.
No preview · Article · Jul 2013 · Conference proceedings: ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Conference
ABSTRACT: Identifying the structure and dynamics of synaptic interactions between neurons is the first step to understanding neural network dynamics. The presence of synaptic connections is traditionally inferred through the use of targeted stimulation and paired recordings or by post-hoc histology. More recently, causal network inference algorithms have been proposed to deduce connectivity directly from electrophysiological signals, such as extracellularly recorded spiking activity. Usually, these algorithms have not been validated on a neurophysiological data set for which the actual circuitry is known. Recent work has shown that traditional network inference algorithms based on linear models typically fail to identify the correct coupling of a small central pattern generating circuit in the stomatogastric ganglion of the crab Cancer borealis. In this work, we show that point process models of observed spike trains can guide inference of relative connectivity estimates that match the known physiological connectivity of the central pattern generator up to a choice of threshold. We elucidate the necessary steps to derive faithful connectivity estimates from a model that incorporates the spike train nature of the data. We then apply the model to measure changes in the effective connectivity pattern in response to two pharmacological interventions, which affect both intrinsic neural dynamics and synaptic transmission. Our results provide the first successful application of a network inference algorithm to a circuit for which the actual physiological synapses between neurons are known. The point process methodology presented here generalizes well to larger networks and can describe the statistics of neural populations. In general we show that advanced statistical models allow for the characterization of effective network structure, deciphering underlying network dynamics and estimating information-processing capabilities.
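The point process connectivity inference described in the abstract above reduces, in its simplest form, to fitting a GLM of one neuron's spiking on lagged spikes of another and reading the coupling off the coefficients. The two-neuron simulation below (an excitatory synapse acting at 1-3 ms, all numbers invented) is a toy stand-in for the paper's stomatogastric circuit analysis:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 60000                                     # 1 ms bins
a = (rng.random(n) < 0.02).astype(float)      # presynaptic spike train
# B's rate is raised for 1-3 ms after an A spike (an excitatory synapse)
drive = np.zeros(n)
for lag in (1, 2, 3):
    drive[lag:] += a[:-lag]
b = rng.poisson(np.exp(-4.0 + 1.2 * np.clip(drive, 0, 1)))

# design matrix: intercept plus A's spikes at lags 1..5 ms
lags = range(1, 6)
X = np.column_stack([np.ones(n)] + [np.r_[np.zeros(l), a[:-l]] for l in lags])

beta = np.zeros(X.shape[1])                   # Poisson GLM fit by IRLS
for _ in range(30):
    mu = np.exp(X @ beta)
    z = X @ beta + (b - mu) / mu              # working response
    beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
```

The fitted coefficients at lags 1-3 ms are large and positive while the uncoupled lags stay near zero, so a threshold on the coefficients recovers the simulated connectivity — the "relative connectivity estimates up to a choice of threshold" idea.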
ABSTRACT: Muscle sympathetic nerve activity is a primary source of cardiovascular control in humans. Traditional analyses smooth away the fine temporal structure of the sympathetic recordings, limiting our understanding of sympathetic activation mechanisms. We use multifiber spike trains extracted from standard microneurography voltage traces to characterize sympathetic spiking at rest and during sympathoexcitation. Our analysis corroborates known features of sympathetic activity, such as bursting behavior, cardiac rhythmicity, and long conduction delays. It also elucidates new features such as large heartbeat-to-heartbeat variability of firing rates and a precise pattern of spiking within cardiac cycles. We find that at low firing rates spikes occur uniformly throughout the cardiac cycle, but at higher rates they tend to cluster in bursts around a particular latency. This latency shortens and the clusters tighten as the firing rates grow. Sympathoexcitation increases firing rates and shifts the burst latency later. The negative rate/latency correlation and the sympathoexcitatory shift suggest that spike production of the individual fibers contributes significantly to the control of sympathetic burst strength. Access to fine-scale temporal information, a more physiologically accurate description of nerve activity, and new hypotheses about nervous outflow control establish sympathetic spiking as a valuable tool for cardiovascular research.
No preview · Article · Jun 2013 · IEEE transactions on bio-medical engineering
ABSTRACT: The instantaneous phase of neural rhythms is important to many neuroscience-related studies. In this letter, we show that the statistical sampling properties of three instantaneous phase estimators commonly employed to analyze neuroscience data share common features, allowing an analytical investigation into their behavior. These three phase estimators (the Hilbert, complex Morlet, and discrete Fourier transform) are each shown to maximize the likelihood of the data, assuming the observation of different neural signals. This connection, explored with the use of a geometric argument, is used to describe the bias and variance properties of each of the phase estimators, their temporal dependence, and the effect of model misspecification. This analysis suggests how prior knowledge about a rhythmic signal can be used to improve the accuracy of phase estimates.
No preview · Article · Jan 2013 · Neural Computation
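The maximum-likelihood view of phase estimation in the abstract above can be illustrated with the DFT-style estimator: project a noisy sinusoid of known frequency onto a complex exponential and take the angle, which is the ML phase under white Gaussian noise. The frequency, noise level, and true phase below are arbitrary simulation choices:

```python
import numpy as np

rng = np.random.default_rng(9)
fs, f = 1000, 8.0
t = np.arange(0, 1, 1 / fs)
true_phase0 = 1.0                                    # phase at t = 0 (rad)
x = np.cos(2 * np.pi * f * t + true_phase0) + 0.5 * rng.normal(size=t.size)

# project onto a complex exponential at the known frequency; the angle
# of the projection is the ML phase estimate under white Gaussian noise
proj = np.sum(x * np.exp(-1j * 2 * np.pi * f * t))
phase_at_t0 = np.angle(proj)
err = np.angle(np.exp(1j * (phase_at_t0 - true_phase0)))   # wrapped error
```

The Hilbert and complex Morlet estimators differ only in how they weight the data before this projection, which is the shared structure the abstract's geometric argument exploits.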