ABSTRACT: This paper introduces a novel regression algorithm based on factored functions. We analyze the regression problem with sample and label noise, and derive a regularization term from a Taylor approximation of the cost function. The regularization can be efficiently exploited by a greedy optimization scheme to learn factored basis functions during training. The novel algorithm performs competitively with Gaussian processes (GP), but is less susceptible to the curse of dimensionality. Learned linear factored functions (LFF) are on average represented by only 4-9 factored bases, which is considerably more compact than
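A linear factored function of the kind described above can be pictured as a weighted sum of bases, each base being a product of one-dimensional functions of the individual input dimensions. The sketch below is illustrative only: the Gaussian feature expansion, the parameter shapes, and the function name `lff_predict` are assumptions for demonstration, not the paper's implementation.

```python
import numpy as np

def lff_predict(X, weights, factors, centers, width=0.5):
    """Evaluate a linear factored function: a weighted sum of K bases,
    each base a product of one-dimensional functions, one per input dim.

    X       : (n_samples, n_dims) inputs
    weights : (K,) outer weights w_k
    factors : (K, n_dims, n_feats) coefficients of the 1-D expansions
    centers : (n_feats,) centers of 1-D Gaussian features (illustrative)
    """
    # 1-D feature expansion per dimension: Gaussian bumps (assumed choice)
    # phi[n, d, f] = exp(-(x_nd - c_f)^2 / (2 width^2))
    phi = np.exp(-(X[:, :, None] - centers[None, None, :]) ** 2
                 / (2.0 * width ** 2))
    # per-dimension factor values g[n, k, d] = sum_f factors[k,d,f] * phi[n,d,f]
    g = np.einsum('kdf,ndf->nkd', factors, phi)
    # product over dimensions, then weighted sum over the K bases
    return (np.prod(g, axis=2) * weights[None, :]).sum(axis=1)
```

Because each basis factorizes over dimensions, storage and evaluation scale linearly in the input dimensionality, which is the property the abstract contrasts with GP regression.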
ABSTRACT: Humans are highly efficient at visual search tasks by focusing selective attention on a small but relevant region of a visual scene. Recent results from biological vision suggest that surfaces of distinct physical objects form the basic units of this attentional process. The aim of this paper is to demonstrate how such surface-based attention mechanisms can speed up a computer vision system for visual search. The system uses fast perceptual grouping of depth cues to represent the visual world at the level of surfaces. This representation is stored in short-term memory and updated over time. A top-down guided attention mechanism sequentially selects one of the surfaces for detailed inspection by a recognition module. We show that the proposed attention framework requires little computational overhead (about 11 ms), but enables the system to operate in real-time and leads to a substantial increase in search efficiency.
ABSTRACT: We derive a family of risk-sensitive reinforcement learning methods for agents who face sequential decision-making tasks in uncertain environments. By applying a utility function to the temporal difference (TD) error, nonlinear transformations are effectively applied not only to the received rewards but also to the true transition probabilities of the underlying Markov decision process. When appropriate utility functions are chosen, the agents' behaviors express key features of human behavior as predicted by prospect theory (Kahneman & Tversky, 1979), for example, different risk preferences for gains and losses, as well as the shape of subjective probability curves. We derive a risk-sensitive Q-learning algorithm, which is necessary for modeling human behavior when transition probabilities are unknown, and prove its convergence. As a proof of principle for the applicability of the new framework, we apply it to quantify human behavior in a sequential investment task. We find that the risk-sensitive variant provides a significantly better fit to the behavioral data and that it leads to an interpretation of the subject's responses that is indeed consistent with prospect theory. The analysis of simultaneously measured fMRI signals shows a significant correlation of the risk-sensitive TD error with BOLD signal change in the ventral striatum. In addition we find a significant correlation of the risk-sensitive Q-values with neural activity in the striatum, cingulate cortex, and insula that is not present if standard Q-values are used.
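The core idea, applying a utility function to the TD error itself rather than to the reward, can be sketched in a few lines. The piecewise-linear utility and all parameter values below are illustrative choices (a loss-averse shape in the spirit of prospect theory), not the paper's fitted model.

```python
import numpy as np

def utility(delta, k_pos=1.0, k_neg=2.0):
    """Piecewise-linear utility applied to the TD error (one simple
    choice; the framework allows general nonlinear utilities).
    k_neg > k_pos models loss aversion as in prospect theory."""
    return np.where(delta >= 0, k_pos * delta, k_neg * delta)

def risk_sensitive_q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One risk-sensitive Q-learning step. Because the utility wraps the
    whole TD error, it effectively transforms both the rewards and the
    (unknown) transition probabilities of the underlying MDP."""
    delta = r + gamma * np.max(Q[s_next]) - Q[s, a]
    Q[s, a] += alpha * utility(delta)
    return Q
```

Setting `k_pos = k_neg = 1` recovers standard Q-learning, so the risk-sensitive variant is a strict generalization.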
ABSTRACT: Introduction: Brain stimulation is emerging as a fundamental tool in the clinical repertoire of a neurologist. Whereas invasive approaches are well established in clinical practice, non-invasive approaches are quickly gaining in importance. Independent of the type of stimulation, it is becoming remarkably clear that a better understanding of the neurophysiological mechanisms of interactions between patterns of stimulation and patterns of subject-specific neural activity is necessary. The aim of this pilot study is to address whether short periods of stimulation can entrain brain rhythms. More explicitly, due to striking neurophysiological similarities between "photic driving" and "transorbital alternating current stimulation", we compare short-term photic and electric stimulation. The hypothesis is that 30 seconds of bandwidth-confined stimulation will evoke entrainment of the central alpha rhythm.
Methods: To address this question, we stimulated 10 healthy subjects with
retinofugal alternating current stimulation at 10 Hz for 30 seconds. In direct
comparison, we induced steady-state visual evoked potentials at 10 Hz for
30 seconds. Sessions were applied in randomized order, with baseline EEG recordings prior to, during, and after stimulation. EEG analyses were defined
by clinical standards to identify “photic driving”.
Results: In this framework we investigated whether a subject was susceptible to 10 Hz photic stimulation (DRIVING) and whether carry-over effects exist for visual (VIS POST) and electric (ELC POST) stimulation. Results show that entrainment (DRIVING) could be induced and that alpha entrainment persisted in both VIS POST and ELC POST conditions. All effects were significant in one-sided paired t-tests against baseline (p<0.05).
Discussion: These findings show that brief periods of stimulation can evoke significant entrainment of central rhythms. Remarkably, this was the case for both electric and photic stimulation. This provides a method to investigate rapid changes in central rhythms induced by stimulation. One perspective is Brain-Computer-Interface-driven stimulus optimization (DFG grant no. BR 1691/8-1).
International Congress of Clinical Neurophysiology (ICCN), Berlin, Germany; 03/2014
ABSTRACT: We introduce the Lyapunov approach to optimal control problems of risk-sensitive Markov control processes on general Borel spaces equipped with risk maps, especially with strictly convex risk maps like the entropic map. To ensure the existence and uniqueness of a solution to the associated nonlinear Poisson equation, we propose a new set of conditions: 1) Lyapunov-type conditions on both risk maps and cost functions that control the growth speed of iterations, and 2) Doeblin's conditions that generalize the known conditions for Markov chains. In the special case of the entropic map, we show that the above conditions can be replaced by the existence of a Lyapunov function, a local Doeblin's condition for the underlying Markov chain, and a growth condition for cost functions.
ABSTRACT: Many types of neurons exhibit spike rate adaptation, mediated by intrinsic slow K+ currents, which effectively inhibit neuronal responses. How these adaptation currents change the relationship between in vivo-like fluctuating synaptic input, spike rate output, and spike train statistics, however, is not well understood. In this computational study we show that an adaptation current that primarily depends on the subthreshold membrane voltage changes the neuronal input-output relationship (I-O curve) subtractively, thereby increasing the response threshold, and decreases its slope (response gain) for low spike rates. A spike-dependent adaptation current alters the I-O curve divisively, thus reducing the response gain. Both types of adaptation current naturally increase the mean interspike interval (ISI), but they can affect ISI variability in opposite ways. A subthreshold current always causes an increase in variability, while a spike-triggered current decreases high variability caused by fluctuation-dominated inputs and increases low variability when the average input is large. The effects on I-O curves match those caused by synaptic inhibition in networks with asynchronous irregular activity, for which we find subtractive and divisive changes caused by external and recurrent inhibition, respectively. Synaptic inhibition, however, always increases ISI variability. We analytically derive expressions for the I-O curve and ISI variability, which demonstrate the robustness of our results. Furthermore, we show how the biophysical parameters of slow K+ conductances contribute to the two different types of adaptation current and find that Ca2+-activated K+ currents are effectively captured by a simple spike-dependent description, while muscarine-sensitive or Na+-activated K+ currents show a dominant subthreshold component.
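The two types of adaptation current can be illustrated with the adaptive exponential integrate-and-fire neuron, a standard model that carries both a subthreshold adaptation conductance `a` and a spike-triggered increment `b`. This is a generic sketch for intuition; the parameter values are textbook-style defaults, not the fitted values used in the study.

```python
import numpy as np

def simulate_aeif(I, a=0.0, b=0.0, dt=0.1, T=1000.0, C=200.0, gL=10.0,
                  EL=-70.0, VT=-50.0, DT=2.0, tau_w=200.0, Vr=-58.0,
                  Vcut=0.0):
    """Adaptive exponential integrate-and-fire neuron under constant
    input current I (pA), integrated with the forward Euler method.
    a : subthreshold adaptation conductance (nS) -> subtractive effects
    b : spike-triggered adaptation increment (pA) -> divisive effects
    Returns the list of spike times (ms)."""
    n = int(T / dt)
    V, w = EL, 0.0
    spikes = []
    for i in range(n):
        # membrane equation with exponential spike-initiation term
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
        # adaptation current: voltage-driven (a) plus slow decay
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vcut:            # spike: reset V, increment w by b
            spikes.append(i * dt)
            V = Vr
            w += b
    return spikes
```

With `a = b = 0` the adaptation variable stays at zero; increasing either parameter lengthens the mean interspike interval, matching the subtractive versus divisive distinction discussed above.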
Journal of Neurophysiology 03/2014; 111(5):939-953.
ABSTRACT: The purpose of this experiment was to test a computational model of reinforcement learning with and without fictive prediction error (FPE) signals, to investigate how counterfactual consequences contribute to acquired representations of action-specific expected value, and to determine the functional neuroanatomy and neuromodulator systems that are involved. Eighty male participants underwent dietary depletion of either tryptophan or tyrosine/phenylalanine to manipulate serotonin (5HT) and dopamine (DA), respectively. They completed 80 rounds (240 trials) of a strategic sequential investment task that required accepting interim losses in order to access a lucrative state and maximize long-term gains, while being scanned. We extended the standard Q-learning model by incorporating both counterfactual gains and losses into separate error signals. The FPE model explained the participants' data significantly better than a model that did not include counterfactual learning signals. Expected value from the FPE model was significantly correlated with BOLD signal change in the ventromedial prefrontal cortex (vmPFC) and posterior orbitofrontal cortex (OFC), whereas expected value from the standard model did not predict changes in neural activity. The depletion procedure revealed significantly different neural responses to expected value in the vmPFC, caudate, and dopaminergic midbrain in the vicinity of the substantia nigra (SN). Differences in neural activity were not evident with the standard Q-learning computational model. These findings demonstrate that FPE signals are an important component of valuation for decision making, and that the neural representation of expected value incorporates cortical and subcortical structures via interactions among serotonergic and dopaminergic modulator systems.
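The extension of standard Q-learning by counterfactual error signals can be sketched as follows. This is a simplified illustration: the separation of fictive gains and losses into distinct learning rates, and the assumption that counterfactual outcomes of unchosen actions are observable, are modeling choices made here for demonstration and only loosely mirror the paper's FPE model.

```python
import numpy as np

def fpe_q_update(Q, s, a, r, counterfactual, alpha=0.2,
                 alpha_fg=0.1, alpha_fl=0.1):
    """One Q-update augmented with fictive prediction errors (FPE).

    counterfactual : dict mapping each unchosen action to the outcome
                     it would have produced (assumed observable here).
    alpha_fg / alpha_fl : separate rates for fictive gains and losses
                          (hypothetical parameterization)."""
    # factual prediction error for the chosen action
    Q[s, a] += alpha * (r - Q[s, a])
    # fictive prediction errors for the unchosen actions
    for a_cf, r_cf in counterfactual.items():
        fpe = r_cf - Q[s, a_cf]
        rate = alpha_fg if fpe >= 0 else alpha_fl
        Q[s, a_cf] += rate * fpe
    return Q
```

With an empty `counterfactual` dict the update reduces to standard Q-learning, which is the comparison model described above.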
ABSTRACT: According to the World Health Organization, about 2 billion people drink alcohol. Excessive alcohol consumption can result in alcohol addiction, which is one of the most prevalent neuropsychiatric diseases afflicting our society today. Prevention and intervention of alcohol binging in adolescents and treatment of alcoholism are major unmet challenges affecting our health-care system and society alike. Our newly formed German SysMedAlcoholism consortium is using a new systems medicine approach and intends (1) to define individual neurobehavioral risk profiles in adolescents that are predictive of alcohol use disorders later in life and (2) to identify new pharmacological targets and molecules for the treatment of alcoholism. To achieve these goals, we will use omics information from epigenomics, genetics, transcriptomics, neurodynamics, global neurochemical connectomes and neuroimaging (IMAGEN; Schumann et al.) to feed mathematical prediction modules provided by two Bernstein Centers for Computational Neurosciences (Berlin and Heidelberg/Mannheim), the results of which will subsequently be functionally validated in independent clinical samples and appropriate animal models. This approach will lead to new early intervention strategies and identify innovative molecules for relapse prevention that will be tested in experimental human studies. This research program will ultimately help in consolidating addiction research clusters in Germany that can effectively conduct large clinical trials, implement early intervention strategies and impact political and healthcare decision makers.
ABSTRACT: We analyze zero-lag and cluster synchrony of delay-coupled nonsmooth dynamical systems by extending the master stability approach, and apply this to networks of adaptive threshold-model neurons. For a homogeneous population of excitatory and inhibitory neurons we find (i) that subthreshold adaptation stabilizes or destabilizes synchrony depending on whether the recurrent synaptic excitatory or inhibitory couplings dominate, and (ii) that synchrony is always unstable for networks with balanced recurrent synaptic inputs. If couplings are not too strong, synchronization properties are similar for very different coupling topologies, i.e., random connections or spatial networks with localized connectivity. We generalize our approach for two subpopulations of neurons with nonidentical local dynamics, including bursting, for which activity-based adaptation controls the stability of cluster states, independent of a specific coupling topology.
Physical Review E 10/2013; 88(4-1):042713.
ABSTRACT: [Figure: loose-patch recordings of RFP-expressing PV+ neurons under two-photon guidance; panels (A)-(E) compare spike shapes, repolarization rates, and orientation selectivity of RFP+ (n = 29) and RFP− (n = 12) neurons, showing the characteristic fast-spiking shape of RFP+ cells and a multimodal OSI distribution suggesting a sharply tuned PV+ subtype.] One of the most prominent stimulus-specific output features encoded in the primary visual cortex (V1) is the orientation selectivity and tuning of the input. Several recent in-vivo experimental studies of mouse visual cortex have found that inhibitory cells of all subtypes are broadly tuned for orientation, contrasting with the findings of many other studies in higher mammals and rodents, which have shown the existence of inhibitory neurons that are as sharply tuned as excitatory neurons. Two critical questions naturally emerge from these contrasting findings: (1) How do output responses such as orientation selectivity compare with those in previously described species? (2) What is the synaptic and network mechanism behind the sharpening of orientation selectivity in the mouse visual cortex?
Here, we investigate the above questions in a computational framework with a recurrent network model of rodent primary visual cortex that lacks a functional map. Synapses with and without astrocytic mechanisms are incorporated independently in a recurrent network model consisting of excitatory and inhibitory populations with orientation tuning organized in a "salt-and-pepper" manner. Further, we incorporate differential afferent input to inhibitory cells, motivated by new experimental findings of differential output responses of soma-targeting subtypes. Layer 2/3 excitatory cells are connected preferentially to neighboring cells with similar orientation tuning. Network simulations reveal that a combined feedforward drive with precise fine-scale lateral excitation and inhibition predicts a range of orientation tuning for both excitatory and inhibitory neurons in layer 2/3 of primary visual cortex. To further constrain our network parameters, we estimate p-values using the Kolmogorov-Smirnov test (K-S test) over the entire range of recurrent excitation and inhibition values. Based on the estimated p-values, we infer that there are several points in different operational regimes of this network under sensory drive that agree well with several recent experimental observations. In particular, several points in the recurrent regime of this network give significant p-values; in this operational regime, the network parameters most likely generate sharp orientation tuning, particularly within orientation representations with diverse local neighborhoods. Afferent input specificity could explain sharp tuning among a subtype of inhibitory cells, and feature-specific lateral connectivity combined with afferent specificity yields network parameters that agree well with experimental OSI distributions for membrane potential and conductance selectivity.
Moreover, astrocytic modulatory mechanisms, such as differentiated glutamate decay times for both connection types, can lead to an enhanced response at the preferred orientation and a broadening of tuning.
ABSTRACT: Primary visual cortex (V1) provides crucial insights into the selectivity and emergence of specific output features such as orientation tuning. The tuning and selectivity of cortical neurons in mouse visual cortex has not been unequivocally resolved so far. While many in-vivo experimental studies found inhibitory neurons of all subtypes to be broadly tuned for orientation, other studies report inhibitory neurons that are as sharply tuned as excitatory neurons. These diverging findings about the selectivity of excitatory and inhibitory cortical neurons prompted us to ask the following questions: (1) How different or similar is cortical computation in mouse compared with that in previously described species that rely on an orientation map? (2) What is the network mechanism underlying the sharpening of orientation selectivity in the mouse primary visual cortex? Here, we investigate the above questions in a computational framework with a recurrent network composed of Hodgkin-Huxley (HH) point neurons. Our cortical network with random connectivity alone could not account for all the experimental observations, which led us to hypothesize (a) orientation-dependent connectivity and (b) feedforward afferent specificity to understand the orientation selectivity of V1 neurons in mouse. Using the population orientation selectivity index (OSI) as a measure of neuronal selectivity to stimulus orientation, we test each hypothesis separately and in combination against experimental data. Based on our analysis of orientation selectivity (OS) data, we find a good fit of network parameters in a model based on afferent specificity and connectivity that scales with feature similarity. We conclude that this particular model class best supports data sets of orientation selectivity of excitatory and inhibitory neurons in layer 2/3 of primary visual cortex of mouse.
ABSTRACT: Recent research suggests that novelty has an influence on reward-related learning. Here, we showed that novel stimuli presented from a pre-familiarized category can accelerate or decelerate learning of the most rewarding category, depending on the condition. The extent of this influence depended on the individual trait of novelty seeking. Different reinforcement learning models were developed to quantify subjects' choices. We introduced a bias parameter to model explorative behavior toward novel stimuli and to characterize individual variation in novelty response. The theoretical framework allowed us to test different assumptions concerning the motivational value of novelty. The best-fitting model combined all novelty components and had a significant positive correlation with both the experimentally measured novelty bias and the independent novelty-seeking trait. Altogether, we have not only shown that novelty by itself enhances behavioral responses underlying reward processing, but also that novelty has a direct influence on reward-dependent learning processes, consistent with computational predictions.
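One common way to realize a novelty bias of the kind described above is as an additive bonus inside a softmax choice rule. This sketch is an assumed parameterization for illustration (the names `novelty_softmax` and `novelty_bonus` are hypothetical), not the exact model fitted in the study.

```python
import numpy as np

def novelty_softmax(q_values, is_novel, beta=3.0, novelty_bonus=0.5):
    """Softmax choice probabilities with an additive novelty bias.
    Novel options (is_novel == 1) receive `novelty_bonus` on top of
    their learned value, biasing exploration toward them."""
    v = (np.asarray(q_values, dtype=float)
         + novelty_bonus * np.asarray(is_novel, dtype=float))
    # subtract the max for numerical stability before exponentiating
    z = np.exp(beta * (v - v.max()))
    return z / z.sum()
```

Fitting `novelty_bonus` per subject yields an individual novelty-bias parameter that can be correlated with a separately measured novelty-seeking trait, as in the analysis above.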
Progress in Brain Research 01/2013; 202:415-39.
ABSTRACT: Neural mass signals from recordings often show oscillations with frequencies ranging from <1 to 100 Hz. Fast rhythmic activity in the beta and gamma range can be generated by network-based mechanisms such as recurrent synaptic excitation-inhibition loops. Slower oscillations might instead depend on neuronal adaptation currents whose timescales range from tens of milliseconds to seconds. Here we investigate how the dynamics of such adaptation currents contribute to spike rate oscillations and resonance properties in recurrent networks of excitatory and inhibitory neurons. Based on a network of sparsely coupled spiking model neurons with two types of adaptation current and conductance-based synapses with heterogeneous strengths and delays, we use a mean-field approach to analyze oscillatory network activity. For constant external input, we find that spike-triggered adaptation currents provide a mechanism to generate slow oscillations over a wide range of adaptation timescales, as long as recurrent synaptic excitation is sufficiently strong. Faster rhythms occur when recurrent inhibition is slower than excitation, and oscillation frequency increases with the strength of inhibition. Adaptation facilitates such network-based oscillations for fast synaptic inhibition and leads to decreased frequencies. For oscillatory external input, adaptation currents amplify a narrow band of frequencies and cause phase advances for low frequencies in addition to phase delays at higher frequencies. Our results therefore identify the different key roles of neuronal adaptation dynamics for rhythmogenesis and selective signal propagation in recurrent networks.
Frontiers in Computational Neuroscience 01/2013; 7:9.
ABSTRACT: Spyke Viewer is an open source application designed to help researchers analyze data from electrophysiological recordings or neural simulations. It provides a graphical data browser and supports finding and selecting relevant subsets of the data. Users can interact with the selected data using an integrated Python console or plugins. Spyke Viewer includes plugins for several common visualizations and allows users to easily extend the program by writing their own plugins. New plugins are automatically integrated with the graphical interface. Additional plugins can be downloaded and shared on a dedicated website.
ABSTRACT: Brain stimulation is having a remarkable impact on clinical neurology. It can modulate neuronal activity in functionally segregated, circumscribed regions of the human brain, and polarity-, frequency-, and noise-specific stimulation can induce specific manipulations of neural activity. Deep-brain stimulation has become a tool that can dramatically improve clinicians' impact on movement disorders. In contrast, neocortical brain stimulation is proving to be remarkably susceptible to intrinsic brain-states. Although evidence is accumulating that brain stimulation can facilitate recovery processes in patients with cerebral stroke, the high variability of results impedes successful clinical implementation. Interestingly, recent data in healthy subjects suggest that brain-state-dependent patterned stimulation might help resolve some of the intrinsic variability found in previous studies. In parallel, other studies suggest that noisy "stochastic resonance" (SR)-like processes are a non-negligible component in non-invasive brain stimulation studies. The hypothesis developed in this manuscript is that stimulation patterning with noisy and oscillatory components will help patients recover from stroke-related deficits more reliably. To address this hypothesis we focus on two factors common to both neural computation (intrinsic variables) and brain stimulation (extrinsic variables): noise and oscillation. We review diverse theoretical and experimental evidence demonstrating that subject- and function-specific brain-states are associated with specific oscillatory activity patterns. These states are transient and can be maintained by noisy processes; the resulting control procedures can resemble homeostatic or SR-like processes.
In this context we aim to raise awareness of inter-individual differences and the use of individualized stimulation to maximize recovery in stroke patients.
Frontiers in Human Neuroscience 01/2013; 7:325.