Klaus Obermayer

Technische Universität Berlin, Berlin, Germany

Publications (242) · 275.23 Total Impact Points

  • Source
    Dataset: talk
    Brijnesh J. Jain, Klaus Obermayer
  • Source
    ABSTRACT: Spike sorting, i.e., the separation of the firing activity of different neurons from extracellular measurements, is a crucial but often error-prone step in the analysis of neuronal responses. Usually, three different problems have to be solved: the detection of spikes in the extracellular recordings, the estimation of the number of neurons and their prototypical (template) spike waveforms, and the assignment of individual spikes to those putative neurons. If the template spike waveforms are known, template matching can be used to solve the detection and classification problem. Here, we show that for the colored Gaussian noise case the optimal template matching is given by a form of linear filtering, which can be derived via linear discriminant analysis. This provides a Bayesian interpretation for the well-known matched filter output. Moreover, with this approach it is possible to compute a spike detection threshold analytically. The method can be implemented by a linear filter bank derived from the templates, and can be used for online spike sorting of multielectrode recordings. It may also be applicable to detection and classification problems of transient signals in general. Its application significantly decreases the error rate on two publicly available spike-sorting benchmark data sets in comparison to state-of-the-art template matching procedures. Finally, we explore the possibility to resolve overlapping spikes using the template matching outputs and show that they can be resolved with high accuracy.
    Journal of Computational Neuroscience 02/2015; 38(3):439-459. DOI:10.1007/s10827-015-0547-7 · 2.09 Impact Factor
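    A minimal sketch of the matched-filter idea described in this abstract, assuming the template waveforms and a noise-covariance estimate are already available; all names and numbers below are illustrative placeholders, not the paper's implementation:

      import numpy as np

      def matched_filter_responses(signal, templates, noise_cov):
          """Matched-filter outputs for each template: correlate the signal with C^{-1} t_k
          and subtract half the template 'energy' t_k^T C^{-1} t_k (an LDA-like bias term)."""
          C_inv = np.linalg.inv(noise_cov)
          responses = []
          for t in templates:
              f = C_inv @ t                                   # linear filter derived from the template
              resp = np.correlate(signal, f, mode='valid')    # slide the filter over the recording
              responses.append(resp - 0.5 * (t @ f))          # bias so a fixed threshold can be applied
          return np.array(responses)

      # toy usage: one Gaussian-shaped template embedded in white noise (all values illustrative)
      rng = np.random.default_rng(0)
      template = np.exp(-0.5 * ((np.arange(30) - 15) / 3.0) ** 2)
      noise_cov = np.eye(30)                                  # white noise; a colored estimate would go here
      signal = rng.normal(0.0, 0.3, 1000)
      signal[200:230] += template                             # embed one spike at sample 200
      resp = matched_filter_responses(signal, [template], noise_cov)
      print("peak filter output near sample", int(np.argmax(resp[0])))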
  • ABSTRACT: This article reviews an emerging field that aims for autonomous reinforcement learning (RL) directly on sensor observations. Straightforward end-to-end RL has recently shown remarkable success, but relies on large numbers of samples. As this is not feasible in robotics, we review two approaches to learn intermediate state representations from previous experiences: deep auto-encoders and slow-feature analysis. We analyze theoretical properties of the representations and point to potential improvements.
    01/2015; DOI:10.1007/s13218-015-0356-1
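    As a rough illustration of one of the two representation-learning approaches mentioned above (slow feature analysis), a linear SFA step can be written as a generalized eigenvalue problem; this is a generic sketch, not the specific pipeline reviewed in the article:

      import numpy as np
      from scipy.linalg import eigh

      def linear_sfa(X, n_components=2):
          """Linear slow feature analysis: find projections with minimal temporal variation.
          X: (T, d) array of consecutive sensor observations."""
          Xc = X - X.mean(axis=0)                 # center the data
          dX = np.diff(Xc, axis=0)                # temporal differences
          A = (dX.T @ dX) / len(dX)               # covariance of the differences (slowness)
          B = (Xc.T @ Xc) / len(Xc)               # covariance of the data (unit-variance constraint)
          evals, evecs = eigh(A, B)               # generalized eigenproblem A w = lambda B w
          return evecs[:, :n_components]          # slowest components = smallest eigenvalues

      # toy usage: a slow sinusoid hidden in noisy high-dimensional observations
      T, d = 500, 10
      rng = np.random.default_rng(1)
      latent = np.sin(np.linspace(0, 4 * np.pi, T))
      X = np.outer(latent, rng.normal(size=d)) + 0.1 * rng.normal(size=(T, d))
      slow_features = (X - X.mean(axis=0)) @ linear_sfa(X, n_components=1)
      print("|corr| with hidden latent:", abs(np.corrcoef(slow_features[:, 0], latent)[0, 1]))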
  • Source
    Wendelin Böhmer, Klaus Obermayer
    ABSTRACT: This paper introduces a novel regression-algorithm based on factored functions. We analyze the regression problem with sample- and label-noise, and derive a regularization term from a Taylor approximation of the cost function. The regularization can be efficiently exploited by a greedy optimization scheme to learn factored basis functions during training. The novel algorithm performs competitively to Gaussian processes (GP), but is less susceptible to the curse of dimensionality. Learned linear factored functions (LFF) are on average represented by only 4-9 factored bases, which is considerably more compact than a GP.
  • Johannes Mohr, Jong-Han Park, Klaus Obermayer
    ABSTRACT: Humans are highly efficient at visual search tasks by focusing selective attention on a small but relevant region of a visual scene. Recent results from biological vision suggest that surfaces of distinct physical objects form the basic units of this attentional process. The aim of this paper is to demonstrate how such surface-based attention mechanisms can speed up a computer vision system for visual search. The system uses fast perceptual grouping of depth cues to represent the visual world at the level of surfaces. This representation is stored in short-term memory and updated over time. A top-down guided attention mechanism sequentially selects one of the surfaces for detailed inspection by a recognition module. We show that the proposed attention framework requires little computational overhead (about 11 ms), but enables the system to operate in real-time and leads to a substantial increase in search efficiency.
    Neural Networks 09/2014; 60. DOI:10.1016/j.neunet.2014.08.010 · 2.08 Impact Factor
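    A minimal sketch of the kind of top-down, surface-by-surface inspection loop described in this abstract; the Surface container and the recognize() stub are hypothetical placeholders, not the system's actual interfaces:

      from dataclasses import dataclass

      @dataclass
      class Surface:
          surface_id: int
          priority: float           # top-down relevance of this surface for the current search target
          inspected: bool = False

      def recognize(surface):
          """Hypothetical stand-in for the (expensive) recognition module."""
          return surface.surface_id == 3   # pretend surface 3 contains the target

      def visual_search(surfaces):
          """Sequentially attend to the most relevant uninspected surface until the target is found."""
          while True:
              candidates = [s for s in surfaces if not s.inspected]
              if not candidates:
                  return None                                   # nothing left to inspect
              s = max(candidates, key=lambda c: c.priority)     # top-down guided selection
              s.inspected = True
              if recognize(s):
                  return s.surface_id

      surfaces = [Surface(i, priority=p) for i, p in enumerate([0.2, 0.9, 0.1, 0.7, 0.5])]
      print("target found on surface", visual_search(surfaces))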
  • Source
    ABSTRACT: We derive a family of risk-sensitive reinforcement learning methods for agents, who face sequential decision-making tasks in uncertain environments. By applying a utility function to the temporal difference (TD) error, nonlinear transformations are effectively applied not only to the received rewards but also to the true transition probabilities of the underlying Markov decision process. When appropriate utility functions are chosen, the agents' behaviors express key features of human behavior as predicted by prospect theory (Kahneman & Tversky, 1979), for example, different risk preferences for gains and losses, as well as the shape of subjective probability curves. We derive a risk-sensitive Q-learning algorithm, which is necessary for modeling human behavior when transition probabilities are unknown, and prove its convergence. As a proof of principle for the applicability of the new framework, we apply it to quantify human behavior in a sequential investment task. We find that the risk-sensitive variant provides a significantly better fit to the behavioral data and that it leads to an interpretation of the subject's responses that is indeed consistent with prospect theory. The analysis of simultaneously measured fMRI signals shows a significant correlation of the risk-sensitive TD error with BOLD signal change in the ventral striatum. In addition we find a significant correlation of the risk-sensitive Q-values with neural activity in the striatum, cingulate cortex, and insula that is not present if standard Q-values are used.
    Neural Computation 04/2014; 26(7). DOI:10.1162/NECO_a_00600 · 1.69 Impact Factor
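    A minimal sketch of the core idea, a utility function applied to the TD error before the Q-update, with an illustrative piecewise-linear, loss-averse utility in the spirit of prospect theory; the toy environment, parameters, and utility shape are placeholders, not those of the study:

      import numpy as np

      def utility(td_error, loss_aversion=2.0):
          """Illustrative prospect-theory-like utility: losses weighted more strongly than gains."""
          return td_error if td_error >= 0 else loss_aversion * td_error

      def risk_sensitive_q_learning(P, R, n_states, n_actions, gamma=0.95,
                                    alpha=0.1, eps=0.1, episodes=500, rng=None):
          """Tabular Q-learning in which the utility is applied to the TD error."""
          if rng is None:
              rng = np.random.default_rng(0)
          Q = np.zeros((n_states, n_actions))
          for _ in range(episodes):
              s = 0
              for _ in range(50):                               # truncated episode length
                  a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
                  s_next = rng.choice(n_states, p=P[s, a])      # sample a transition
                  td = R[s, a] + gamma * np.max(Q[s_next]) - Q[s, a]
                  Q[s, a] += alpha * utility(td)                # risk-sensitive update
                  s = s_next
          return Q

      # toy 2-state, 2-action MDP (illustrative numbers only)
      P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                    [[0.5, 0.5], [0.1, 0.9]]])
      R = np.array([[1.0, 2.0], [0.0, -1.0]])
      print(risk_sensitive_q_learning(P, R, n_states=2, n_actions=2))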
  • Source
    ABSTRACT: Introduction: Brain stimulation is emerging as a fundamental tool in the clinical repertoire of a neurologist. Whereas invasive approaches are well established in clinical practice, non-invasive approaches are quickly gaining in importance. Independent of the type of stimulation, it is becoming remarkably clear that a better understanding of the neurophysiological mechanisms of interaction between patterns of stimulation and patterns of subject-specific neural activity is necessary. The aim of this pilot study is to address whether short periods of stimulation can entrain brain rhythms. More explicitly, because of striking neurophysiological similarities between “photic driving” and “transorbital alternating current stimulation”, we compare short-term photic and electric stimulation. The hypothesis is that 30 seconds of bandwidth-confined stimulation will evoke entrainment of the central alpha rhythm. Methods: To address this question, we stimulated 10 healthy subjects with retinofugal alternating current stimulation at 10 Hz for 30 seconds. For direct comparison, we induced steady-state visual evoked potentials at 10 Hz for 30 seconds. Sessions were applied in randomized order with baseline EEG recordings before, during, and after stimulation. EEG analyses followed clinical standards for identifying “photic driving”. Results: In this framework we investigated whether a subject was susceptible to 10 Hz photic stimulation (DRIVING) and whether carry-over effects exist for visual (VIS POST) and electric (ELC POST) stimulation. Results show that entrainment (DRIVING) could be induced and that alpha entrainment persisted in both the VIS POST and ELC POST conditions. All effects were significant in one-sided paired t-tests against baseline (p<0.05), as sketched below. Discussion: These findings show that short periods of stimulation can evoke significant entrainment of central rhythms. Remarkably, this was the case for both electric and photic stimulation. This provides a method to investigate rapid changes in central rhythms induced by stimulation. One perspective is Brain-Computer-Interface-driven stimulus optimization (DFG grant no. BR 1691/8-1).
    International Congress of Clinical Neurophysiology (ICCN), Berlin, Germany; 03/2014
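    As a rough illustration of the statistics mentioned above (one-sided paired t-tests of post-stimulation alpha power against baseline), with made-up numbers; the one-sided p-value is derived here from scipy's two-sided paired test:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n_subjects = 10
      baseline_alpha = rng.normal(5.0, 1.0, n_subjects)               # illustrative alpha power (a.u.)
      post_alpha = baseline_alpha + rng.normal(0.8, 0.5, n_subjects)  # pretend entrainment effect

      t, p_two_sided = stats.ttest_rel(post_alpha, baseline_alpha)
      p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2  # one-sided: post > baseline
      print(f"t = {t:.2f}, one-sided p = {p_one_sided:.4f}")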
  • Source
    Yun Shen, Wilhelm Stannat, Klaus Obermayer
    ABSTRACT: We introduce the Lyapunov approach to optimal control problems of risk-sensitive Markov control processes on general Borel spaces equipped with risk maps, especially with strictly convex risk maps like the entropic map. To ensure the existence and uniqueness of a solution to the associated nonlinear Poisson equation, we propose a new set of conditions: 1) Lyapunov-type conditions on both risk maps and cost functions that control the growth speed of iterations, and 2) Doeblin's conditions that generalize the known conditions for Markov chains. In the special case of the entropic map, we show that the above conditions can be replaced by the existence of a Lyapunov function, a local Doeblin's condition for the underlying Markov chain, and a growth condition for cost functions.
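    For reference, a standard form of the entropic map mentioned above (the paper's precise formulation may differ in sign and scaling conventions) is

      \rho_\gamma(X) \;=\; \frac{1}{\gamma} \log \mathbb{E}\!\left[ e^{\gamma X} \right], \qquad \gamma > 0,

    which is a convex risk map and reduces to the ordinary expectation \mathbb{E}[X] in the limit \gamma \to 0.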
  • Source
    Josef Ladenbauer, Moritz Augustin, Klaus Obermayer
    ABSTRACT: Many types of neurons exhibit spike rate adaptation, mediated by intrinsic slow K+ currents, which effectively inhibit neuronal responses. How these adaptation currents change the relationship between in vivo-like fluctuating synaptic input, spike rate output, and the spike train statistics, however, is not well understood. In this computational study we show that an adaptation current that primarily depends on the subthreshold membrane voltage changes the neuronal input-output relationship (I-O curve) subtractively, thereby increasing the response threshold, and decreases its slope (response gain) for low spike rates. A spike-dependent adaptation current alters the I-O curve divisively, thus reducing the response gain. Both types of adaptation current naturally increase the mean interspike interval (ISI), but they can affect ISI variability in opposite ways. A subthreshold current always causes an increase of variability, while a spike-triggered current decreases high variability caused by fluctuation-dominated inputs and increases low variability when the average input is large. The effects on I-O curves match those caused by synaptic inhibition in networks with asynchronous irregular activity, for which we find subtractive and divisive changes caused by external and recurrent inhibition, respectively. Synaptic inhibition, however, always increases the ISI variability. We analytically derive expressions for the I-O curve and ISI variability, which demonstrate the robustness of our results. Furthermore, we show how the biophysical parameters of slow K+ conductances contribute to the two different types of adaptation current and find that Ca2+-activated K+ currents are effectively captured by a simple spike-dependent description, while muscarine-sensitive or Na+-activated K+ currents show a dominant subthreshold component.
    Journal of Neurophysiology 03/2014; 111(5):939-953. DOI:10.1152/jn.00586.2013 · 3.04 Impact Factor
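    A minimal sketch of the two adaptation mechanisms described above, in an adaptive exponential integrate-and-fire neuron: the parameter a couples the adaptation current to the subthreshold voltage, while b increments it at each spike. Parameter values are generic illustrations, not those used in the paper:

      import numpy as np

      def aeif_run(I, a=2.0, b=60.0, dt=0.05, T=1000.0,
                   C=200.0, gL=10.0, EL=-70.0, VT=-50.0, DT=2.0,
                   Vr=-60.0, Vcut=0.0, tau_w=200.0):
          """Adaptive exponential integrate-and-fire neuron (forward Euler integration).
          a [nS]: subthreshold adaptation, b [pA]: spike-triggered adaptation."""
          n = int(T / dt)
          V, w = EL, 0.0
          spikes = []
          for i in range(n):
              dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
              dw = (a * (V - EL) - w) / tau_w
              V += dt * dV
              w += dt * dw
              if V >= Vcut:                 # spike: reset voltage, increment adaptation current
                  V = Vr
                  w += b
                  spikes.append(i * dt)
          return spikes

      print("spike count:", len(aeif_run(I=500.0)))   # constant input current in pA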
  • Source
    Robert Pröpper, Klaus Obermayer
    ABSTRACT: Spyke Viewer is an open source application designed to help researchers analyze data from electrophysiological recordings or neural simulations. It provides a graphical data browser and supports finding and selecting relevant subsets of the data. Users can interact with the selected data using an integrated Python console or plugins. Spyke Viewer includes plugins for several common visualizations and allows users to easily extend the program by writing their own plugins. New plugins are automatically integrated with the graphical interface. Additional plugins can be downloaded and shared on a dedicated website.
    Frontiers in Neuroinformatics 11/2013; 7:26. DOI:10.3389/fninf.2013.00026
  • ABSTRACT: According to the World Health Organization, about 2 billion people drink alcohol. Excessive alcohol consumption can result in alcohol addiction, which is one of the most prevalent neuropsychiatric diseases afflicting our society today. Prevention and intervention of alcohol binging in adolescents and treatment of alcoholism are major unmet challenges affecting our health-care system and society alike. Our newly formed German SysMedAlcoholism consortium is using a new systems medicine approach and intends (1) to define individual neurobehavioral risk profiles in adolescents that are predictive of alcohol use disorders later in life and (2) to identify new pharmacological targets and molecules for the treatment of alcoholism. To achieve these goals, we will use omics information from epigenomics, genetics, transcriptomics, neurodynamics, global neurochemical connectomes and neuroimaging (IMAGEN; Schumann et al.) to feed mathematical prediction modules provided by two Bernstein Centers for Computational Neuroscience (Berlin and Heidelberg/Mannheim), the results of which will subsequently be functionally validated in independent clinical samples and appropriate animal models. This approach will lead to new early intervention strategies and identify innovative molecules for relapse prevention that will be tested in experimental human studies. This research program will ultimately help in consolidating addiction research clusters in Germany that can effectively conduct large clinical trials, implement early intervention strategies and impact political and healthcare decision makers.
    Addiction Biology 11/2013; 18(6):883-896. DOI:10.1111/adb.12109 · 5.93 Impact Factor
  • Source
    ABSTRACT: We analyze zero-lag and cluster synchrony of delay-coupled nonsmooth dynamical systems by extending the master stability approach, and apply this to networks of adaptive threshold-model neurons. For a homogeneous population of excitatory and inhibitory neurons we find (i) that subthreshold adaptation stabilizes or destabilizes synchrony depending on whether the recurrent synaptic excitatory or inhibitory couplings dominate, and (ii) that synchrony is always unstable for networks with balanced recurrent synaptic inputs. If couplings are not too strong, synchronization properties are similar for very different coupling topologies, i.e., random connections or spatial networks with localized connectivity. We generalize our approach for two subpopulations of neurons with nonidentical local dynamics, including bursting, for which activity-based adaptation controls the stability of cluster states, independent of a specific coupling topology.
    Physical Review E 10/2013; 88(4-1):042713. DOI:10.1103/PhysRevE.88.042713 · 2.33 Impact Factor
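    For orientation, the standard master stability setup that this work extends to nonsmooth, delayed threshold dynamics can be written as follows (a generic textbook form, not the paper's exact notation): for a network \dot{x}_i = F(x_i) + \sigma \sum_j A_{ij} H(x_j(t-\tau)) with synchronized solution s(t), a perturbation along an eigenmode of the coupling matrix with eigenvalue \mu_k obeys the variational equation

      \dot{\xi}_k(t) = DF(s(t))\, \xi_k(t) + \sigma \mu_k\, DH(s(t-\tau))\, \xi_k(t-\tau),

    and the mode is stable when the largest Lyapunov exponent of this equation is negative. For the adaptive threshold-model neurons considered here, the reset at spike times is nonsmooth, so the variational equations must additionally be supplemented with transition (jump) conditions at the threshold crossings, as derived in the supplementary dataset listed below.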
  • Source
    Moritz Augustin, Josef Ladenbauer, Klaus Obermayer
    BMC Neuroscience; 07/2013
  • ABSTRACT: Derivation of the transition conditions for the variational equations for zero-lag and cluster synchrony.
  • Source
    Moritz Augustin, Josef Ladenbauer, Klaus Obermayer
    ABSTRACT: Neural mass signals from recordings often show oscillations with frequencies ranging from <1 to 100 Hz. Fast rhythmic activity in the beta and gamma range can be generated by network-based mechanisms such as recurrent synaptic excitation-inhibition loops. Slower oscillations might instead depend on neuronal adaptation currents whose timescales range from tens of milliseconds to seconds. Here we investigate how the dynamics of such adaptation currents contribute to spike rate oscillations and resonance properties in recurrent networks of excitatory and inhibitory neurons. Based on a network of sparsely coupled spiking model neurons with two types of adaptation current and conductance-based synapses with heterogeneous strengths and delays we use a mean-field approach to analyze oscillatory network activity. For constant external input, we find that spike-triggered adaptation currents provide a mechanism to generate slow oscillations over a wide range of adaptation timescales as long as recurrent synaptic excitation is sufficiently strong. Faster rhythms occur when recurrent inhibition is slower than excitation and oscillation frequency increases with the strength of inhibition. Adaptation facilitates such network-based oscillations for fast synaptic inhibition and leads to decreased frequencies. For oscillatory external input, adaptation currents amplify a narrow band of frequencies and cause phase advances for low frequencies in addition to phase delays at higher frequencies. Our results therefore identify the different key roles of neuronal adaptation dynamics for rhythmogenesis and selective signal propagation in recurrent networks.
    Frontiers in Computational Neuroscience 02/2013; 7:9. DOI:10.3389/fncom.2013.00009 · 2.23 Impact Factor
  • Source
    ABSTRACT: [Figure, panels A-E: loose-patch recordings of RFP-expressing PV+ neurons under two-photon guidance with a patch pipette containing green dye (Alexa 488); averaged, normalized spikes from RFP+ (n = 29) and RFP− (n = 12) neurons show the characteristic fast-spiking shape of PV+ cells, and the two groups differ in peak-to-valley amplitude ratio and spike width; the orientation selectivity of PV+ neurons is multimodally distributed, with a large untuned group lowering the mean OSI of RFP+ neurons and a second mode around OSI = 0.8-1.0 suggesting a sharply tuned PV+ subtype.] One of the most prominent stimulus-specific output features encoded in the primary visual cortex (V1) is the orientation selectivity and tuning of the input. Several recent in-vivo experimental studies of mouse visual cortex have found that inhibitory cells of all subtypes are broadly tuned for orientation, contrasting with the findings of many other studies in higher mammals and rodents, which have shown the existence of inhibitory neurons that are as sharply tuned as excitatory neurons. Two critical questions naturally emerge from these contrasting findings: (1) How do output responses such as orientation selectivity compare with those in previously described species? (2) What are the synaptic and network mechanisms behind the sharpening of orientation selectivity in the mouse visual cortex? Here, we investigate these questions in a computational framework with a recurrent network model of rodent primary visual cortex that lacks a functional map. Synapses with and without astrocytic mechanisms are incorporated independently into a recurrent network model consisting of excitatory and inhibitory populations with orientation tuning organized in a "salt-and-pepper" manner. Furthermore, we incorporate differential afferent input to inhibitory cells, motivated by new experimental findings of differential output responses of soma-targeting subtypes. Layer 2/3 excitatory cells are connected preferentially to neighboring cells with similar orientation tuning. Network simulations reveal that combined feedforward drive with precise fine-scale lateral excitation and inhibition predicts a range of orientation tuning for both excitatory and inhibitory neurons placed in layer 2/3 of primary visual cortex. To further constrain our network parameters, we estimate p-values using the Kolmogorov-Smirnov test (K-S test) over the entire range of recurrent excitation and inhibition values (a minimal sketch of this comparison is given after this entry). Based on the estimated p-values we infer that there are several points in different operational regimes of this network under sensory drive that agree well with several recent experimental observations. In particular, there are several points in the recurrent regime of this network that yield significant p-values, an operational regime in which the network parameters most likely generate sharp orientation tuning, particularly within orientation representations with diverse local neighborhoods. Afferent input specificity could explain sharp tuning among a subtype of inhibitory cells, and feature-specific lateral connectivity combined with afferent specificity yields network parameters that agree well with experimental OSI distributions for membrane-potential and conductance selectivity. Moreover, astrocytic modulatory mechanisms, such as differentiated glutamate decay times for both connection types, can lead to an enhanced response at the preferred orientation and a broadening of tuning.
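    A minimal sketch of the kind of model-versus-data comparison mentioned above, using the two-sample Kolmogorov-Smirnov test on OSI distributions; the arrays here are random placeholders, not the actual model or experimental data:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      osi_experiment = rng.beta(2, 2, size=80)      # placeholder experimental OSI values in [0, 1]
      osi_model = rng.beta(2.2, 2.0, size=200)      # placeholder OSI values from one network parameter set

      ks_stat, p_value = stats.ks_2samp(osi_model, osi_experiment)
      print(f"K-S statistic = {ks_stat:.3f}, p = {p_value:.3f}")
      # Parameter sets whose simulated OSI distribution is not significantly different from the
      # data (large p) would be retained as compatible operating points of the network.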
  • Source
    ABSTRACT: Primary visual cortex (V1) provides crucial insights into the selectivity and emergence of specific output features such as orientation tuning. The tuning and selectivity of cortical neurons in mouse visual cortex have not been unequivocally resolved so far. While many in-vivo experimental studies found inhibitory neurons of all subtypes to be broadly tuned for orientation, other studies report inhibitory neurons that are as sharply tuned as excitatory neurons. These diverging findings about the selectivity of excitatory and inhibitory cortical neurons prompted us to ask the following questions: (1) How different or similar is the cortical computation compared with that in previously described species that rely on an orientation map? (2) What is the network mechanism underlying the sharpening of orientation selectivity in the mouse primary visual cortex? Here, we investigate the above questions in a computational framework with a recurrent network composed of Hodgkin-Huxley (HH) point neurons. Our cortical network with random connectivity alone could not account for all the experimental observations, which led us to hypothesize (a) orientation-dependent connectivity and (b) feedforward afferent specificity to understand the orientation selectivity of V1 neurons in mouse. Using the population orientation selectivity index (OSI; one common definition is sketched below) as a measure of neuronal selectivity to stimulus orientation, we test each hypothesis separately and in combination against experimental data. Based on our analysis of orientation selectivity (OS) data we find a good fit of network parameters in a model based on afferent specificity and connectivity that scales with feature similarity. We conclude that this particular model class best supports data sets of orientation selectivity of excitatory and inhibitory neurons in layer 2/3 of mouse primary visual cortex.
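    One common definition of the OSI used in such comparisons (the underlying studies may use a variant, e.g. a circular-variance-based measure) is

      \mathrm{OSI} = \frac{R_{\mathrm{pref}} - R_{\mathrm{orth}}}{R_{\mathrm{pref}} + R_{\mathrm{orth}}},

    where R_pref and R_orth denote the responses at the preferred orientation and at the orthogonal orientation; OSI = 0 corresponds to an untuned cell and OSI = 1 to a maximally selective one.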

Publication Stats

3k Citations
275.23 Total Impact Points

Institutions

  • 1970–2014
    • Technische Universität Berlin
      • Department of Software Engineering and Theoretical Computer Science
      • School IV Electrical Engineering and Computer Science
      Berlin, Germany
  • 2005–2013
    • Bernstein Center for Computational Neuroscience Berlin
      Berlin, Germany
  • 1995
    • Salk Institute
      La Jolla, California, United States
  • 1994
    • Bielefeld University
      • Faculty of Technology
      Bielefeld, North Rhine-Westphalia, Germany