Frontiers in Computational Neuroscience

Publisher: Frontiers Research Foundation


  • Material type
    Document, Internet resource
  • Document type
    Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

Frontiers Research Foundation

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Authors own copyright
    • Published source must be acknowledged
    • Publisher's version/PDF must be used for post-print
    • Set statement to accompany deposit: [This document is protected by copyright and was first published by Frontiers. All rights reserved. It is reproduced with permission.]
  • Classification
    green

Publications in this journal

  • Source
    ABSTRACT: The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these "conventional" graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of Caenorhabditis elegans, were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque, and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks.
    Frontiers in Computational Neuroscience 01/2014; 7:189.
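The normalized Laplacian spectrum discussed in this abstract can be computed directly from a network's adjacency matrix. A minimal sketch (assuming an undirected, connected network with no isolated nodes; the function name and the toy ring network are illustrative, not from the paper):

```python
import numpy as np

def normalized_laplacian_spectrum(adj):
    """Eigenvalue spectrum of the normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2}, for a symmetric adjacency matrix
    with no isolated nodes (degrees must be nonzero)."""
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    return np.sort(np.linalg.eigvalsh(lap))

# Toy example: a 4-node ring network.
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]])
spectrum = normalized_laplacian_spectrum(ring)
# Eigenvalues of the normalized Laplacian always lie in [0, 2],
# and 0 is present for any connected graph.
```

The spectrum is a whole-network signature: two networks can be compared by comparing their (smoothed) eigenvalue distributions, without matching up individual nodes.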
  • ABSTRACT: In a previous study, Harris et al. (2002) found disruption of vibrotactile short-term memory after applying single-pulse transcranial magnetic stimulation (TMS) to primary somatosensory cortex (SI) early in the maintenance period, and suggested that this demonstrated a role for SI in vibrotactile memory storage. While such a role is compatible with recent suggestions that sensory cortex is the storage substrate for working memory, it stands in contrast to a relatively large body of evidence from human EEG and single-cell recording in primates that instead points to prefrontal cortex as the storage substrate for vibrotactile memory. In the present study, we use computational methods to demonstrate how Harris et al.'s results can be reproduced by TMS-induced activity in sensory cortex and subsequent feedforward interference with memory traces stored in prefrontal cortex, thereby reconciling discordant findings in the tactile memory literature.
    Frontiers in Computational Neuroscience 01/2014; 8:23.
  • ABSTRACT: Top-down attention has often been studied separately in the contexts of either optimal population coding or biasing of visual search. Yet both are intimately linked, as they entail optimally modulating sensory variables in neural populations according to top-down goals. Designing experiments to probe top-down attentional modulation is difficult because non-linear population dynamics are hard to predict in the absence of a concise theoretical framework. Here, we describe a unified framework that encompasses both contexts. Our work sheds light onto the ongoing debate on whether attention modulates neural response gain, tuning width, and/or preferred feature. We evaluate the framework by conducting simulations for two tasks: (1) classification (discrimination) of two stimuli s_a and s_b and (2) searching for a target T among distractors D. Results demonstrate that all of gain, tuning, and preferred feature modulation happen to different extents, depending on stimulus conditions and task demands. The theoretical analysis shows that task difficulty (linked to the difference Δ between s_a and s_b, or between T and D) is a crucial factor in optimal modulation, with different effects in discrimination vs. search. Further, our framework allows us to quantify the relative utility of neural parameters. In easy tasks (when Δ is large compared to the density of the neural population), modulating gains and preferred features is sufficient to yield nearly optimal performance; however, in difficult tasks (smaller Δ), modulating tuning width becomes necessary to improve performance. This suggests that the conflicting reports from different experimental studies may be due to differences in tasks and in their difficulties. We further propose future electrophysiology experiments to observe different types of attentional modulation in the same neuron.
    Frontiers in Computational Neuroscience 01/2014; 8:34.
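The three candidate forms of attentional modulation named in this abstract (gain, tuning width, preferred feature) can be illustrated on a standard Gaussian tuning-curve population. A hedged sketch, not the paper's model; all parameter values and names are illustrative:

```python
import numpy as np

def tuning_response(theta, prefs, gain=1.0, width=20.0):
    """Gaussian tuning curves: responses of a population with preferred
    features `prefs` to a stimulus at `theta` (arbitrary feature units)."""
    return gain * np.exp(-0.5 * ((theta - prefs) / width) ** 2)

prefs = np.linspace(0.0, 180.0, 19)        # preferred features of the population
baseline = tuning_response(90.0, prefs)    # unattended responses to a stimulus at 90

# The three hypothesized attentional effects, applied one at a time:
gain_mod  = tuning_response(90.0, prefs, gain=1.5)   # response gain increased
sharp_mod = tuning_response(90.0, prefs, width=10.0) # tuning width narrowed
shift_mod = tuning_response(90.0, prefs + 5.0)       # preferred features shifted
```

Comparing population responses such as these under different task difficulties Δ is one way to ask which modulation most improves downstream discrimination.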
  • Source
    ABSTRACT: The recurrent interaction among orientation-selective neurons in the primary visual cortex (V1) is suited to enhance contours in a noisy visual scene. Motion is known to have a strong pop-out effect in perceiving contours, but how motion-sensitive neurons in V1 support contour detection remains largely elusive. Here we suggest how the various types of motion-sensitive neurons observed in V1 should be wired together in a micro-circuitry to optimally extract contours in the visual scene. Motion-sensitive neurons can be selective about the direction of motion occurring at some spot or respond equally to all directions (pandirectional). We show that, in the light of figure-ground segregation, direction-selective motion neurons should additively modulate the corresponding orientation-selective neurons with preferred orientation orthogonal to the motion direction. In turn, to maximally enhance contours, pandirectional motion neurons should multiplicatively modulate all orientation-selective neurons with co-localized receptive fields. This multiplicative modulation amplifies the local V1-circuitry among co-aligned orientation-selective neurons for detecting elongated contours. We suggest that the additive modulation by direction-specific motion neurons is achieved through synaptic projections to the somatic region, and the multiplicative modulation by pandirectional motion neurons through projections to the apical region of orientation-specific pyramidal neurons. For the purpose of contour detection, the V1-intrinsic integration of motion information is advantageous over a downstream integration as it exploits the recurrent V1-circuitry designed for that task.
    Frontiers in Computational Neuroscience 01/2014; 8:67.
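The additive vs. multiplicative distinction in this abstract can be made concrete on a toy vector of orientation-selective responses. A minimal sketch; the function names, response values, and modulation strengths are assumptions for illustration, not the paper's circuit model:

```python
import numpy as np

# Responses of four co-localized orientation-selective neurons
# (preferred orientations in degrees).
orientations = np.array([0.0, 45.0, 90.0, 135.0])
v1_response = np.array([0.2, 0.8, 0.3, 0.1])

def direction_selective_modulation(resp, orientations, motion_dir, strength=0.3):
    """Additive boost to the neuron whose preferred orientation is
    orthogonal to the motion direction (the proposed somatic input)."""
    orthogonal = (motion_dir + 90.0) % 180.0
    boost = strength * (orientations == orthogonal)
    return resp + boost

def pandirectional_modulation(resp, motion_present, gain=2.0):
    """Multiplicative amplification of all co-localized orientation
    responses (the proposed apical input)."""
    return resp * (gain if motion_present else 1.0)

boosted = direction_selective_modulation(v1_response, orientations, motion_dir=0.0)
amplified = pandirectional_modulation(v1_response, motion_present=True)
```

The additive term singles out one orientation channel, while the multiplicative term scales every channel equally, which is why only the latter can amplify the recurrent contour-enhancing interactions as a whole.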
  • Source
    ABSTRACT: In everyday life, humans and animals often have to base decisions on infrequent relevant stimuli with respect to frequent irrelevant ones. When research in neuroscience mimics this situation, the effect of this imbalance in stimulus classes on performance evaluation has to be considered. This is most obvious for the often used overall accuracy, because the proportion of correct responses is governed by the more frequent class. This imbalance problem has been widely debated across disciplines, and of the treatments discussed, this review focuses on performance estimation. For this, a more universal view is taken: an agent performing a classification task. Commonly used performance measures are characterized when used with imbalanced classes. Metrics like Accuracy, F-Measure, Matthews Correlation Coefficient, and Mutual Information are affected by imbalance, while other metrics do not have this drawback, like AUC, d-prime, Balanced Accuracy, Weighted Accuracy, and G-Mean. It is pointed out that one is not restricted to this group of metrics, but the sensitivity to the class ratio has to be kept in mind for a proper choice. Selecting an appropriate metric is critical to avoid drawing misleading conclusions.
    Frontiers in Computational Neuroscience 01/2014; 8:43.
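The accuracy pitfall this abstract describes is easy to demonstrate: an agent that always reports the frequent class scores high overall accuracy but chance-level balanced accuracy. A small sketch (the 90/10 class split and variable names are illustrative):

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Proportion of correct responses; dominated by the frequent class."""
    return np.mean(y_true == y_pred)

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; insensitive to the class ratio."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return np.mean(recalls)

# Imbalanced toy experiment: 90 irrelevant (0) vs. 10 relevant (1) trials.
y_true = np.array([0] * 90 + [1] * 10)
trivial = np.zeros(100, dtype=int)  # agent that always answers "irrelevant"

acc = accuracy(y_true, trivial)            # looks good despite learning nothing
bacc = balanced_accuracy(y_true, trivial)  # exposes chance-level performance
```

Here overall accuracy is 0.90 while balanced accuracy is 0.50, which is exactly the kind of misleading conclusion the review warns against.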
  • Source
    ABSTRACT: Associative learning of temporally disparate events is of fundamental importance for perceptual and cognitive functions. Previous studies of the neural mechanisms of such association have mainly focused on individual neurons or synapses, often with an assumption that there is persistent neural firing activity that decays slowly. However, experimental evidence supporting such firing activity for associative learning is still inconclusive. Here we present a novel, alternative account of associative learning in the context of classical conditioning, demonstrating that it is an emergent property of a spatially extended, spiking neural circuit with spike-timing dependent plasticity and short-term synaptic depression. We show that both the conditioned and unconditioned stimuli can be represented by spike sequences which are produced by wave patterns propagating through the network, and that the interactions of these sequences are timing-dependent. After training, the occurrence of the sequence encoding the conditioned stimulus (CS) naturally regenerates the sequence encoding the unconditioned stimulus (US), thereby resulting in an association between them. Such associative learning based on interactions of spike sequences can happen even when the timescale of their separation is significantly larger than that of individual neurons. In particular, our network model is able to account for the temporal contiguity property of classical conditioning, as observed in behavioral studies. We further show that this emergent associative learning in our network model is quite robust to noise perturbations. Our results therefore demonstrate that associative learning of temporally disparate events can happen in a distributed way at the level of neural circuits.
    Frontiers in Computational Neuroscience 01/2014; 8:79.
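The spike-timing dependent plasticity (STDP) rule at the core of this model is usually written as an exponential pair-based kernel: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. A generic sketch of that kernel (amplitudes and the 20 ms time constant are conventional textbook values, not taken from the paper):

```python
import numpy as np

def stdp_update(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for a spike-time difference
    dt = t_post - t_pre (ms): potentiation if pre leads post,
    depression if post leads pre."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

# Presynaptic spike at 10 ms; postsynaptic spikes at 15 ms and 5 ms.
ltp = stdp_update(15.0 - 10.0)  # pre before post -> potentiation
ltd = stdp_update(5.0 - 10.0)   # post before pre -> depression
```

In the model, repeated CS-then-US pairings let propagating spike sequences drive this asymmetric rule, so CS-activated synapses onto the US pathway are selectively potentiated.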
