Frontiers in Computational Neuroscience Journal Impact Factor & Information

Publisher: Frontiers Research Foundation (Frontiers)

Journal description

Current impact factor: 2.23

Impact Factor Rankings

2015 Impact Factor Available summer 2015
2013 / 2014 Impact Factor 2.233
2012 Impact Factor 2.481
2011 Impact Factor 2.147
2010 Impact Factor 2.586

Impact factor over time

[Chart of impact factor by year omitted; values as listed in the rankings above]

Additional details

5-year impact 2.61
Cited half-life 2.10
Immediacy index 0.81
Eigenfactor 0.00
Article influence 1.17
ISSN 1662-5188
OCLC 250614660
Material type Document, Internet resource
Document type Internet Resource, Computer File, Journal / Magazine / Newspaper

Publisher details

Frontiers

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author cannot archive a post-print version
  • Conditions
    • On open access repositories
    • Authors retain copyright
    • Creative Commons Attribution License
    • Published source must be acknowledged
    • Publisher's version/PDF must be used for post-print
    • Set statement to accompany [This document is protected by copyright and was first published by Frontiers. All rights reserved. It is reproduced with permission.]
    • Articles are placed in PubMed Central immediately on behalf of authors.
    • Publisher last contacted on 04/10/2013
    • All titles are open access journals
  • Classification
    • green

Publications in this journal

  • Source
    ABSTRACT: A substantial number of works have aimed at modeling the receptive field properties of the primary visual cortex (V1). Their evaluation criterion is usually the similarity of the model response properties to responses recorded from biological organisms. However, as several algorithms have demonstrated some degree of similarity to biological data under the existing criteria, we focus on robustness against loss of information in the form of occlusions as an additional constraint for better understanding the algorithmic level of early vision in the brain. We investigate the influence of competition mechanisms on this robustness. To this end, we compared four methods employing different competition mechanisms: independent component analysis, non-negative matrix factorization with a sparseness constraint, predictive coding/biased competition, and a Hebbian neural network with lateral inhibitory connections. Each of these methods is known to be capable of developing receptive fields comparable to those of V1 simple cells. Since directly measuring the robustness of simple-cell-like receptive fields against occlusion is difficult, we measure robustness via classification accuracy on the MNIST handwritten digit dataset: all methods were trained on the MNIST training set and tested on the MNIST test set under different levels of occlusion. We observe that methods employing competitive mechanisms are more robust against loss of information. The kind of competition mechanism also plays an important role: global feedback inhibition, as employed in predictive coding/biased competition, has an advantage over local lateral inhibition learned by an anti-Hebbian rule.
    Frontiers in Computational Neuroscience 03/2015; 9. DOI:10.3389/fncom.2015.00035
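    The evaluation protocol this abstract describes — train on clean MNIST, then measure classification accuracy at increasing occlusion levels — can be sketched generically. The square-patch occlusion model and the `encode`/`classify` interface below are illustrative assumptions, not the article's actual pipeline:

    ```python
    import numpy as np

    def occlude(images, fraction, rng):
        """Zero out a random square patch covering ~`fraction` of each 28x28 image."""
        out = images.copy()
        side = int(round(28 * np.sqrt(fraction)))
        for img in out:
            r = rng.integers(0, 28 - side + 1)
            c = rng.integers(0, 28 - side + 1)
            img.reshape(28, 28)[r:r + side, c:c + side] = 0.0
        return out

    def robustness_curve(encode, classify, test_images, test_labels, fractions, seed=0):
        """Classification accuracy as a function of occlusion level.

        `encode` maps raw images to the learned representation (e.g. ICA or
        NMF coefficients); `classify` is a classifier trained on encodings
        of the clean training set.
        """
        rng = np.random.default_rng(seed)
        accs = []
        for f in fractions:
            occluded = occlude(test_images, f, rng)
            preds = classify(encode(occluded))
            accs.append(float(np.mean(preds == test_labels)))
        return accs
    ```

    A flatter robustness curve under this protocol would indicate a representation that degrades more gracefully as pixels are lost.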
  •
    ABSTRACT: Learning is a complex brain function operating on different time scales, from milliseconds to years, which induces enduring changes in brain dynamics. The brain also undergoes continuous "spontaneous" shifts in states, which, amongst others, are characterized by rhythmic activity of various frequencies. Besides the most obvious distinct modes of waking and sleep, wake-associated brain states comprise modulations of vigilance and attention. Recent findings show that certain brain states, particularly during sleep, are essential for learning and memory consolidation. Oscillatory activity plays a crucial role on several spatial scales, for example in plasticity at a synaptic level or in communication across brain areas. However, the underlying mechanisms and computational rules linking brain states and rhythms to learning, though relevant for our understanding of brain function and therapeutic approaches in brain disease, have not yet been elucidated. Here we review known mechanisms of how brain states mediate and modulate learning by their characteristic rhythmic signatures. To understand the critical interplay between brain states, brain rhythms, and learning processes, a wide range of experimental and theoretical work in animal models and human subjects from the single synapse to the large-scale cortical level needs to be integrated. By discussing results from experiments and theoretical approaches, we illuminate new avenues for utilizing neuronal learning mechanisms in developing tools and therapies, e.g., for stroke patients and to devise memory enhancement strategies for the elderly.
    Frontiers in Computational Neuroscience 02/2015; 9:1. DOI:10.3389/fncom.2015.00001
  • Source
    ABSTRACT: Recent innovations in neuroimaging technology have provided opportunities for researchers to investigate connectivity in the human brain by examining the anatomical circuitry as well as functional relationships between brain regions. Existing statistical approaches for connectivity generally examine resting-state or task-related functional connectivity (FC) between brain regions or separately examine structural linkages. As a means to determine brain networks, we present a unified Bayesian framework for analyzing FC utilizing the knowledge of associated structural connections, which extends an approach by Patel et al. (2006a) that considers only functional data. We introduce an FC measure that rests upon assessments of functional coherence between regional brain activity identified from functional magnetic resonance imaging (fMRI) data. Our structural connectivity (SC) information is drawn from diffusion tensor imaging (DTI) data, which is used to quantify probabilities of SC between brain regions. We formulate a prior distribution for FC that depends upon the probability of SC between brain regions, with this dependence adhering to structural-functional links revealed by our fMRI and DTI data. We further characterize the functional hierarchy of functionally connected brain regions by defining an ascendancy measure that compares the marginal probabilities of elevated activity between regions. In addition, we describe topological properties of the network, which is composed of connected region pairs, by performing graph theoretic analyses. We demonstrate the use of our Bayesian model using fMRI and DTI data from a study of auditory processing. We further illustrate the advantages of our method by comparisons to methods that only incorporate functional information.
    Frontiers in Computational Neuroscience 02/2015; 9:22. DOI:10.3389/fncom.2015.00022
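    The functional-coherence and ascendancy ideas this abstract attributes to Patel et al. (2006a) can be illustrated on binarized regional activity. The statistics below are a simplified, unnormalized sketch of that idea — not the paper's full Bayesian model, and the variable names are hypothetical:

    ```python
    import numpy as np

    def patel_style_measures(x, y):
        """Illustrative connectivity and ascendancy statistics for two regions.

        x, y: boolean time series marking "elevated activity" per fMRI volume.
        Returns (kappa, tau): kappa compares the joint probability of
        co-activation against what independence would predict; tau compares
        the marginal activation probabilities (a simple ascendancy measure).
        """
        x = np.asarray(x, dtype=bool)
        y = np.asarray(y, dtype=bool)
        p_x, p_y = x.mean(), y.mean()
        p_xy = (x & y).mean()
        kappa = p_xy - p_x * p_y   # > 0: regions co-activate more than chance
        tau = p_x - p_y            # > 0: region x is elevated more often than y
        return kappa, tau
    ```

    In the article's framework, the structural-connectivity probability from DTI additionally informs the prior on whether such a functional link exists at all.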
  •
    ABSTRACT: Tuning curves and receptive fields are widely used to describe the selectivity of sensory neurons, but the relationship between firing rates and information is not always intuitive. Neither high firing rates nor high tuning curve gradients necessarily mean that stimuli at that part of the tuning curve are well represented by a neuron. Recent research has shown that trial-to-trial variability (noise) and population size can strongly affect which stimuli are most precisely represented by a neuron in the context of a population code (the best-encoded stimulus), and that different measures of information can give conflicting indications. Specifically, the Fisher information is greatest where the tuning curve gradient is greatest, such as on the flanks of peaked tuning curves, but the stimulus-specific information (SSI) is greatest at the tuning curve peak for small populations with high trial-to-trial variability. Previous research in this area has focussed upon unimodal (peaked) tuning curves, and in this article we extend these analyses to monotonic tuning curves. In addition, we examine how stimulus spacing in forced choice tasks affects the best-encoded stimulus. Our results show that, regardless of the tuning curve, Fisher information correctly predicts the best-encoded stimulus for large populations and where the stimuli are closely spaced in forced choice tasks. In smaller populations with high variability, or in forced choice tasks with widely spaced choices, the best-encoded stimulus falls at the peak of unimodal tuning curves, but is more variable for monotonic tuning curves. Task, population size and variability all need to be considered when assessing which stimuli a neuron represents, but the best-encoded stimulus can be estimated on a case-by-case basis using commonly available computing facilities.
    Frontiers in Computational Neuroscience 02/2015; 9:18. DOI:10.3389/fncom.2015.00018
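    For a population of independent Poisson neurons, the Fisher information discussed in this abstract reduces to a sum of f'(s)² / f(s) terms over the tuning curves f. The sketch below computes this numerically; the sigmoid (monotonic) tuning curve and its parameters are illustrative choices, not taken from the article:

    ```python
    import numpy as np

    def fisher_information(s, tuning_curves, ds=1e-4):
        """Fisher information at stimulus s for independent Poisson neurons.

        Under Poisson firing each neuron contributes f'(s)^2 / f(s), and the
        population information is the sum. Derivatives are taken by central
        difference. tuning_curves: callables mapping stimulus -> mean rate (> 0).
        """
        total = 0.0
        for f in tuning_curves:
            df = (f(s + ds) - f(s - ds)) / (2 * ds)  # numerical derivative
            total += df**2 / f(s)
        return total

    def sigmoid(s, s0=0.0, gain=1.0, r_max=50.0, r_base=1.0):
        """Illustrative monotonic (sigmoid) tuning curve with a baseline rate."""
        return r_base + r_max / (1.0 + np.exp(-gain * (s - s0)))
    ```

    Consistent with the abstract, this measure peaks where the gradient is steep (the flank of the sigmoid) rather than where the firing rate is highest, which is why Fisher information and the SSI can disagree about the best-encoded stimulus.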