Measuring sparseness in the brain: comment on Bowers (2009).

Department of Engineering, University of Leicester, LE1 7RH, England.
Psychological Review (Impact Factor: 7.72). 01/2010; 117(1):291-7. DOI: 10.1037/a0016917
Source: PubMed

ABSTRACT: Bowers challenged the common view in favor of distributed representations in psychological modeling and the main arguments given against localist and grandmother cell coding schemes. He revisited the results of several single-cell studies, arguing that they do not support distributed representations. We praise the contribution of Bowers (2009) for joining evidence from psychological modeling and neurophysiological recordings, but we disagree with several of his claims. In this comment, we argue that distinctions between distributed, localist, and grandmother cell coding can be troublesome with real data. Moreover, these distinctions seem to lie within the same continuum, and we argue that it may be sensible to characterize coding schemes with a sparseness measure. We further argue that there may not be a unique coding scheme implemented in all brain areas and for all possible functions. In particular, current evidence suggests that the brain may use distributed codes in primary sensory areas and sparser and invariant representations in higher areas.
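The abstract proposes characterizing coding schemes by a sparseness measure rather than a localist/distributed dichotomy. As a minimal sketch (the comment does not commit to one specific formula), here is the Treves-Rolls sparseness index, a standard choice in the single-cell literature:

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """Treves-Rolls sparseness a = (mean r)^2 / mean(r^2).
    a ~ 1/N for a cell responding to one of N stimuli (ultra-sparse,
    grandmother-cell-like); a = 1 for identical responses to every
    stimulus (fully distributed)."""
    r = np.asarray(rates, dtype=float)
    return r.mean() ** 2 / np.mean(r ** 2)

# Grandmother-cell-like profile: one strong response out of 100 stimuli
local = np.zeros(100)
local[0] = 10.0
# Distributed profile: the same response to all 100 stimuli
dist = np.full(100, 10.0)

print(treves_rolls_sparseness(local))  # 0.01 (= 1/N)
print(treves_rolls_sparseness(dist))   # 1.0
```

On this scale, the continuum described in the abstract runs from a ≈ 1/N (grandmother-cell-like coding) through intermediate sparse codes to a ≈ 1 (fully distributed), which is why a graded measure can sidestep the hard three-way distinction.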

  • ABSTRACT: A key insight from 50 years of neurophysiology is that some neurons in cortex respond to information in a highly selective manner. Why is this? We argue that selective representations support the coactivation of multiple "things" (e.g., words, objects, faces) in short-term memory, whereas nonselective codes are often unsuitable for this purpose. That is, the coactivation of nonselective codes often results in a blend pattern that is ambiguous; the so-called superposition catastrophe. We show that a recurrent parallel distributed processing network trained to code for multiple words at the same time over the same set of units learns localist letter and word codes, and the number of localist codes scales with the level of the superposition. Given that many cortical systems are required to coactivate multiple things in short-term memory, we suggest that the superposition constraint plays a role in explaining the existence of selective codes in cortex. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
    Psychological Review 02/2014; 121(2). DOI:10.1037/a0035943 · 7.72 Impact Factor
  • ABSTRACT: Interactive activation and competition models (IAMs) can account not only for behavioral data from implicit memory tasks but also for brain data. We start with a discussion of standards for developing and evaluating cognitive models, followed by example demonstrations. In doing so, we relate IAM representations to word length, sequence, frequency, repetition, and orthographic neighborhood effects in behavioral, electrophysiological, and neuroimaging studies along the ventral visual stream. We then examine to what extent lexical competition can account for anterior cingulate cortex (ACC) activation and the N2/N400 complex. The subsequent section presents the Associative Read-Out Model (AROM), which extends the scope of IAMs by introducing explicit memory and semantic representations. Thereby, it can account for false memories, familiarity, and recollection, explaining why memory signal variances are greater for studied than for non-studied items. Since the AROM captures associative spreading across semantic long-term memory, it can also account for different temporal lobe functions and allows for item-level predictions of the left inferior frontal gyrus' BOLD response. Finally, we use the AROM to examine whether semantic cohesiveness can account for effects previously ascribed to affective word features, i.e., emotional valence, and show that this is the case for positive, but not for negative, valence.
    Neuroscience & Biobehavioral Reviews 07/2014; DOI:10.1016/j.neubiorev.2014.06.011 · 10.28 Impact Factor
  • ABSTRACT: Recordings in the human medial temporal lobe have found many neurons that respond to pictures (and related stimuli) of just one particular person out of those presented. It has been proposed that these are concept cells, responding to just a single concept. However, a direct experimental test of the concept-cell idea appears impossible, because it would require measuring the response of each cell to enormous numbers of other stimuli. Here we propose a new statistical method of analysis that gives a more powerful way to assess how close the data are to the concept-cell idea. It exploits the large number of sampled neurons to give sensitivity to situations where the average response sparsity is much less than one response over the number of presented stimuli. We show that a conventional model, in which a single sparsity is postulated for all neurons, gives an extremely poor fit to the data. In contrast, a model with two dramatically different populations gives an excellent fit to data from the hippocampus and entorhinal cortex. In the hippocampus, one population comprises 7% of the cells, with a sparsity of 2.6%, whereas the much larger remaining fraction (93%) responds to only 0.1% of the stimuli. This results in an extreme bias in the reported responsiveness of neurons compared with that of a typical neuron. Finally, we show how to allow for the fact that some of the reported identified units correspond to multiple neurons, and find that our conclusions at the neural level are quantitatively changed but strengthened, with an even stronger difference between the two populations.
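The two-population analysis described in the last abstract above can be sketched as a mixture of binomial response counts. This is an illustrative simulation, not the authors' actual method or code: the population fractions and sparsities (7% of cells at 2.6%, 93% at 0.1%) are taken from the abstract, while the neuron and stimulus counts are invented for the demo.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
S, N = 100, 2000  # stimuli per neuron, neurons sampled (made-up values)
# Two-population ground truth, echoing the hippocampal figures above
p, a_hi, a_lo = 0.07, 0.026, 0.001
alphas = np.where(rng.random(N) < p, a_hi, a_lo)
k = rng.binomial(S, alphas)  # responses per neuron

def binom_pmf(ki, n, a):
    """P(K = ki) for K ~ Binomial(n, a)."""
    return comb(n, int(ki)) * a**ki * (1 - a)**(n - ki)

def loglik_single(a):
    """Log-likelihood under a single sparsity shared by all neurons."""
    return sum(np.log(binom_pmf(ki, S, a)) for ki in k)

def loglik_mixture(p, a1, a2):
    """Log-likelihood under a two-population mixture of sparsities."""
    return sum(np.log(p * binom_pmf(ki, S, a1) + (1 - p) * binom_pmf(ki, S, a2))
               for ki in k)

ll_one = loglik_single(k.mean() / S)    # MLE single sparsity = pooled response rate
ll_two = loglik_mixture(p, a_hi, a_lo)  # mixture at its generating parameters
print(ll_two > ll_one)  # True: the two-population model fits far better
```

The qualitative point survives the made-up sample sizes: when response counts are strongly bimodal, a single-sparsity binomial cannot match both the mass at zero and the tail of multi-response cells, so the mixture wins by a wide likelihood margin.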

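The superposition catastrophe mentioned in the first related abstract above can be illustrated in a few lines: superposing distributed patterns can yield a blend consistent with the wrong set of items, whereas one-hot (localist) codes keep the coactivated set recoverable. The toy patterns here are hypothetical, chosen only to make the ambiguity visible:

```python
import numpy as np

# Distributed binary codes over 4 units (hypothetical toy patterns)
dist_codes = {
    "A": np.array([1, 1, 0, 0]),
    "B": np.array([0, 0, 1, 1]),
    "C": np.array([1, 0, 1, 0]),
    "D": np.array([0, 1, 0, 1]),
}
# Coactivating A and B produces the same blend as C and D:
# the superposition catastrophe
blend_ab = np.maximum(dist_codes["A"], dist_codes["B"])
blend_cd = np.maximum(dist_codes["C"], dist_codes["D"])
print(np.array_equal(blend_ab, blend_cd))  # True: {A,B} vs {C,D} is ambiguous

# Localist (one-hot) codes over 4 units
local_codes = {name: np.eye(4, dtype=int)[i]
               for i, name in enumerate("ABCD")}
blend_ab = np.maximum(local_codes["A"], local_codes["B"])
blend_cd = np.maximum(local_codes["C"], local_codes["D"])
print(np.array_equal(blend_ab, blend_cd))  # False: each set blends uniquely
```

With localist units, the active set can always be read off the blend directly, which is the constraint that abstract proposes as a pressure toward selective codes.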