A cell assembly transmits the full likelihood distribution via an atemporal combinatorial spike code
Summary. The “neuron doctrine” says the individual neuron is the functional unit of meaning. For a single
source neuron, spike coding schemes can be based on spike rate or precise spike time(s). Both are fundamentally
temporal codes with two key limitations. 1) Assuming M different messages from (activation levels of) the source
neuron are possible, the decode window duration, T, must be ≥ M times a single spike’s duration. 2) Only one
message (activation level) can be sent at a time. If instead, we define the cell assembly (CA), i.e., a set of co-
active neurons, as the functional unit of meaning, then a message is carried by the set of spikes simultaneously
propagating in the bundle of axons leading from the CA’s neurons. This admits a more efficient, faster,
fundamentally atemporal, combinatorial coding scheme, which removes the above limitations. First, T becomes
independent of M, in principle shrinking to a single spike duration. Second, multiple messages can be sent at once; in fact, the entire similarity (thus, likelihood) distribution over all items stored in the coding field can be sent simultaneously. This
requires defining CAs as sets of fixed cardinality, Q, which allows the similarity structure over a set of items
stored as CAs to be represented by their intersection structure. Moreover, when any one CA is fully active (all Q
of its neurons are active), all other CAs stored in the coding field are partially active proportional to their
intersections with the fully active CA. If M concepts are stored, there are M! possible similarity orderings. Thus,
sending any one of those orderings sends log2(M!) bits, far exceeding the log2(M) bits sent by any single message
using temporal spike codes. This marriage of a fixed-size CA representation and atemporal coding scheme may
explain the speed and efficiency of probabilistic computation in the brain.
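The information-rate comparison above can be made concrete with a short calculation. This is a sketch only; equating an ordering with log2(M!) bits assumes all M! similarity orderings are equally likely and distinguishable by the decoder.

```python
import math

def bits_single_message(M):
    """Temporal code: one of M activation levels sent per decode window."""
    return math.log2(M)

def bits_ordering(M):
    """Atemporal CA code: one of M! similarity orderings sent per spike volley."""
    return math.log2(math.factorial(M))

for M in (4, 16, 64):
    print(M, round(bits_single_message(M), 2), round(bits_ordering(M), 2))
```

For M=4 the gap is already large (2 bits vs. about 4.58), and it widens rapidly with M.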
For a single source neuron, two types of spike codes are possible: rate (frequency) (Fig. 1b) and latency (e.g., latency of spike(s) relative to an event, e.g., the phase of gamma) (Fig. 1c). Both are fundamentally temporal, requiring a
decode window duration T much longer than a single spike. [n.b. no relation between T and axon length intended.]
Further, T must grow with the number of unique values that need to be reliably sent/decoded. But if information
is represented by Hebbian cell assemblies (CAs), a particular kind of distributed code wherein items are
represented by sets of co-active neurons chosen from a (typically much larger) coding field, then messages are
carried by sets of spikes propagating in bundles of axons. N.b.: Some population-based models remain
fundamentally temporal: the signal depends on spike rates of the afferent axons, e.g., [1-4] (not shown in Fig. 1).
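The dependence of T on the number of distinguishable values can be sketched with simple arithmetic. The spike duration below is illustrative, not a physiological claim.

```python
# For a rate code, distinguishing M levels by spike count within a decode
# window requires the window to accommodate up to M spikes, so
# T >= M * spike_duration (the 1 ms figure is purely illustrative).
spike_duration_ms = 1.0

for M in (2, 8, 32):
    T_min_ms = M * spike_duration_ms
    print(M, T_min_ms)
```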
Figure 1. Temporal vs. non-temporal spike coding concepts.
Distributed coding allows a fundamentally atemporal coding scheme where the signal is encoded in the
pattern of instantaneous sums of spikes arriving simultaneously via multiple afferent synapses onto the target
field neurons, in principle allowing T to shrink to the duration of a single (i.e., the first) spike. Fig. 1d illustrates one such scheme in which the fraction of active neurons in a source population carries the message [input summation (number of simultaneous spikes) shown next to the target cell for each of the four signal values]. [Figure 1 panel labels: b) Rate, c) Latency; Non-Temporal Population Codes: d) Variable-Size, e) Fixed-Size CA; Target Field 1 input sums for the four signals: 5 4 3 2 / 4 5 4 3 / 3 4 5 4 / 2 3 4 5.]
This variable-size population (a.k.a. “thermometer”) coding scheme has the benefit that all signals are sent in the
same short time, but it is not combinatorial in nature and has many limitations: a) uneven usage of the neurons;
b) different messages (signals) require different energies; c) individual neurons represent localistically, e.g., each
source field neuron represents a specific increment of a scalar encoded variable; d) the max number of
representable values (items/concepts) is the number of units comprising the coding field; and most importantly,
e) any single message sent represents only one value, e.g., a single value (level) of a scalar variable, implying that
any one message carries log2(M) bits, where M is the number of possible messages (values).
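A minimal sketch of the thermometer scheme of Fig. 1d makes limitations (a), (b), and (d) explicit. The field size N below is an assumption of the sketch.

```python
# Thermometer (variable-size) population code: level v of an N-level
# scalar variable is encoded by activating the first v of N source units.
N = 4

def thermometer(level):
    """Return the binary code for a level in 1..N."""
    return [1] * level + [0] * (N - level)

codes = [thermometer(v) for v in range(1, N + 1)]
print(codes)  # [[1,0,0,0], [1,1,0,0], [1,1,1,0], [1,1,1,1]]

# a) uneven usage: unit 0 is active in all N codes, unit N-1 in only one
usage = [sum(c[i] for c in codes) for i in range(N)]
print(usage)  # [4, 3, 2, 1]
# b) unequal energy: the spike count (message energy) varies with the level
# d) capacity: only N values are representable with N units (not combinatorial)
```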
In contrast, consider the fixed-size CA representation of Fig. 1e. This CA coding field is organized as Q=5
WTA competitive modules (CMs), each with K=4 binary units. Thus, all codes are of the same fixed size, Q=5.
As has been explained elsewhere and illustrated by the charts at the right of Fig. 1, if the learning algorithm preserves similarity, i.e., maps more similar values to more highly intersecting CAs, then any single CA represents (encodes) the similarity distribution over all items (values) stored in the field. Note: blue denotes active units not belonging to the fully active CA. If we can further assume that value (e.g., input pattern) similarity correlates with likelihood (as is reasonable for vast portions of input spaces having natural statistics), then any CA encodes not just the single item to which it was assigned during learning, but the detailed likelihood distribution over all items stored in the field. By “detailed distribution”, we mean the individual likelihoods of all stored items, not just a few higher-order statistics describing the distribution. Rinkus [7, 8] described a neurally plausible, unsupervised, single-trial, on-line, similarity-preserving learning algorithm that runs in fixed time, i.e., the number of steps needed to learn (store) an item remains fixed as the number of stored items grows, as does the number of steps needed to retrieve the most similar (likely) stored input.
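A minimal sketch (not the algorithm of [7, 8]) shows how a fixed-size CA's intersection structure carries a graded similarity, hence likelihood, distribution. The particular winner assignments below are hypothetical, chosen so that consecutive items share progressively fewer winners with the first.

```python
# Q=5 WTA modules, K=4 units each; a CA activates one winner per module.
# Hypothetical similarity-preserving assignment: each successive item
# differs from the previous one in one more module's winner.
winners = [[0, 0, 0, 0, 0], [0, 0, 0, 0, 1], [0, 0, 0, 1, 1], [0, 0, 1, 1, 1]]
cas = [{(m, w) for m, w in enumerate(ws)} for ws in winners]

def similarity_profile(active):
    """Intersection of the fully active CA with every stored CA."""
    return [len(cas[active] & ca) for ca in cas]

profile = similarity_profile(0)
print(profile)  # [5, 4, 3, 2]: all other CAs are partially active

# Normalizing yields a graded, likelihood-like distribution over stored items:
likelihood = [s / sum(profile) for s in profile]
```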
Crucially, since any one CA encodes the entire likelihood distribution, the set of single spikes sent from such
a code simultaneously transmits that entire distribution: the instantaneous sums at the target cells carry the
information. Note: whenever any CA is sent, 20 lines (axons) are active (black), meaning that all four target
cells will have Q (=5) active inputs. Thus, due to the combinatorial nature of the fixed-size CA code, the specific
values of the binary weights are essential to describing the code, unlike the other codes where the weights could
all be assumed to be 1. This emphasizes that for the combinatorial, fixed-size CA code, we need to view the
“channel”, i.e., the weight matrix, as having an internal structure that is changed during learning (by the signals
that traverse the weights), whereas for the other codes, the channel can be viewed as a “neutral bus”. Thus, for
the example of Fig. 1e, we assume: a) all weights are initially 0; b) the four associations, CA1 → target cell 1, CA2 → target cell 2, etc., were previously learned with single trials; and c) on those trials, coactive pre-post synapses were increased to wt=1. Thus, if CA1 is reactivated, target cell 1’s input sum will be 5 and the other cells’ sums will be as shown (to the left of the target cells). If CA2 is reactivated, target cell 2’s input sum will be 5, etc. [Black line: active, increased wt; dotted line: non-active, increased wt; gray line: non-increased wt.] The four target cells could be
embedded in a recurrent field with inhibitory infrastructure that would allow them to be read out sequentially in
descending input summation order. That implies that the full similarity (likelihood) order information over all
four stored items is sent in each of the four cases. Since there are 4! orderings of the four items, each such message, a volley of 20 simultaneous spikes sent from the five active CA neurons, sends log2(4!) ≈ 4.58 bits. I
suggest this marriage of fixed-size CAs and an atemporal first-spike coding scheme is a crucial advance beyond
prior population-based models, i.e., the “distributional encoding” models (see [9-11] for reviews), and may be
key to explaining the speed and efficiency of probabilistic computation in the brain.
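The Fig. 1e transmission step can be sketched end-to-end. The CA winner assignments below are hypothetical, chosen to reproduce the figure's intersection structure; everything else follows the assumptions stated in the text.

```python
import math

Q, K, N = 5, 4, 4   # 5 modules x 4 units = 20 source units; 4 target cells
winners = [[0, 0, 0, 0, 0], [0, 0, 0, 0, 1], [0, 0, 0, 1, 1], [0, 0, 1, 1, 1]]

def volley(ws):
    """20-bit spike volley for a CA: one winner active per module."""
    v = [0] * (Q * K)
    for m, w in enumerate(ws):
        v[m * K + w] = 1
    return v

volleys = [volley(ws) for ws in winners]

# One-trial Hebbian learning: w(i,j)=1 iff source unit i was coactive with
# target cell j when CA j was paired with it (all weights start at 0).
W = [[volleys[j][i] for j in range(N)] for i in range(Q * K)]

def sums(v):
    """Instantaneous input sums at the four target cells."""
    return [sum(v[i] * W[i][j] for i in range(Q * K)) for j in range(N)]

s = sums(volleys[0])
print(s)  # [5, 4, 3, 2], as shown next to the target cells for CA1
order = sorted(range(N), key=lambda j: -s[j])   # descending-sum readout
print(round(math.log2(math.factorial(N)), 2))   # 4.58 bits per volley
```

Reading the target cells out in descending-sum order recovers the full similarity ordering of the four stored items from a single volley.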
1. Georgopoulos, A.P., A.B. Schwartz, and R.E. Kettner, Neuronal population coding of movement direction. Science, 1986. 233.
2. Stewart, T.C., T. Bekolay, and C. Eliasmith, Neural representations of compositional structures: representing and manipulating
vector spaces with spiking neurons. Connection Science, 2011. 23(2): p. 145-153.
3. Jazayeri, M. and J.A. Movshon, Optimal representation of sensory information by neural populations. Nat Neurosci, 2006. 9(5).
4. Zemel, R., P. Dayan, and A. Pouget, Probabilistic interpretation of population codes. Neural Comput., 1998. 10: p. 403-430.
5. Gerstner, W., et al., Neuronal Dynamics: From single neurons to networks and models of cognition. 2014, NY: Cambridge U. Press.
6. Rinkus, G., Quantum Computing via Sparse Distributed Representation. NeuroQuantology, 2012. 10(2): p. 311-315.
7. Rinkus, G., A Combinatorial Neural Network Exhibiting Episodic and Semantic Memory Properties for Spatio-Temporal Patterns,
in Cognitive & Neural Systems. 1996, Boston U.: Boston.
8. Rinkus, G., A cortical sparse distributed coding model linking mini- and macrocolumn-scale functionality. Frontiers in Neuroanatomy, 2010. 4.
9. Pouget, A., et al., Probabilistic brains: knowns and unknowns. Nat Neurosci, 2013. 16(9): p. 1170-1178.
10. Pouget, A., P. Dayan, and R. Zemel, Information processing with population codes. Nat Rev Neurosci, 2000. 1(2): p. 125-132.
11. Pouget, A., P. Dayan, and R.S. Zemel, Inference and Computation with Population Codes. Ann. Rev. of Neuroscience, 2003. 26(1).