
BioSystems 79 (2005) 11–20

Dynamics of pruning in simulated large-scale

spiking neural networks

Javier Iglesias (a,b,c,∗), Jan Eriksson (b), François Grize (a), Marco Tomassini (a), Alessandro E.P. Villa (b,c,d)

(a) Information Systems Department, University of Lausanne, Lausanne, Switzerland
(b) Laboratory of Neuroheuristics, University of Lausanne, Lausanne, Switzerland
(c) Laboratory of Neurobiophysics, University Joseph-Fourier, Grenoble, France
(d) Neuroheuristic Research Group, I.S.I. Foundation, Torino, Italy

Abstract

Massive synaptic pruning following over-growth is a general feature of mammalian brain maturation. This article studies the synaptic pruning that occurs in large networks of simulated spiking neurons in the absence of specific input patterns of activity. The evolution of the connections between neurons was governed by an original bio-inspired spike-timing-dependent synaptic plasticity (STDP) modification rule which included a slow decay term. The network reached a steady state with a bimodal distribution of the synaptic weights, which were either incremented to the maximum value or decremented to the lowest value. After 1 × 10^6 time steps the final number of synapses that remained active was below 10% of the number of initially active synapses, independently of network size. The synaptic modification rule did not introduce spurious biases in the geometrical distribution of the remaining active projections. The results show that, under certain conditions, the model is capable of generating spontaneously emergent cell assemblies.

© 2004 Elsevier Ireland Ltd. All rights reserved.

Keywords: Locally connected random network; Spike-timing-dependent synaptic plasticity; Spiking neural network; Large-scale simulation

1. Introduction

Massive synaptic pruning following over-growth

is a general feature of mammalian brain maturation

(Rakic et al., 1986; Zecevic and Rakic, 1991). Pruning

starts near the time of birth and is completed by the time of sexual maturation.

∗ Corresponding author. Tel.: +41 21 692 35 87; fax: +41 21 692 35 85.
E-mail address: Javier.Iglesias@hec.unil.ch (J. Iglesias).

The biological mechanisms that regulate pruning involve complex neurochemical pathways of cell signaling and are not reviewed here. Trigger signals able to induce synaptic pruning could be related to dynamic functions that de-

pend on the timing of action potentials. Spike-timing-

dependent synaptic plasticity (STDP) is a change in

the synaptic strength based on the ordering of pre- and

post-synaptic spikes. This mechanism has been pro-

posed to explain the origin of long-term potentiation

0303-2647/$ – see front matter © 2004 Elsevier Ireland Ltd. All rights reserved.

doi:10.1016/j.biosystems.2004.09.016


(LTP), i.e. a mechanism for reinforcement of synapses

repeatedly activated shortly before the occurrence of

a post-synaptic spike (Kelso et al., 1986; Bi and Poo,

1998; Froemke and Dan, 2002; Kepecs et al., 2002;

Markram et al., 1997). STDP has also been proposed

to explain long-term depression (LTD), which corresponds to the weakening of synaptic strength whenever the pre-synaptic cell is repeatedly activated shortly after the occurrence of a post-synaptic spike (Karmarkar and Buonomano, 2002).

The glutamatergic NMDA receptors were initially identified as the receptor site with all biological features compatible with LTP induced by coincident pre- and post-synaptic cell discharges (Wigstrom and Gustafsson, 1986). The involvement of NMDA receptors in timing-dependent long-term depression (tLTD) has been recently described (Sjöström et al., 2003). Recent investigations suggest that glutamatergic receptors with AMPA channels and GABAergic receptors may also undergo modifications of the corresponding post-synaptic potentials as a function of the timing of pre- and post-synaptic activities (Engel et al., 2001; Woodin et al., 2003). These studies suggest that several mechanisms mediated by several neurotransmitters may exist at the synaptic level for changing the post-synaptic potential, either excitatory or inhibitory, as a function of the relative timing of pre- and post-synaptic spikes.

The important consequences that changes in synaptic strength may produce for information transmission, and subsequently for synaptic pruning, have raised an interest in simulating the activity of neural networks with embedded synapses characterized by STDP (Lumer et al., 1997; Fusi et al., 2000; Hopfield and Brody, 2004). The relation between synaptic efficacy and synaptic pruning (Chechik et al., 1999; Mimura et al., 2003) suggests that weak synapses may be modified and removed through competitive "learning" rules. Competitive synaptic modification rules maintain the average neuronal input to a post-synaptic neuron, but provoke selective synaptic pruning in the sense that converging synapses compete for control of the timing of post-synaptic action potentials (Song et al., 2000; Song and Abbott, 2001).

This article studies the synaptic pruning that occurs in a large network of simulated spiking neurons in the absence of specific input patterns. The originality of our study rests on the size of the network, up to 10,000 units, the duration of the experiment, 1,000,000 time units (one time unit corresponding to the duration of a spike), and the application of an original bio-inspired STDP modification rule compatible with hardware implementation (Eriksson et al., 2003; Tyrrell et al., 2003). The network is composed of a mixture of excitatory and inhibitory connections, locally connected in a random way, that maintain a balanced input.

STDP is considered an important mechanism that modifies the gain of several types of synapses in the brain. In this study the synaptic modification rule was applied only to the excitatory–excitatory connections. This plasticity rule might produce the strengthening of the connections among neurons that belong to cell assemblies characterized by recurrent patterns of firing. Conversely, those connections that are not recurrently activated might decrease in efficiency and eventually be eliminated. The main goal of our study is to determine whether or not, and under which conditions, such cell assemblies may emerge from a large neural network receiving background noise and content-related input organized in both temporal and spatial dimensions. In order to reach this goal, the first step consisted in characterizing the dynamics of synaptic pruning in the absence of content-related input. This first step is described here.

2. Models and methods

2.1. Network connectivity

The network is a 2D lattice folded as a torus to limit the edge effect whereby the units near the boundary would receive less input. The size of the network varies between 10 × 10 and 100 × 100 units. Several types of units may be defined. In this study we define two types, q ∈ {1, 2}; 80% of Type I (q = 1) units and 20% of Type II (q = 2) units are uniformly distributed over the network according to a space-filling quasi-random Sobol distribution (Press et al., 1992, Fig. 7.7.1). A unit of either type may project to a unit of either type, but self-connections are not allowed.

Each unit is assumed to be at the center of a relative 2D map, with coordinates x = 0, y = 0. The probability that another unit located at coordinates (x, y) receives a projection is given by the following density function:

G(x, y) = α[q] exp(−2π(x² + y²) / σ²[q]) + φ[q]


Fig. 1. Main features of the connectivity for a Type I unit (upper row) and a Type II unit (lower row). (a, e) Density function of the connectivity for a unit located at coordinates (0, 0) on a 100 × 100 2D lattice; (b, f) Example of two projecting units, one for each class, located at the center of the 2D map. Each dot represents the location of a target unit connected by the projecting unit. (c, g) Orientation map of the projections of the same example units with polar coordinates smoothed with a bin equal to 12°. A circular line would represent a perfect pattern of isotropic connections. (d, h) Cumulative distributions of the connections. Type I units are assumed to represent excitatory units (e→) and Type II inhibitory units (i→).

where α[q] is the scaling factor for the maximal probability of establishing a connection with the closest neighbors, σ[q] is a scaling factor for the skewness and width of the Gaussian-shaped function, and φ[q] is a uniform probability (Hill and Villa, 1997). The density function defining the probability of the connections is different for each type of unit and is illustrated in Fig. 1a and e. The values of the parameters used for the density functions are indicated in Table 1.

Table 1
Parameter list of the main variables used for both types of units for 100 × 100 networks

Variable    Type I    Type II   Short description
—           80        20        Proportion in network (%)
φ           2         0         Uniform connection probability (%)
α           60        20        Gaussian maximal probability (%)
σ           10        75        Gaussian distribution width
P           0.84      −1.60     Post-synaptic potential (mV)
Vrest       −78       −78       Membrane resting potential (mV)
θ           −40       −40       Membrane threshold potential (mV)
trefract    1         1         Absolute refractory period (ms)
τmem        7         7         Membrane time constant (ms)
τsyn        14        14        Synaptic plasticity time constant (ms)
τact        11000     11000     Activation time constant (ms)

See text for details.
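The connectivity rule above can be sketched in code. The following is a minimal sketch, not the authors' implementation: function names are ours, the Type I parameters are taken from Table 1 (probabilities expressed as fractions), and the relative offsets on the torus are enumerated so that each potential target is visited exactly once.

```python
import math
import random

def connection_probability(x, y, alpha, sigma, phi):
    """G(x, y): probability that a unit at relative coordinates (x, y)
    receives a projection (Gaussian bump plus a uniform floor phi)."""
    return alpha * math.exp(-2.0 * math.pi * (x * x + y * y) / sigma ** 2) + phi

def sample_targets(size, alpha, sigma, phi, rng=random.random):
    """Draw targets for one unit placed at (0, 0) on a size x size torus.
    Each relative offset (dx, dy) corresponds to exactly one other unit."""
    targets = []
    half = size // 2
    for dx in range(-half, half):
        for dy in range(-half, half):
            if dx == 0 and dy == 0:
                continue  # self-connections are not allowed
            if rng() < connection_probability(dx, dy, alpha, sigma, phi):
                targets.append((dx, dy))
    return targets

# Type I (excitatory) parameters from Table 1: alpha = 60%, sigma = 10, phi = 2%
targets = sample_targets(100, alpha=0.60, sigma=10.0, phi=0.02)
```

With these parameters the expected out-degree is roughly α·σ²/2 ≈ 30 Gaussian-local targets plus φ times the lattice size ≈ 200 uniform targets, i.e. around 230 connections, consistent with the example Type I unit of Fig. 1b (233 targets).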

The random selection of the target units is run independently for each unit of either type. An example of the spatial distribution of the projections of one Type I unit, and of one Type II unit, is illustrated in Fig. 1b and f, respectively. In this example, the Type I unit (Fig. 1b) projects to 233 units and the Type II unit (Fig. 1f) projects to 537 units overall. For each unit it is possible to illustrate the orientation of its connections in the 2D lattice by plotting in polar coordinates the deviation from a perfect isotropic distribution. In the case of

an isotropic distribution the orientations would be illustrated by a circular line around the center. If such a line is not circular, it shows that some orientations have been selected preferentially by chance, as may occur in a random selection procedure. The orientations of the projections of the two example units are illustrated in Fig. 1c and g. It appears that at the single-unit level a large degree of anisotropy exists in the connection topology.

Fig. 1d shows the cumulative distribution of all connections established by Type I units projecting to either type. The modes of the histograms show that on average one unit of Type I projects to 50 units of Type II and to 190 units of Type I. Fig. 1h illustrates the cumulative distribution of all connections established by Type II units and shows that on average one unit of Type II projects to 115 units of Type II and to 460 units of Type I.

2.2. Neuromimetic model

All units of the network are simulated by leaky integrate-and-fire neuromimes. At each time step, the value of the membrane potential of the ith unit, Vi(t), is calculated such that

Vi(t + 1) = Vrest[q] + Bi(t) + (1 − Si(t)) (Vi(t) − Vrest[q]) kmem[q] + Σj wji(t)

where Vrest[q] corresponds to the value of the resting potential for the units of class type [q], Bi(t) is the background activity arriving at the ith unit, Si(t) is the state of the unit as expressed below, kmem[q] = exp(−1/τmem[q]) is the constant associated with the leakage current for the units of class type [q], and wji(t) are the post-synaptic potentials of the jth units projecting to the ith unit.

The state of a unit Si(t) is a function of the membrane potential Vi(t) and a threshold potential θ[q]i, such that Si(t) = H(Vi(t) − θ[q]i), where H is the Heaviside function: H(x) = 0 for x < 0, H(x) = 1 for x ≥ 0. In addition, the state of the unit depends on the refractory period trefract[q], such that

Si(t + Δt) = ((trefract[q] − Δt) / trefract[q]) Si(t)

for any Δt < trefract[q]. For a refractory period equal to 1 time unit, the state Si(t) is a binary variable. In this simulation we assume that the refractory period is the same for all units of either type. It is assumed that a unit can generate a spike only for Si(t) = 1. The parameter values used for the simulations are listed in Table 1.
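A single update step of this unit model can be written compactly; the sketch below uses the Table 1 parameters (shared by both unit types) and names of our own choosing. The (1 − Si) factor in the membrane equation resets the potential toward rest on the step after a spike.

```python
import math

# Parameters from Table 1 (both unit types share these values)
V_REST = -78.0                       # mV, resting potential
THETA = -40.0                        # mV, firing threshold
TAU_MEM = 7.0                        # ms, membrane time constant
K_MEM = math.exp(-1.0 / TAU_MEM)     # per-step leak factor k_mem

def lif_step(v, spiked, background, synaptic_input):
    """One discrete time step of the leaky integrate-and-fire unit.

    v              -- membrane potential V_i(t) in mV
    spiked         -- S_i(t): 1 if the unit fired at time t, else 0
    background     -- B_i(t), background depolarization in mV
    synaptic_input -- sum over j of w_ji(t) in mV

    Returns (V_i(t+1), S_i(t+1)).  When spiked == 1, the (1 - S_i)
    term removes the leaky memory so the potential restarts from rest."""
    v_next = (V_REST + background
              + (1 - spiked) * (v - V_REST) * K_MEM
              + synaptic_input)
    s_next = 1 if v_next >= THETA else 0
    return v_next, s_next

# A correlated external event (50 afferents x 0.84 mV = 42 mV) fires the unit
v, s = lif_step(V_REST, 0, 50 * 0.84, 0.0)
```

Note that −78 mV + 42 mV = −36 mV exceeds the −40 mV threshold, which is why a single correlated background event is enough to trigger a spike (see Section 2.6).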

2.3. Synaptic connections

The post-synaptic potential wji is a function of the state of the pre-synaptic unit Sj, of the "type" of the synapse P[qj,qi], and of the activation level of the synapse Aji. This is expressed by the following equation:

wji(t + 1) = Sj(t) Aji(t) P[qj,qi].

Notice that the "type" of the synapse is a parameter that depends on the types of units in the network. In the current study we assume that P[1,1], i.e. (Type I → Type I), and P[1,2] connections, i.e. (Type I → Type II), are of the same kind. The same assumption was made for P[2,1] and P[2,2] connections.

In order to maintain a balanced level of depolarization (excitatory) and hyperpolarization (inhibitory), the Type I units were considered as excitatory and the Type II units as inhibitory. We set P[1,1] = P[1,2] = 0.84 mV and P[2,1] = P[2,2] = −1.6 mV.

2.4. Synaptic modiﬁcation rule

It is assumed a priori that modifiable synapses are characterized by activation levels [A] with N attractor states [A1] < [A2] < ··· < [AN]. Activation levels of type [1,1] synapses are integer-valued levels Aji(t), with Aji(t) ∈ {[A1] = 0, [A2] = 1, [A3] = 2, [A4] = 4}. Index j refers to the pre-synaptic unit and index i to the post-synaptic unit. We assume that post-synaptic potentials generated by synapses of type [1,1] correspond to synaptic currents mediated by NMDA glutamatergic receptors. These discrete levels could be interpreted as a combination of two factors: the number of synaptic boutons between the pre- and post-synaptic units and the changes in synaptic conductance as a result of Ca2+ influx through the NMDA receptors. In the current study we attributed a fixed activation level (meaning no synaptic modification), Aji(t) = 1, to exc → inh, inh → exc, and inh → inh synapses.


A real-valued variable Lji(t) is used to implement the spike-timing-dependent plasticity rule for Aji(t), with integration of the timing of the pre- and post-synaptic activities. The thresholds Lk are user-defined boundaries of attraction, L0 < L1 < L2 < ··· < LN−1 < LN, satisfying Lk−1 < [Ak] < Lk for k = 1, ..., N. This means that whenever Lji > Lk the activation variable Aji jumps from state [Ak] to [Ak+1]. Similarly, if Lji < Lk the activation variable Aji jumps from state [Ak+1] to [Ak]. Moreover, after a jump of activation level [A] occurred at time t, the real-valued variable Lji is reset to Lji(t + 1) = (Lk + Lk+1)/2.

Spike-timing-dependent plasticity (STDP) defines how the value of Lji at time t is changed by the arrival of pre-synaptic spikes, by the generation of post-synaptic spikes, and by the correlation existing between these events. On the generation of a post-synaptic spike (i.e., when Si = 1), the value Lji receives an increment which is a decreasing function of the elapsed time from the previous pre-synaptic spike at that synapse. Similarly, when a spike arrives at the synapse (i.e., when Sj = 1), the variable Lji receives a decrement which is likewise a decreasing function of the elapsed time from the previous post-synaptic spike. This rule is summarized by the following equation: Lji(t + 1) = Lji(t) + Si(t)Mj(t) − Sj(t)Mi(t), where Si(t), Sj(t) are the state variables of the ith and jth units and Mi(t), Mj(t) are interspike decay functions. Mi(t) may be viewed as a "memory" of the latest interspike interval:

Mi(t + 1) = Si(t) Mmax[qi] + (1 − Si(t)) Mi(t) exp(−1/τsyn[qi])

Fig. 2. Pruning dynamics. The real-valued variable Lji is increased or decreased according to the STDP rule. If the value Lji reaches one of the user-defined boundaries Lk, a jump occurs in the integer-valued variable [Ak]. At the beginning all (e→e) synapses have been set at activation level [A3]. (a) Example of potentiation, with an increase in synaptic strength that is stabilized in the long term. (b) Example of depression, with a fast decrease in synaptic activation level down to its minimal level, [A1] = 0, which provokes the elimination of the synapse. (c) Example of a synaptic link which is affected neither by potentiation nor by depression, but whose efficacy decays down to [A1] = 0 according to the time constant τact.

where τsyn[qi] is the synaptic plasticity time constant characteristic of each type of unit, and Mmax[qi] was set to Mmax[qi] = 2 for all units of either type in this study. In the case that neither the pre- nor the post-synaptic unit is firing a spike, the real-valued variable decays with a time constant kact[qj,qi] = exp(−1/τact[qj,qi]) characteristic of each type of synapse, such that the final equation is the following:

Lji(t + 1) = Lji(t) kact[qj,qi] + Si(t)Mj(t) − Sj(t)Mi(t).

In the present study the differences between the user-defined boundaries Lk were all equal, such that ΔLk = Lk − Lk−1 = 20 for any attractor state [Ak].

At the beginning of the simulation all modifiable synapses were set to activation level [A3] = 2. Fig. 2a illustrates a case when the synaptic link receives a potentiation determined by the STDP rule described above. The activation variable jumps from [A3] to [A4] and stabilizes at the highest activation level. Fig. 2b illustrates a case when the synapse is continuously depressed, such that the activation variable jumps from [A3] to [A2], and then from [A2] to [A1], faster than its spontaneous decay determined by the time constant kact[qj,qi]. Fig. 2c illustrates a case when the synapse is neither depressed nor potentiated and the activation level spontaneously decays down to the minimal level.
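The combined rule — slow decay of Lji, spike-driven increments and decrements, attractor jumps, and pruning at [A1] = 0 — can be sketched for a single synapse as follows. This is our own sketch: the absolute boundary values below are arbitrary choices respecting the paper's spacing ΔLk = 20, and the function names are ours.

```python
import math

A_STATES = [0, 1, 2, 4]                    # activation levels [A1]..[A4]
# Boundaries with the paper's spacing dL_k = 20 (absolute values are ours);
# the level with index k lives between L_BOUNDS[k] and L_BOUNDS[k + 1]
L_BOUNDS = [-10.0, 10.0, 30.0, 50.0, 70.0]
M_MAX = 2.0                                # M_max, both unit types
TAU_SYN = 14.0                             # ms, synaptic plasticity constant
TAU_ACT = 11000.0                          # ms, activation (decay) constant
K_ACT = math.exp(-1.0 / TAU_ACT)           # per-step decay of L

def update_trace(m, fired):
    """'Memory' of the latest spike: reset to M_MAX on a spike, else decay."""
    return M_MAX if fired else m * math.exp(-1.0 / TAU_SYN)

def stdp_step(L, level, s_pre, s_post, m_pre, m_post):
    """One step of the plasticity rule for one exc -> exc synapse.
    Returns the new real-valued L, the (possibly changed) level index,
    and whether the synapse was pruned (activation fell to [A1] = 0)."""
    L = L * K_ACT + s_post * m_pre - s_pre * m_post
    if level < len(A_STATES) - 1 and L > L_BOUNDS[level + 1]:
        level += 1                         # potentiation: jump up one state
        L = (L_BOUNDS[level] + L_BOUNDS[level + 1]) / 2.0  # reset to midpoint
    elif level > 0 and L < L_BOUNDS[level]:
        level -= 1                         # depression: jump down one state
        L = (L_BOUNDS[level] + L_BOUNDS[level + 1]) / 2.0
    return L, level, A_STATES[level] == 0

# Start at [A3] = 2 with L at the midpoint of its interval, as in Fig. 2
L, level = 40.0, 2
```

Repeated post-after-pre pairings (s_post = 1 with a fresh pre-synaptic trace) drive the synapse up to [A4], as in Fig. 2a; repeated pre-after-post pairings drive it through [A2] and [A1] to elimination, as in Fig. 2b.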

2.5. Synaptic pruning

No generation of new projections is allowed in the present study, although specific rules could be defined for this purpose. Synaptic pruning occurs when the activation level of a synapse reaches a value of zero. This means that synaptic pruning may occur only for synaptic connections of type [1,1], which are also the most abundant, when the activation level Aji decreases to its minimal value, i.e. [A1] = 0. In this case the synapse [j, i] is eliminated from the network connectivity.

2.6. Background activity

The background activity Bi(t) is used to simulate the input to the ith unit from afferents that are not explicitly simulated within the network. Let us assume that each type of unit receives n_ext[qi] external afferents. In the present study we simplify by setting that all units receive the same number of external projections and that all of them are excitatory. Namely, we assume ni ≡ n ≡ 50 and that the post-synaptic potential generated by these external afferents is fixed to a value equal to P[1,1]. In the current case (see Table 1) each external afferent generates an excitatory post-synaptic potential equal to 0.84 mV.

We assume that the external afferents are correlated among themselves. This means that each time a unit receives a correlated input from its 50 external afferents, its membrane potential is depolarized to an extent that will generate a spike. Such external input is distributed according to a Poisson process which is independent for each unit, with mean rate λi. The rate of external background activity is a critical parameter. We found that, with all other parameters kept constant, a rate of background activity λi < 8 spikes/s is unable to sustain any activity at all. In the present study we set the Poisson input to a rate λi = 10 spikes/s.
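This background process can be sketched as a Bernoulli approximation of the Poisson input; we assume here that one time step corresponds to 1 ms (consistent with the millisecond constants of Table 1, though the paper only states that one time unit is the duration of a spike), and the names are ours.

```python
import random

RATE_HZ = 10.0     # lambda_i, rate of the correlated external input (spikes/s)
DT_MS = 1.0        # assumed duration of one time step (ms)
PSP_MV = 0.84      # P[1,1], EPSP of one external afferent (mV)
N_EXT = 50         # number of external afferents, fully correlated

def background_input(rng=random.random):
    """B_i(t): with probability rate * dt, all 50 correlated afferents fire
    together, depolarizing the unit by 50 x 0.84 = 42 mV -- enough to bring
    the membrane from rest (-78 mV) above threshold (-40 mV)."""
    p_event = RATE_HZ * DT_MS / 1000.0   # 0.01 per step at 10 spikes/s
    return N_EXT * PSP_MV if rng() < p_event else 0.0
```

Over 10^6 steps each unit therefore receives on the order of 10^4 background-driven spikes, independently of every other unit.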

2.7. Network size

We investigated the pruning dynamics with networks of different sizes. The smallest network was defined by 10 × 10 units and the largest network studied here was 100 × 100 units, i.e. (10 × N)², with N ∈ {1, ..., 10}. To compensate for the changes in the balance between excitation and inhibition induced by the change of size, we introduced the scaling factor f = (10⁴/(10 × N)²)^(1/3), where N is the size as described before. The uniform probability for an excitatory unit to project to any other unit of the network was scaled according to φ*[1] = f φ[1], leading to a larger number of excitatory connections at the beginning of the simulations for smaller networks. The level of post-synaptic depolarization for excitatory–excitatory synapses was scaled as P*[1,1] = (1 + (f − 1)/2) P[1,1], so that the strength of these connections was larger for smaller networks. Table 2 lists the scaled values of φ*[1] and P*[1,1] we used. Note that the values for N = 10 correspond to those listed in Table 1.

Table 2
The scaled parameter values for each network size N

N     Size         φ*[1] (%)   P*[1,1] (mV)
1     10 × 10      9.28        2.36
2     20 × 20      5.84        1.64
3     30 × 30      4.46        1.35
4     40 × 40      3.68        1.19
5     50 × 50      3.17        1.08
6     60 × 60      2.81        1.01
7     70 × 70      2.53        0.95
8     80 × 80      2.32        0.90
9     90 × 90      2.14        0.87
10    100 × 100    2.00        0.84

See text for details.
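The scaling expressions, as we read them from the text, can be checked directly against Table 2 (values agree to within rounding of the published figures):

```python
def scaled_parameters(N, phi1=2.0, p11=0.84):
    """Size-compensated parameters for a (10N) x (10N) network.

    f = (10^4 / (10N)^2)^(1/3) equals 1 for the reference 100 x 100
    network and grows as the network shrinks.  Then
    phi*[1] = f * phi[1]  and  P*[1,1] = (1 + (f - 1)/2) * P[1,1]."""
    f = (1.0e4 / (10 * N) ** 2) ** (1.0 / 3.0)
    return f * phi1, (1.0 + (f - 1.0) / 2.0) * p11

phi_star, p_star = scaled_parameters(1)   # smallest network, 10 x 10
```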

2.8. Simulation tools

The simulator was a custom-developed, Open Source C program that relies on the GNU Scientific Library (GSL) for random number generation and the quasi-random Sobol distribution implementation. A MySQL database back end stored all the configuration details and logs for later reference. This information was extracted and formatted by a set of PHP Web scripts used to monitor the status of the simulations and to create new ones. With our current implementation at the University of Lausanne, a 10,000 unit network simulation for a duration of 1 × 10^6 time steps lasted approximately 8 h, depending on the global network activity.

3. Results

All synapses of type [1,1], i.e. (exc → exc), were initialized with Aji(t = 0) = [A3]. In the presence of background activity only, most synapses were characterized by a decrement of the activation level. After a long time, t = tsteady, the network activity stabilized and STDP no longer modified the activation level of the synapses. At time t = tsteady most modifiable synapses were eliminated and almost all remaining active synapses were characterized by the highest possible activation level, i.e. [A4]. We observed that tsteady could be as long as t = 1 × 10^6 in several simulation runs.

Fig. 3. Pruning dynamics averaged over n = 10 simulations for each network size. With the proposed size compensation factor, the pruning dynamics are comparable for network sizes N ∈ {4, ..., 10}. Simulations for N = 1 and 2 saturated, suggesting that the compensation factor was too large for those two specific sizes.

3.1. Network size

It is interesting to notice that the final ratio of active synapses (Rmax[A]) represented only a few percent of the initial number of synapses (Fig. 3). In addition, it is important to notice that those connections that reach the maximal activation level do not necessarily remain active until t = tsteady. Several synapses reached level [A4] after some delay; then their activation level decreased down to [A1] = 0, at variable speed, and the synapse was eventually eliminated.

It is important that a network be attuned to work in a range such that background activity is unable to create spurious attractors by STDP. This means that background activity alone should not create stable connections that would shape the topology of cell assemblies. The size of the network is critical if the goal is to detect the emergence of chains of interconnected units embedded in a large network. Fig. 3 shows that the ratio of active synapses with activation level equal to [A4] could be as high as 50% of all initial synapses. The final percentage of active synapses is much less variable, and at tsteady it is always less than 10% for networks that did not saturate.

3.2. “Seed” effect

A simulation study that relies on a large use of randomly generated numbers may fall into local minima or spurious attractors simply by chance. It was necessary to assess the effect of the seed of the random number generator on our simulation. The most critical effect of the randomization may occur at the very beginning, when the initial network topology is created according to the density functions of the connections of the different types of units. The very same simulation, with the parameter set described in Table 1, was repeated 100 times using different random seeds with the largest network size, i.e. 100 × 100 units.

The choice of the seed had a significant impact on the value of Rmax[A] at time tsteady, as it could vary in the range [1.30, 6.03]%. However, as shown by the distribution of Rmax[A] (Fig. 4), about 90% of these values lay between 3.0 and 6.0%. Moreover, we never observed cases with absence of convergence at delays as large as t = 1 × 10^6. This indicates that a "seed effect" exists, but it does not cause changes in the overall dynamics of synaptic pruning.

A bias in the orientation of the connections could occur by random choice. In order to test this hypothesis, two cases with extreme values of Rmax[A] observed in the distribution of Fig. 4 were selected. The first case corresponds to an Rmax[A] as low as R1 = 1.97%. The second case corresponds to an Rmax[A] as high as R2 = 6.03%.

Fig. 4. Random seed effect on the number of synapses that remain after pruning. After n = 100 simulations that used different seeds, the distribution of Rmax[A] at time t = tsteady = 1 × 10^6, with bin = 0.5, shows that in the majority of the runs synaptic pruning left 3.0–6.0% active synapses at the maximum level [A4].


For both cases we calculated the deviation from an isotropic connection pattern, as defined previously for Fig. 1c and g, for all active synapses, i.e. those with an activation level not equal to [A1] = 0. Then we calculated an average deviation plot that corresponds to the mean of the orientations computed over all active synapses at given times t, namely at t1 = 1 × 10^5, t2 = 2 × 10^5, and t = tsteady = 1 × 10^6.

Fig. 5a shows the evolution of the orientation map in the case R1, when the network stabilizes with a low level of active connections. In this example the initial number n of connections at time t0 was n = 1,517,240. At t1, 330,920 synapses remained; at t2, 172,503 synapses remained; and eventually the

Fig. 5. Random seed effect on the orientation and length of active connections. (a) Average orientation map. A circular line indicates an isotropic orientation of the projections of a unit ideally located at the center marked by a cross. This simulation corresponds to run R1 of Fig. 4. The average deviation from isotropy for all active connections is plotted at various times. The last line corresponds to tsteady: 30,864 synapses remained active, all with active state [A4], which corresponded to 1.97% of the initial number of synapses at time t0. (b) Normalized histogram of the length of the source-to-target projections measured in Euclidean distance in the 2D lattice for simulation run R1. A flat line at ratio = 1 indicates that the distances are totally predicted by the modified 2D Gaussian distribution function described in the text (cf. Section 2.1). (c) Average orientation map corresponding to run R2 of Fig. 4. At tsteady, 93,346 synapses remained active, all with active state [A4], which corresponded to 6.03% of the initial number of synapses at time t0. (d) Normalized histogram of the length of the source-to-target projections measured in Euclidean distance in the 2D lattice for simulation run R2.

network stabilized with 30,864 excitatory–excitatory synapses. The orientation map shows that the deviations from an isotropic distribution were equally distributed in all directions. Another factor that could be affected by the random choice is the source-to-target distance (calculated as a Euclidean distance over the 2D lattice) of the remaining projections. The histogram of the distribution of these distances (Fig. 5b) was normalized with respect to the probability distribution of establishing a connection. In this normalized histogram a ratio of 1 means that the count is perfectly determined by the probability distribution. In the case of R1 we observe that there was a tendency to some deviation from the original probability function, but this variance was the same for any source-to-target distance. In the case of R2, the number of initial synapses was n = 1,512,634 and at tsteady 93,346 active synapses remained. This analysis shows that the "seed" effect introduces no significant bias in either the orientation or the length of the connections that were selected by pruning.

4. Discussion

We assumed a number of simplifying hypotheses: the presence of only two types of units, their leaky integrate-and-fire dynamics, their distribution, and the dynamics of the transfer functions of the synapses that connect these units. With all these assumptions we observed that the network reached a steady state in which the synaptic weights were either incremented to the maximum value or decremented to the lowest value. Our result is in agreement with the bimodal distribution of synaptic strengths observed with different STDP-based models (Chechik et al., 1999; Song et al., 2000; Song and Abbott, 2001). This effect is interpreted as a consequence of the STDP rule, which leads pre-synaptic neurons to compete for the capacity to drive the post-synaptic unit to produce an all-or-none output signal, akin to an action potential.

The choice of a 2D lattice topology allowed us to study the effect of incrementing the network size from 10 × 10 to 100 × 100 units. It was interesting to observe that the final ratio of synapses that remained active (which we labeled here Rmax[A]) was below 10% of the number of initially active synapses. This ratio varied only slightly with changes in network size (MacGregor et al., 1995), and the effect of different random seeds at the initialization was also limited. On the other hand, we observed that the ratio of active synapses characterized by the maximum weight could transiently reach a proportion as high as 50% of all initial synapses. This could indicate that a very large network may not be necessary for recurrent networks to emerge. Interconnected sizable "modules" of up to 50 × 50 or 60 × 60 units embedded in larger networks may offer a more efficient way to recruit active synapses that compete for generating a post-synaptic spike.

A bias in the geometrical orientation of the synapses at the network level might produce important effects on the global dynamics, as it could introduce singularities in the network topology. These singularities could sustain attractors with unbalanced excitatory/inhibitory inputs if they were the consequence of content-related inputs (Hill and Villa, 1997). In the presence of only background noise these attractors would be spurious and could mask input-related features. We observed that the reinforcement of a few synapses occurred without geometric distortion in either direction or source-to-target distance over the 2D lattice. Synaptic pruning proceeded in a homogeneous and isotropic way across the network. This result suggests that the implementation of the current STDP rule is equivalent to random pruning and does not introduce spurious biases.

The present work is currently being extended in two directions that will be reported in future articles. The first direction consists in studying the effect of different synaptic transfer functions without changing the other parameters of the simulation. This would introduce temporally asymmetric STDP (Bi and Wang, 2002), where both the time window and the efficacy changes differ between LTD and LTP (Stuart and Hausser, 2001). In addition to the modifiable synaptic rule, it would be interesting to introduce more realistic transfer functions in all types of synapse, in particular in the inhibitory synapses, to account for modulation frequencies and pre-synaptic spike interval distributions (Segundo et al., 1995a,b).
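
The temporally asymmetric window is commonly written with separate amplitudes and time constants for the LTP and LTD branches; a minimal sketch with illustrative constants (not values taken from the cited studies):

```python
import numpy as np

# Illustrative constants; the cited studies report different values
# depending on preparation and cell type.
A_LTP, TAU_LTP = 0.005, 17.0    # potentiation amplitude / window (ms)
A_LTD, TAU_LTD = 0.003, 34.0    # depression: weaker amplitude, wider window

def asymmetric_stdp(dt):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms).
    Positive dt (pre leads post) potentiates; negative dt depresses,
    each branch with its own amplitude and time constant."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    A_LTP * np.exp(-dt / TAU_LTP),
                    -A_LTD * np.exp(dt / TAU_LTD))

# The two branches of the window:
print(asymmetric_stdp(10.0), asymmetric_stdp(-10.0))
```

With these constants, potentiation dominates at short lags while the wider depression window dominates at long lags, one common way the LTP/LTD balance is made timing-dependent.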

The second extension of this work is the introduction of content-related inputs, i.e. spatiotemporal patterns of discharges associated with selected stimuli (Abeles, 1991; Hopfield and Brody, 2000; Villa, 2000). Preliminary observations carried out with a simplified version of this simulation have demonstrated that these types of network may store the traces of time-varying stimuli, such that similar stimuli blurred with noise can evoke an activity pattern close to the original one (Eriksson et al., 2003; Torres et al., 2003).
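
As a toy illustration of this recall property, only loosely related to the spiking model used here, a classical Hopfield attractor network shows how a pattern corrupted by noise can be driven back toward the stored original:

```python
import numpy as np

rng = np.random.default_rng(2)

# Store one binary pattern in a Hebbian weight matrix (no self-connections).
N = 200
pattern = rng.choice([-1, 1], size=N)
W = np.outer(pattern, pattern) / N
np.fill_diagonal(W, 0.0)

# Corrupt 15% of the units, then iterate the network dynamics.
state = pattern.copy()
flip = rng.choice(N, size=N * 15 // 100, replace=False)
state[flip] *= -1

for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1      # break ties deterministically

overlap = (state == pattern).mean()
print(overlap)
```

The corrupted state falls back into the basin of attraction of the stored pattern, analogous to a noisy stimulus evoking an activity pattern close to the original one.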

Acknowledgements

The authors thank Dr. Yoshiyuki Asai for discussions and comments on the manuscript. This work is partially funded by the Future and Emerging Technologies program (IST-FET) of the European Community, under grant IST-2000-28027 (POETIC), and under grant OFES 00.0529-2 by the Swiss Government.

References

Abeles, M., 1991. Corticonics: Neural Circuits of the Cerebral Cortex. Cambridge University Press.

Bi, G.Q., Poo, M.M., 1998. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18 (24), 10464–10472.

Bi, G.Q., Wang, H.X., 2002. Temporal asymmetry in spike timing-dependent synaptic plasticity. Physiol. Behav. 77 (4/5), 551–555.

Chechik, G., Meilijson, I., Ruppin, E., 1999. Neuronal regulation: a mechanism for synaptic pruning during brain maturation. Neural Comput. 11, 2061–2080.

Engel, D., Pahner, I., Schulze, K., Frahm, C., Jarry, H., Ahnert-Hilger, G., Draguhn, A., 2001. Plasticity of rat central inhibitory synapses through GABA metabolism. J. Physiol. 535 (2), 473–482.

Eriksson, J., Torres, O., Mitchell, A., Tucker, G., Lindsay, K., Rosenberg, J., Moreno, J.-M., Villa, A.E.P., 2003. Spiking neural networks for reconfigurable POEtic tissue. Lecture Notes Comput. Sci. 2606.

Froemke, R.C., Dan, Y., 2002. Spike-timing-dependent synaptic modification induced by natural spike trains. Nature 416 (6879), 433–438.

Fusi, S., Annunziato, M., Badoni, D., Salamon, A., Amit, D.J., 2000. Spike-driven synaptic plasticity: theory, simulation, VLSI implementation. Neural Comput. 12, 2227–2258.

Hill, S.L., Villa, A.E.P., 1997. Dynamic transitions in global network activity influenced by the balance of excitation and inhibition. Network: Comput. Neural Syst. 8, 165–184.

Hopfield, J.J., Brody, C.D., 2000. What is a moment? "Cortical" sensory integration over a brief interval. Proc. Natl. Acad. Sci. USA 97 (25), 13919–13924.

Hopfield, J.J., Brody, C.D., 2004. Learning rules and network repair in spike-timing-based computation networks. Proc. Natl. Acad. Sci. USA 101 (1), 337–342.

Karmarkar, U.R., Buonomano, D.V., 2002. A model of spike-timing dependent plasticity: one or two coincidence detectors? J. Neurophysiol. 88 (1), 507–513.

Kelso, S.R., Ganong, A.H., Brown, T.H., 1986. Hebbian synapses in hippocampus. Proc. Natl. Acad. Sci. USA 83 (14), 5326–5330.

Kepecs, A., van Rossum, M.C.W., Song, S., Tegner, J., 2002. Spike-timing-dependent plasticity: common themes and divergent vistas. Biol. Cybernet. 87, 446–458.

Lumer, E.D., Edelman, G.M., Tononi, G., 1997. Neural dynamics in a model of the thalamocortical system. II. The role of neural synchrony tested through perturbations of spike timing. Cerebral Cortex 7 (3), 228–236.

MacGregor, R.J., Ascarrunz, F.G., Kisley, M.A., 1995. Characterization, scaling, and partial representation of neural junctions and coordinated firing patterns by dynamic similarity. Biol. Cybernet. 73 (2), 155–166.

Markram, H., Lubke, J., Frotscher, M., Sakmann, B., 1997. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science 275 (5297), 213–215.

Mimura, K., Kimoto, T., Okada, M., 2003. Synapse efficiency diverges due to synaptic pruning following over-growth. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 68.

Press, W.H., Flannery, B.P., Teukolsky, S.A., Vetterling, W.T., 1992. Numerical Recipes in C: The Art of Scientific Computing, second ed. Cambridge University Press.

Rakic, P., Bourgeois, J.P., Eckenhoff, M.F., Zecevic, N., Goldman-Rakic, P.S., 1986. Concurrent overproduction of synapses in diverse regions of the primate cerebral cortex. Science 232 (4747), 232–235.

Segundo, J.P., Stiber, M., Vibert, J.F., Hanneton, S., 1995a. Periodically modulated inhibition and its postsynaptic consequences. I. General features, influence of modulation frequency. Neuroscience 68 (3), 657–692.

Segundo, J.P., Stiber, M., Vibert, J.F., Hanneton, S., 1995b. Periodically modulated inhibition and its postsynaptic consequences. II. Influence of modulation slope, depth, range, noise and of postsynaptic natural discharges. Neuroscience 68 (3), 693–719.

Sjöström, P.J., Turrigiano, G.G., Nelson, S.B., 2003. Neocortical LTD via coincident activation of presynaptic NMDA and cannabinoid receptors. Neuron 39, 641–654.

Song, S., Abbott, L.F., 2001. Cortical development and remapping through spike timing-dependent plasticity. Neuron 32 (2), 339–350.

Song, S., Miller, K.D., Abbott, L.F., 2000. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat. Neurosci. 3, 919–926.

Stuart, G.J., Hausser, M., 2001. Dendritic coincidence detection of EPSPs and action potentials. Nat. Neurosci. 4 (1), 63–71.

Torres, O., Eriksson, J., Moreno, J.M., Villa, A.E.P., 2003. Hardware optimization of a novel spiking neuron model for the POEtic tissue. Lecture Notes Comput. Sci. 2687, 113–120.

Tyrrell, A.M., Sanchez, E., Floreano, D., Tempesti, G., Mange, D., Moreno, J.-M., Rosenberg, J., Villa, A.E.P., 2003. POEtic: An integrated architecture for bio-inspired hardware. Lecture Notes Comput. Sci. 2606.

Villa, A.E.P., 2000. Time and the brain. In: Miller, R. (Ed.), Empirical Evidence About Temporal Structure in Multi-Unit Recordings, vol. 2. Harwood Academic Publishers, pp. 1–51.

Wigstrom, H., Gustafsson, B., 1986. Postsynaptic control of hippocampal long-term potentiation. J. Physiol. 81 (4), 228–236.

Woodin, M.A., Ganguly, K., Poo, M., 2003. Coincident pre- and post-synaptic activity modifies GABAergic synapses by postsynaptic changes in Cl− transporter activity. Neuron 39, 807–820.

Zecevic, N., Rakic, P., 1991. Synaptogenesis in monkey somatosensory cortex. Cerebral Cortex 1 (6), 510–523.