BioSystems 79 (2005) 11–20
doi:10.1016/j.biosystems.2004.09.016
Dynamics of pruning in simulated large-scale
spiking neural networks
Javier Iglesias a,b,c,∗, Jan Eriksson b, François Grize a, Marco Tomassini a,
Alessandro E.P. Villa b,c,d
a Information Systems Department, University of Lausanne, Lausanne, Switzerland
b Laboratory of Neuroheuristics, University of Lausanne, Lausanne, Switzerland
c Laboratory of Neurobiophysics, University Joseph-Fourier, Grenoble, France
d Neuroheuristic Research Group, I.S.I. Foundation, Torino, Italy
∗ Corresponding author. Tel.: +41 21 692 35 87; fax: +41 21 692 35 85.
E-mail address: Javier.Iglesias@hec.unil.ch (J. Iglesias).
Abstract
Massive synaptic pruning following over-growth is a general feature of mammalian brain maturation. This article studies the
synaptic pruning that occurs in large networks of simulated spiking neurons in the absence of specific input patterns of activity. The
evolution of connections between neurons was governed by an original bioinspired spike-timing-dependent synaptic plasticity
(STDP) modification rule which included a slow decay term. The network reached a steady state with a bimodal distribution of
the synaptic weights that were either incremented to the maximum value or decremented to the lowest value. After 1 × 10⁶ time
steps the final number of synapses that remained active was below 10% of the number of initially active synapses, independently
of network size. The synaptic modification rule did not introduce spurious biases in the geometrical distribution of the remaining
active projections. The results show that, under certain conditions, the model is capable of generating spontaneously emergent
cell assemblies.
© 2004 Elsevier Ireland Ltd. All rights reserved.
Keywords: Locally connected random network; Spike-timing-dependent synaptic plasticity; Spiking neural network; Large-scale simulation
1. Introduction
Massive synaptic pruning following over-growth
is a general feature of mammalian brain maturation
(Rakic et al., 1986; Zecevic and Rakic, 1991). Pruning
starts near the time of birth and is completed by the time of
sexual maturation. Biological mechanisms that regulate
pruning involve complex neurochemical
pathways of cell signaling and are not intended to be
reviewed here. Trigger signals able to induce synaptic
pruning could be related to dynamic functions that de-
pend on the timing of action potentials. Spike-timing-
dependent synaptic plasticity (STDP) is a change in
the synaptic strength based on the ordering of pre- and
post-synaptic spikes. This mechanism has been pro-
posed to explain the origin of long-term potentiation
(LTP), i.e. a mechanism for reinforcement of synapses
repeatedly activated shortly before the occurrence of
a post-synaptic spike (Kelso et al., 1986; Bi and Poo,
1998; Froemke and Dan, 2002; Kepecs et al., 2002;
Markram et al., 1997). STDP has also been proposed
to explain long-term depression (LTD), which corre-
sponds to the weakening of synaptic strength whenever
the pre-synaptic cell is repeatedly activated shortly af-
ter the occurrence of a post-synaptic spike (Karmarkar
and Buonomano, 2002).
The glutamatergic NMDA receptors were initially
identified as the receptor site with all biological features
compatible with LTP induced by coincident pre- and
post-synaptic cell discharges (Wigstrom and Gustafs-
son, 1986). The involvement of NMDA receptors in
timing-dependent long-term depression (tLTD) has
been recently described (Sjöström et al., 2003). Re-
cent investigations suggest that glutamatergic receptors
with AMPA channels and GABAergic receptors may
also undergo modifications of the corresponding post-
synaptic potentials as a function of the timing of pre-
and post-synaptic activities (Engel et al., 2001; Woodin
et al., 2003). These studies suggest that several mecha-
nisms mediated by several neurotransmitters may exist
at the synaptic level for changing the post-synaptic po-
tential, either excitatory or inhibitory, as a function of
the relative timing of pre- and post-synaptic spikes.
The important consequences that changes in synap-
tic strength may produce for information transmission,
and subsequently for synaptic pruning, have raised an
interest to simulate the activity of neural networks with
embedded synapses characterized by STDP (Lumer et
al., 1997; Fusi et al., 2000; Hopfield and Brody, 2004).
The relation between synaptic efficacy and synaptic
pruning (Chechik et al., 1999; Mimura et al., 2003)
suggests that weak synapses may be modified and re-
moved through competitive “learning” rules. Compet-
itive synaptic modification rules maintain the average
neuronal input to a post-synaptic neuron, but provoke
selective synaptic pruning in the sense that converg-
ing synapses are competing for control of the timing of
post-synapticaction potentials (Song et al., 2000; Song
and Abbott, 2001).
This article studies the synaptic pruning that oc-
curs in a large network of simulated spiking neurons
in the absence of specific input patterns. The original-
ity of our study lies in the size of the network, up to
10,000 units, the duration of the experiment, 1,000,000
time units (one time unit corresponding to the duration
of a spike), and the application of an original bioin-
spired STDP modification rule compatible with hard-
ware implementation (Eriksson et al., 2003; Tyrrell et
al., 2003). The network is composed of a mixture of
excitatory and inhibitory connections that maintain a
balanced input locally connected in a random way.
STDP is considered an important mechanism that
modifies the gain of several types of synapses in the
brain. In this study the synaptic modification rule was
applied only to the excitatory–excitatory connections.
This plasticity rule might produce the strengthening of
the connections among neurons that belong to cell as-
semblies characterized by recurrent patterns of firing.
Conversely, those connections that are not recurrently
activated might decrease in efficiency and eventually be
eliminated. The main goal of our study is to determine
whether or not, and under which conditions, such cell
assemblies may emerge from a large neural network
receiving background noise and content-related input
organized in both temporal and spatial dimensions. In
order to reach this goal, the first step consisted in charac-
terizing the dynamics of synaptic pruning in the absence of
content-related input. This first step is described here.
2. Models and methods
2.1. Network connectivity
The network is a 2D lattice folded as a torus to limit
the edge effect where the units near the boundary re-
ceive less input. The size of the network varies be-
tween 10 × 10 and 100 × 100 units. Several types of
units may be defined. In this study we define two types,
q∈{1,2}, 80% of Type I (q=1) units and 20% of
Type II (q=2) units are uniformly distributed over
the network according to a space-filling quasi-random
Sobol distribution (Press et al., 1992, Fig. 7.7.1). A unit
of either type may project to a unit of either type, but
self-connections are not allowed.
Each unit is assumed to be at the center of a relative
2D map, with coordinates x = 0, y = 0. The proba-
bility that another unit located at coordinates (x, y) re-
ceives a projection is provided by the following density
function

G(x, y) = \alpha_{[q]} \exp\left( -\frac{2\pi (x^2 + y^2)}{\sigma_{[q]}^2} \right) + \phi_{[q]}
Fig. 1. Main features of the connectivity for Type I unit (upper row) and Type II unit (lower row). (a, e) Density function of the connectivity
for a unit located at coordinates 0,0 on a 100 ×100 2D lattice; (b, f) Example of two projecting units, one for each class, located at the center
of the 2D map. Each dot represents the location of a target unit connected by the projecting unit. (c, g) Orientation map of the projections of
the same example units with polar coordinates smoothed with a bin equal to 12. A circular line would represent a perfect pattern of isotropic
connections. (d, h) Cumulative distributions of the connections. Type I are assumed to represent excitatory units (e) and Type II inhibitory
units (i).
where α_[q] is the scaling factor for the maximal probability
of establishing a connection with the closest neighbors,
σ_[q] is a scaling factor for the skewness and width of
the Gaussian-shaped function, and φ_[q] is a uniform
probability (Hill and Villa, 1997). The density function
defining the probability of the connections is different
for each type of unit and is illustrated in Fig. 1a and
e. The values of the parameters used for the density
functions are indicated in Table 1.
Table 1
Parameter list of the main variables used for both types of units for 100 × 100 networks

Variable     Type I    Type II   Short description
(fraction)   80        20        Proportion in network (%)
φ            2         0         Uniform connection probability (%)
α            60        20        Gaussian maximal probability (%)
σ            10        75        Gaussian distribution width
P            0.84      −1.60     Post-synaptic potential (mV)
V_rest       −78       −78       Membrane resting potential (mV)
θ            −40       −40       Membrane threshold potential (mV)
t_refract    1         1         Absolute refractory period (ms)
τ_mem        7         7         Membrane time constant (ms)
τ_syn        14        14        Synaptic plasticity time constant (ms)
τ_act        11000     11000     Activation time constant (ms)

See text for details.
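As an illustration of Section 2.1, the following C sketch draws the targets of a single Type I unit on the folded lattice using the density function G(x, y) and the Table 1 parameters. It is a minimal reconstruction, not the original simulator: the helper names (wrap, conn_prob) and the use of rand() instead of the GSL are our own choices.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define SIZE 100

/* Shortest signed offset on the torus (lattice folded on itself). */
static int wrap(int d) {
    if (d >  SIZE / 2) d -= SIZE;
    if (d < -SIZE / 2) d += SIZE;
    return d;
}

/* G(x, y) for relative offset (x, y); alpha, sigma, phi as fractions. */
static double conn_prob(int x, int y, double alpha, double sigma, double phi) {
    return alpha * exp(-2.0 * M_PI * (x * x + y * y) / (sigma * sigma)) + phi;
}

int main(void) {
    /* Type I parameters from Table 1: alpha = 60%, sigma = 10, phi = 2%. */
    const double alpha = 0.60, sigma = 10.0, phi = 0.02;
    const int sx = 50, sy = 50;          /* projecting unit at the centre */
    int count = 0;

    srand(42);                           /* Section 3.2: the seed matters */
    for (int tx = 0; tx < SIZE; tx++)
        for (int ty = 0; ty < SIZE; ty++) {
            if (tx == sx && ty == sy) continue;     /* no self-connection */
            double p = conn_prob(wrap(tx - sx), wrap(ty - sy),
                                 alpha, sigma, phi);
            if ((double)rand() / RAND_MAX < p) count++;
        }
    printf("Type I unit projects to %d target units\n", count);
    return 0;
}

With these parameters the expected number of targets is on the order of a few hundred, consistent with the 233 targets of the example unit in Fig. 1b.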
The random selection of the target units is run inde-
pendently for each unit of either type. An example of
the spatial distribution of the projections of one Type
I unit, and of one Type II unit, is illustrated in Fig. 1b
and f, respectively. In this example, the Type I unit (Fig.
1b) projects to 233 units and the Type II unit (Fig. 1f)
projects to 537 units overall. For each unit it is pos-
sible to illustrate the orientation of its connections in
the 2D lattice by plotting in polar coordinates the de-
viation from a perfect isotropic distribution. In the case of
an isotropic distribution the orientations would be il-
lustrated by a circular line around the center. If such
a line is not circular, it shows that some orientations have
been selected preferentially by chance, as it may occur
in a random selection procedure. The orientations of
the projections of the two example units are illustrated
in Fig. 1c and g. It appears that at the single unit level
a large degree of anisotropy exists in the connection
topology.
Fig. 1d shows the cumulative distribution of all con-
nections established by Type I units projecting to either
type. The modes of the histograms show that on aver-
age one unit of Type I is projecting to 50 units of Type
II and to 190 units of Type I. Fig. 1h illustrates the
cumulative distribution of all connections established
by Type II units and shows that on average one unit of
Type II is projecting to 115 units of Type II and to 460
units of Type I.
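The orientation maps of Fig. 1c and g can be obtained by binning the direction of each projection into 12° sectors and comparing each sector count with the count expected under isotropy. A minimal C sketch follows; the function name orientation_map and the toy target list are illustrative, not taken from the original code.

#include <math.h>
#include <stdio.h>

#define NBINS 30                 /* 360 / 12 degrees per sector */

/* dx[], dy[]: relative offsets of a unit's n targets on the lattice.
 * ratio[b] = 1.0 in every sector corresponds to a perfect circle. */
void orientation_map(const int dx[], const int dy[], int n, double ratio[]) {
    int count[NBINS] = {0};
    for (int i = 0; i < n; i++) {
        double a = atan2((double)dy[i], (double)dx[i]);   /* -pi..pi */
        int b = (int)((a + M_PI) / (2.0 * M_PI) * NBINS);
        if (b >= NBINS) b = NBINS - 1;                    /* a == pi  */
        count[b]++;
    }
    for (int b = 0; b < NBINS; b++)
        ratio[b] = count[b] * (double)NBINS / n;
}

int main(void) {
    /* Toy example: four targets on the axes -> strongly anisotropic. */
    int dx[] = { 1, -1, 0, 0 }, dy[] = { 0, 0, 1, -1 };
    double r[NBINS];
    orientation_map(dx, dy, 4, r);
    for (int b = 0; b < NBINS; b++)
        printf("sector %2d: %.2f\n", b, r[b]);
    return 0;
}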
2.2. Neuromimetic model
All units of the network are simulated by leaky
integrate-and-fire neuromimes. At each time step, the
value of the membrane potential of the ith unit, Vi(t),
is calculated such that

V_i(t+1) = V_{rest[q]} + B_i(t) + (1 - S_i(t)) \left( V_i(t) - V_{rest[q]} \right) k_{mem[q]} + \sum_j w_{ji}(t)

where V_rest[q] corresponds to the value of the resting
potential for the units of class type [q], B_i(t) is the
background activity arriving to the ith unit, S_i(t) is
the state of the unit as expressed below, k_mem[q] =
exp(−1/τ_mem[q]) is the constant associated with the
leakage current for the units of class type [q], and w_ji(t)
are the post-synaptic potentials of the jth units projecting
to the ith unit.
The state of a unit Si(t) is a function of the mem-
brane potential V_i(t) and a threshold potential θ_[q]i, such
that S_i(t) = H(V_i(t) − θ_[q]i), where H is the Heaviside func-
tion: H(x) = 0 for x < 0, H(x) = 1 for x ≥ 0. In addition,
the state of the unit depends on the refractory period
t_refract[q], such that

S_i(t + \Delta t) = \frac{t_{refract[q]} - \Delta t}{t_{refract[q]}} \, S_i(t)

for any Δt < t_refract[q]. For a refractory period equal to
1 time unit, the state Si(t) is a binary variable. In this
simulation we assume that the refractory period is the
same for all units of either type. It is assumed that a unit
can generate a spike only for Si(t)=1. The parameter
values used for the simulations are listed in Table 1.
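A minimal C sketch of this membrane update is given below, assuming a 1 ms time step and the Type I constants of Table 1 (with the negative resting and threshold potentials as reconstructed there). The structure Unit and the helper step are our own names, and the background term is here reduced to a single depolarizing value.

#include <math.h>
#include <stdio.h>

typedef struct {
    double v;   /* membrane potential V_i(t), in mV       */
    int    s;   /* state S_i(t): 1 if the unit just fired */
} Unit;

/* Type I constants of Table 1. */
static const double V_REST = -78.0, THETA = -40.0, TAU_MEM = 7.0;

/* One 1 ms step of the leaky integrate-and-fire update: bg is the
 * background term B_i(t), sum_w the summed inputs sum_j w_ji(t). */
void step(Unit *u, double bg, double sum_w) {
    const double k_mem = exp(-1.0 / TAU_MEM);       /* leakage constant */
    /* The factor (1 - S_i(t)) cancels the leaky term right after a
     * spike, implementing the 1-time-step refractory period. */
    u->v = V_REST + bg + (1 - u->s) * (u->v - V_REST) * k_mem + sum_w;
    u->s = (u->v >= THETA);                      /* Heaviside threshold */
}

int main(void) {
    Unit u = { V_REST, 0 };
    /* A correlated volley of 50 external afferents at 0.84 mV each
     * (Section 2.6) depolarizes the unit above threshold. */
    step(&u, 50 * 0.84, 0.0);
    printf("V = %.1f mV, spike = %d\n", u.v, u.s);
    return 0;
}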
2.3. Synaptic connections
The post-synaptic potential wji is a function of the
state of the pre-synaptic unit Sj, of the “type” of the sy-
napse P[qj,qi], and of the activation level of the synapse
Aji. This is expressed by the following equation
w_{ji}(t+1) = S_j(t) \, A_{ji}(t) \, P_{[q_j, q_i]}.
Notice that the “type” of the synapse is a parameter
that depends on the types of units in the network. In
the current study we assume that P[1,1], i.e. (Type I →
Type I), and P[1,2] connections, i.e. (Type I → Type II),
are of the same kind. The same assumption was made
for P[2,1] and P[2,2] connections.
In order to maintain a balanced level of depolar-
ization (excitatory) and hyperpolarization (inhibitory)
the Type I unit was considered as excitatory and Type
II as inhibitory. We set P[1,1] = P[1,2] = 0.84 mV and
P[2,1] = P[2,2] = −1.6 mV.
2.4. Synaptic modification rule
It is assumed a priori that modifiable synapses are
characterized by activation levels [A] with N attrac-
tor states [A_1] < [A_2] < ··· < [A_N]. Activation lev-
els of type [1,1] synapses are integer-valued lev-
els A_ji(t), with A_ji(t) ∈ {[A_1] = 0, [A_2] = 1, [A_3] =
2, [A_4] = 4}. Index j is referred to as the pre-synaptic
unit and index i as the post-synaptic unit. We assume
that post-synaptic potentials generated by synapses of
type [1,1] correspond to synaptic currents mediated by
NMDA glutamatergic receptors. These discrete levels
could be interpreted as a combination of two factors:
the number of synaptic boutons between the pre- and
post-synaptic units and the changes in synaptic con-
ductance as a result of Ca²⁺ influx through the NMDA
receptors. In the current study we attributed a fixed
activation level (that means no synaptic modification)
A_ji(t) = 1, to exc → inh, inh → exc, and inh → inh
synapses.
A real-valued variable L_ji(t) is used to imple-
ment the spike-timing-dependent plasticity rule for
A_ji(t), with integration of the timing of the pre-
and post-synaptic activities. The variables L_ji(t) evolve
between user-defined boundaries of attraction L_0 < L_1 <
L_2 < ··· < L_{N−1} < L_N satisfying L_{k−1} < [A_k] < L_k
for k = 1, ..., N. This means that whenever L_ji > L_k the
activation variable A_ji jumps from state [A_k] to [A_{k+1}].
Similarly, if L_ji < L_k the activation variable A_ji jumps
from state [A_{k+1}] to [A_k]. Moreover, after a jump of
activation level [A] occurred at time t, the real-valued
variable L_ij is reset to L_ij(t+1) = (L_k + L_{k+1})/2.
Spike-timing-dependent plasticity (STDP) defines
how the value of L_ji at time t is changed by the ar-
rival of pre-synaptic spikes, by the generation of post-
synaptic spikes, and by the correlation existing between
these events. On the generation of a post-synaptic spike
(i.e., when S_i = 1), the value L_ji receives an increment
which is a decreasing function of the elapsed time from
the previous pre-synaptic spike at that synapse. Simi-
larly, when a spike arrives at the synapse, the variable
L_ji receives a decrement which is likewise a decreasing
function of the elapsed time from the previous post-
synaptic spike (i.e., when S_j = 1). This rule is summa-
rized by the following equation:

L_{ji}(t+1) = L_{ji}(t) + S_i(t) M_j(t) - S_j(t) M_i(t)

where S_i(t), S_j(t) are the state variables of the ith and
jth units and M_i(t), M_j(t) are interspike decay functions.
M_i(t) may be viewed as a “memory” of the latest
interspike interval,

M_i(t+1) = S_i(t) M_{max[q_i]} + (1 - S_i(t)) \, M_i(t) \exp(-1/\tau_{syn[q_i]})
Fig. 2. Pruning dynamics. The real-valued variable L_ji is increased or decreased according to the STDP rule. If the value L_ji reaches one of
the L_k user-defined boundaries, a jump occurs in the integer-valued variable [A_k]. At the beginning, all (exc → exc) synapses were set at activation
level [A_3]. (a) Example of potentiation with an increase in synaptic strength that is stabilized in the long term. (b) Example of depression with
a fast decrease in synaptic activation level down to its minimal level, [A_1] = 0, which provokes the elimination of the synapse. (c) Example of a
synaptic link which is affected neither by potentiation nor by depression, but whose efficacy decays down to [A_1] = 0 according to the time constant
τ_act.
where τ_syn[q_i] is the synaptic plasticity time constant
characteristic of each type of unit and M_max[q_i] was set to
M_max[q_i] = 2 for all units of either type in this study. In
the case that neither the pre- nor the post-synaptic unit
is firing a spike, the real-valued variable will decay
with a time constant k_act[q_j,q_i] = exp(−1/τ_act[q_j,q_i])
characteristic of each type of synapse, such that the
final equation is the following:

L_{ji}(t+1) = L_{ji}(t) \, k_{act[q_j, q_i]} + S_i(t) M_j(t) - S_j(t) M_i(t).
In the present study the differences between the
user-defined boundaries L_k were all equal, such that
ΔL_k = L_k − L_{k−1} = 20 for any attractor state [A_k].
At the beginning of the simulation all modifiable synapses
were set to activation level [A_3] = 2. Fig. 2a illustrates
a case when the synaptic link receives a potentiation
determined by the STDP rule described above. The
activation variable jumps from [A_3] to [A_4] and sta-
bilizes at the highest activation level. Fig. 2b illustrates a
case when the synapse is continuously depressed such
that the activation variable jumps from [A_3] to [A_2],
and then from [A_2] to [A_1], faster than its spontaneous
decay determined by the time constant k_act[q_j,q_i]. Fig. 2c
illustrates a case when the synapse is neither depressed
nor potentiated and the activation level spontaneously
decays down to the minimal level.
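The following C sketch puts together the rule of this section with the pruning criterion of Section 2.5 for a single [1,1] synapse. The absolute boundary values in L_BOUND are an assumption (only their 20-unit spacing is given in the text), and the names Synapse, stdp_step, and memory_step are ours.

#include <math.h>
#include <stdio.h>

#define N_STATES 4

static const int    A_LEVEL[N_STATES]     = { 0, 1, 2, 4 };
/* Boundaries spaced Delta L_k = 20 as in the text; state [A_{k+1}]
 * (index k) lies between L_BOUND[k] and L_BOUND[k+1]. */
static const double L_BOUND[N_STATES + 1] = { -30.0, -10.0, 10.0, 30.0, 50.0 };

typedef struct {
    int    k;     /* index of the current attractor state            */
    double L;     /* real-valued plasticity variable L_ji(t)         */
    int    alive; /* set to 0 once the synapse is pruned (Sect. 2.5) */
} Synapse;

/* Memory trace of a unit: reset to M_max = 2 on a spike, else decay. */
double memory_step(double m, int s, double tau_syn) {
    return s ? 2.0 : m * exp(-1.0 / tau_syn);
}

/* One time step of the rule: decay of L, STDP increment/decrement,
 * jumps between attractor states, and pruning at [A_1] = 0. */
void stdp_step(Synapse *sy, int s_pre, int s_post,
               double m_pre, double m_post, double tau_act) {
    if (!sy->alive) return;
    sy->L = sy->L * exp(-1.0 / tau_act) + s_post * m_pre - s_pre * m_post;

    if (sy->k < N_STATES - 1 && sy->L > L_BOUND[sy->k + 1])
        sy->k++;                               /* potentiation jump  */
    else if (sy->k > 0 && sy->L < L_BOUND[sy->k])
        sy->k--;                               /* depression jump    */
    else
        return;                                /* no jump this step  */
    /* After a jump, L is reset to the midpoint of the new bracket. */
    sy->L = (L_BOUND[sy->k] + L_BOUND[sy->k + 1]) / 2.0;
    if (A_LEVEL[sy->k] == 0) sy->alive = 0;    /* synaptic pruning   */
}

int main(void) {
    Synapse sy = { 2, 20.0, 1 };   /* all (exc->exc) start at [A_3] = 2 */
    /* Pre-after-post pairings only: pure depression (m_post = M_max). */
    int t = 0;
    while (sy.alive && t++ < 1000)
        stdp_step(&sy, 1, 0, 0.0, 2.0, 11000.0);
    printf("after %d steps: level %d, alive = %d\n", t, A_LEVEL[sy.k], sy.alive);
    return 0;
}

Driving the sketch with repeated pre-after-post pairings makes the activation level fall from [A_3] to [A_1] = 0, after which the synapse is eliminated, reproducing the depression path of Fig. 2b.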
2.5. Synaptic pruning
No generation of new projections is allowed in the
present study, although specific rules could be defined
to this purpose. Synaptic pruning occurs when the ac-
tivation level of a synapse reaches a value of zero. This
means that synaptic pruning may occur only for synap-
tic connections of type [1,1], which are also the most
abundant, when the activation level Aji decreases to its
minimal value, i.e. [A1]=0. In this case the synapse
[i, j] is eliminated from the network connectivity.
2.6. Background activity
The background activity Bi(t) is used to simulate
the input of afferents to the ith unit that are not ex-
plicitly simulated within the network. Let us assume
that each type of unit receives n_ext[q_i] external affer-
ents. In the present study we simplify by setting that all
units receive the same number of external projections
and that all of them are excitatory. Namely, we assume
n_ext = 50 and that the post-synaptic potential gener-
ated by these external afferents is fixed to a value equal
to P[1,1]. In the current case (see Table 1) each external
afferent generates an excitatory post-synaptic potential
equal to 0.84 mV.
We assume that the external afferents are correlated
among them. This means that each time a unit is re-
ceiving a correlated input from 50 external afferents its
membrane potential is depolarized to an extent that will
generate a spike. Such external input is distributed ac-
cording to a Poisson process which is independent for
each unit and with mean rate λi. The rate of external
background activity is a critical parameter. We found
that with all previous parameters being kept constant a
rate of background activity λi<8 spikes/s is unable to
sustain any activity at all. In the present study we have
set the Poisson input to a rate λ_i = 10 spikes/s.
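As a sketch of this input, assuming 1 ms time steps, a Bernoulli draw per step with probability λ_i Δt approximates the Poisson process; each event delivers the correlated volley of 50 × 0.84 mV. The original simulator used the GSL for random numbers; plain rand() is used here for brevity.

#include <stdio.h>
#include <stdlib.h>

#define DT_MS     1.0     /* one time step = duration of a spike (1 ms) */
#define LAMBDA_HZ 10.0    /* rate of the external Poisson input         */

/* Background depolarization B_i(t) of one unit for one time step:
 * with probability lambda*dt a correlated volley of 50 external
 * afferents arrives, each contributing P[1,1] = 0.84 mV. */
double background(void) {
    double p = LAMBDA_HZ * DT_MS / 1000.0;
    return ((double)rand() / RAND_MAX < p) ? 50 * 0.84 : 0.0;
}

int main(void) {
    int events = 0;
    srand(1);
    for (long t = 0; t < 1000000L; t++)   /* 10^6 steps, i.e. 1000 s */
        if (background() > 0.0) events++;
    printf("%d volleys (~%.1f spikes/s, expected 10)\n",
           events, events / 1000.0);
    return 0;
}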
2.7. Network size
We investigated the pruning dynamics with net-
works of different sizes. The smallest network was de-
fined by 10 × 10 units and the largest network stud-
ied here was 100 × 100 units, i.e. (10 × N)² units with
N ∈ {1, ..., 10}. To compensate for the changes in
the balance between excitation and inhibition induced
by the change of size, we introduced the scaling fac-
tor f = ∛(10⁴/(10 × N)²), where N is the size as de-
scribed before. The uniform probability for an excita-
tory unit to project to any other unit of the network was
scaled according to φ′_[1] = f φ_[1], leading to a larger
Table 2
The scaled parameter values for each network size N

N     Size        φ′_[1] (%)   P′_[1,1] (mV)
1     10 × 10     9.28         2.36
2     20 × 20     5.84         1.64
3     30 × 30     4.46         1.35
4     40 × 40     3.68         1.19
5     50 × 50     3.17         1.08
6     60 × 60     2.81         1.01
7     70 × 70     2.53         0.95
8     80 × 80     2.32         0.90
9     90 × 90     2.14         0.87
10    100 × 100   2.00         0.84

See text for details.
number of excitatory connections at the beginning of
the simulations for smaller networks. The level of
post-synaptic depolarization for excitatory–excitatory
synapses was scaled as P′_[1,1] = ((1 + f)/2) P_[1,1],
so that the strength of these connections was larger for
smaller networks. Table 2 lists the scaled values of φ′_[1]
and P′_[1,1] we used. Note that the values for N = 10
correspond to those listed in Table 1.
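A short C sketch reproduces Table 2 from the scaling relations above (as reconstructed here: f = ∛(10⁴/(10 × N)²), φ′_[1] = f φ_[1], and P′_[1,1] = ((1 + f)/2) P_[1,1]); these closed forms were checked against the tabulated values.

#include <math.h>
#include <stdio.h>

int main(void) {
    const double phi1 = 2.0;    /* phi_[1] in %, Table 1  */
    const double P11  = 0.84;   /* P_[1,1] in mV, Table 1 */
    for (int N = 1; N <= 10; N++) {
        /* f = cube root of 10^4 / (10N)^2; f = 1 for N = 10. */
        double f = cbrt(1.0e4 / ((10.0 * N) * (10.0 * N)));
        printf("N=%2d  %3d x %-3d  phi'=%5.2f%%  P'=%4.2f mV\n",
               N, 10 * N, 10 * N, f * phi1, (1.0 + f) / 2.0 * P11);
    }
    return 0;
}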
2.8. Simulation tools
The simulator was a custom-developed, Open
Source, C program that relies on the GNU Scientific Li-
brary (GSL) for random number generation and quasi-
random Sobol distribution implementations. A MySQL
database back end stored all the configuration details
and logs for later reference. This information was ex-
tracted and formatted by a set of PHP Web scripts to
monitor the status of the simulations and create new
ones. With our current implementation at the Univer-
sity of Lausanne, a 10,000-unit network simulation for
a duration of 1 × 10⁶ time steps lasted approximately
8 h, depending on the network global activity.
3. Results
All synapses of type [1,1], i.e. (exc → exc), were
initialized with A_ij(t = 0) = [A_3]. In the presence of back-
ground activity only, most synapses were character-
ized by a decrement of the activation level. After a
long time, t = t_steady, the network activity is stabilized
and STDP no longer modifies the activation
level of the synapses. At time t = t_steady most modifi-
Fig. 3. Pruning dynamics averaged over n = 10 simulations for each
network size. With the proposed size compensation factor, the prun-
ing dynamics are comparable for network sizes N ∈ {4, ..., 10}.
Simulations for N = 1 and 2 saturated, suggesting that the compen-
sation factor was too large for those two specific sizes.
able synapses were eliminated and almost all remain-
ing active synapses were characterized by the highest
possible activation level, i.e. [A4]. We observed that
t = t_steady could be as long as t = 1 × 10⁶ in several
simulation runs.
3.1. Network size
It is interesting to notice that the final ratio of active
synapses (R_max[A]) represented only a few percent of
the initial number of synapses (Fig. 3). In addition, it
is important to notice that those connections that reach
the maximal activation level do not necessarily remain
active until t=tsteady. Several synapses reached level
[A4] after some delay, then their activation level de-
creased down to [A1]=0, at variable speed, and the
synapse was eventually eliminated.
It is important that a network be attuned to work
in a range such that background activity is unable to
create spurious attractors by STDP. This means that
background activity alone should not create stable con-
nections that would shape the topology of cell assem-
blies. The size of the network is critical if the goal is to
detect the emergence of chains of interconnected units
embeddedin a largenetwork.Fig. 3 showsthatthe ratio
of active synapses with activation level equal to [A4]
could be as high as 50% of all initial synapses. The final
percentage of active synapses is much less variable and at
tsteady it is always less than 10% for networks that did
not saturate.
3.2. “Seed” effect
A simulation study that relies on extensive use of ran-
domly generated numbers may fall into local minima
or spurious attractors simply by chance. It was nec-
essary to assess the effect of the seed of the random
number generator on our simulation. The most critical
effect of the randomization may occur at the very be-
ginning, when the initial network topology is created
according to the density functions of the connections
of the different types of units. The very same simula-
tion, with the parameter set described in Table 1, was
repeated 100 times using different random seeds with
the largest network size, i.e. 100 ×100 units.
The choice of the seed had a significant impact on
the value of R_max[A] at time t_steady, as it could vary in
the range [1.30, 6.03]%. However, as shown by the dis-
tribution of R_max[A] (Fig. 4), about 90% of these values
lay between 3.0 and 6.0%. Moreover, we
never observed cases with absence of convergence at
delays as large as t = 1 × 10⁶. This indicates that a
“seed effect” exists but this does not cause changes in
the overall dynamics of synaptic pruning.
A bias in the orientation of the connections could oc-
cur by random choices. In order to test this hypothesis,
two cases of extreme values of R_max[A] observed in the
distribution of Fig. 4 were selected. The first case corre-
sponds to R_max[A] as low as R_1 = 1.97%. The second
case corresponds to R_max[A] as high as R_2 = 6.03%.
Fig. 4. Random seed effect on the number of synapses that remain
after pruning. After n = 100 simulations that used different seeds, the
distribution of R_max[A] at time t = t_steady = 1 × 10⁶, with bin = 0.5,
shows that in the majority of the runs synaptic pruning left 3.0–6.0%
active synapses at the maximum level [A_4].
For both cases we calculated the deviation from an
isotropic connection, as defined previously for Fig. 1c
and g, for all active synapses, i.e. with an activation
level not equal to [A1]=0. Then we calculated an av-
erage deviation plot that corresponds to the mean of
the orientations computed over all active synapses at
given times t, namely at t_1 = 1 × 10⁵, t_2 = 2 × 10⁵,
and t = t_steady = 1 × 10⁶.
Fig. 5a shows the evolution of the orientation map
in the case R1, when the network stabilizes with
a low level of active connections. In this example
the initial number n of connections at time t_0 was
n = 1,517,240. At t_1, 330,920 synapses remained,
at t_2, 172,503 synapses remained, and eventually the
Fig. 5. Random seed effect on the orientation and length of active connections. (a) Average orientation map. A circular line indicates an isotropic
orientation of the projections of a unit ideally located at the center marked by a cross. This simulation corresponds to run R_1 of Fig. 4. The average
deviation from isotropy for all active connections is plotted at various times. The last line corresponds to t_steady: 30,864 synapses remained active,
all with active state [A_4], which corresponded to 1.97% of the initial number of synapses at time t_0. (b) Normalized histogram of the length of
the source-to-target projections measured in Euclidean distance in the 2D lattice for simulation run R_1. A flat line at ratio = 1 indicates that the
distances are totally predicted by the modified 2D Gaussian distribution function described in the text (cf. Section 2.1). (c) Average orientation
map corresponding to run R_2 of Fig. 4. At t_steady, 93,346 synapses remained active, all with active state [A_4], which corresponded to 6.03% of the
initial number of synapses at time t_0. (d) Normalized histogram of the length of the source-to-target projections measured in Euclidean distance
in the 2D lattice for simulation run R_2.
network stabilized with 30,864 excitatory–excitatory
synapses. The orientation map shows that the devia-
tions from an isotropic distribution were equally dis-
tributed in all directions. Another factor that could be
affected by the random choice is the distance from
source-to-target (calculated as an Euclidean distance
over the 2D lattice) of the remaining projections. The
histogram of the distribution of these distances (Fig. 5b)
was normalized with respect to the probability distri-
bution of establishing a connection. In this normalized
histogram a ratio of 1 means that the count is perfectly
determined by the probability distribution. In the case
of R_1 we observe that there was a tendency toward some
deviation from the original probability function, but
this variance was the same for any source-to-target dis-
tance. In the case of R_2, the number of initial synapses was
n = 1,512,634 and at t_steady 93,346 active synapses
remained. This analysis shows that the “seed” effect
introduces no significant bias in either the orientation
or the length of the connections that were
selected by pruning.
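The normalization of Fig. 5b and d can be sketched as follows: the histogram of source-to-target distances of the surviving synapses is divided, bin by bin, by the counts expected under the connection density of Section 2.1, and the ratios are rescaled by their mean so that unbiased pruning gives a flat line at 1. Bin width, array sizes, and the function names (expected_hist, normalize_hist) are our assumptions.

#include <math.h>

#define SIZE  100
#define NBINS 71     /* torus distances run up to sqrt(2)*50 ~ 70.7 */

/* Expected count per integer distance bin under the density G(x, y)
 * of Section 2.1, summed over n_units source units. */
void expected_hist(double exp_h[NBINS], double alpha, double sigma,
                   double phi, int n_units) {
    for (int b = 0; b < NBINS; b++) exp_h[b] = 0.0;
    for (int dx = -SIZE / 2; dx < SIZE / 2; dx++)
        for (int dy = -SIZE / 2; dy < SIZE / 2; dy++) {
            if (dx == 0 && dy == 0) continue;     /* no self-connection */
            double g = alpha * exp(-2.0 * M_PI * (dx * dx + dy * dy)
                                   / (sigma * sigma)) + phi;
            exp_h[(int)sqrt((double)(dx * dx + dy * dy))] += g * n_units;
        }
}

/* ratio[b] = observed/expected, rescaled by the mean ratio so that
 * pruning with no length bias gives a flat line at 1 (cf. Fig. 5b,d). */
void normalize_hist(const int obs[NBINS], const double exp_h[NBINS],
                    double ratio[NBINS]) {
    double mean = 0.0;
    int    nb   = 0;
    for (int b = 0; b < NBINS; b++) {
        ratio[b] = exp_h[b] > 0.0 ? obs[b] / exp_h[b] : 0.0;
        if (exp_h[b] > 0.0) { mean += ratio[b]; nb++; }
    }
    if (nb == 0 || mean == 0.0) return;
    mean /= nb;
    for (int b = 0; b < NBINS; b++) ratio[b] /= mean;
}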
4. Discussion
We assumed a number of simplifying hypotheses
about the presence of only two types of units, their leaky
integrate-and-fire dynamics, their distribution and the
dynamics of the transfer functions of the synapses that
connect these units. With all these assumptions we ob-
served that the network reached a steady state when
the synaptic weights were either incremented to the
maximum value or decremented to the lowest value.
Our result is in agreement with the bimodal distribution
of synaptic strengths observed with a different STDP-
based model (Chechik et al., 1999; Song et al., 2000;
Song and Abbott, 2001). This effect is interpreted as a
consequence of the STDP rule that leads pre-synaptic neurons
to compete for the capacity to drive the post-synaptic
unit to produce an all-or-none output signal, akin to an
action potential.
The choice of a 2D lattice topology allowed us to
study the effect of incrementing the network size from
10 ×10 to 100 ×100 units. It was interesting to ob-
serve that the final ratio of synapses that remained ac-
tive (that we labeled here Rmax[A]) was below 10% of
the number of initially active synapses. This ratio varied
only slightly with changes in network size (MacGregor
et al., 1995) and the effect of different random seeds at
the initialization was also limited. On the other hand,
we observed that the ratio of active synapses character-
ized by the maximum weight could transiently reach
a proportion as high as 50% of all initial synapses.
This could indicate that a very large network may not
be necessary for recurrent networks to emerge. Inter-
connected sizable “modules” up to 50 ×50 or 60 ×60
units embedded in larger networks may offer a more
efficient way to recruit active synapses that compete
for generating a post-synaptic spike.
A bias in the geometrical orientation of the synapses
at the network level might produce important effects
on the global dynamics as it could introduce singulari-
ties in the network topology. These singularities could
sustain attractors with unbalanced excitatory/inhibitory
inputs if they were the consequence of content-related
inputs (Hill and Villa, 1997). In the presence of only back-
ground noise these attractors would be spurious and
could mask input-related features. We observed that
the reinforcement of a few synapses occurred without
geometric distortions in both direction and source-to-
target distance over the 2D lattice. Synaptic pruning
proceeded in a homogeneous and isotropic way across
the network. This result suggests that the implementa-
tion of the current STDP rule is equivalent to random
pruning and does not introduce spurious biases.
The present work is currently extended in two di-
rections that will be reported in future articles. The
first direction consists in studying the effect of dif-
ferent synaptic transfer functions without changing
the other parameters of the simulation. This would
introduce temporally asymmetrical STDP (Bi and
Wang, 2002), where both time window and effi-
cacy changes are different for LTD than for LTP
(Stuart and Hausser, 2001). In addition to the mod-
ifiable synaptic rule it would be interesting to in-
troduce more realistic transfer functions in all types
of synapse, in particular in the inhibitory synapses,
to account for modulation frequencies and pre-
synaptic spike interval distribution (Segundo et al.,
1995a,b).
The second extension of this work is the introduction
of content-related inputs, i.e. spatiotemporal patterns of
discharges associated with selected stimuli (Abeles, 1991;
Hopfield and Brody, 2000; Villa, 2000). Preliminary
observations carried out with a simplified version of
this simulation have demonstrated that these types of
network may store the traces of time-varying stimuli
such that similar stimuli blurred with noise can evoke
an activity pattern close to the original one (Eriksson
et al., 2003; Torres et al., 2003).
Acknowledgements
The authors thank Dr. Yoshiyuki Asai for discus-
sions and comments on the manuscript. This work is
partially funded by the Future and Emerging Technolo-
gies program (IST-FET) of the European Community,
under grant IST-2000-28027 (POETIC), and under
grant OFES 00.0529-2 by the Swiss Government.
References
Abeles, M., 1991. Corticonics: Neural Circuits of the Cerebral Cor-
tex. Cambridge University Press.
Bi, G.Q., Poo, M.M., 1998. Synaptic modifications in cultured
hippocampal neurons: dependence on spike timing, synaptic
strength, and postsynaptic cell type. J. Neurosci. 18 (24), 10464–
10472.
Bi, G.Q., Wang, H.X., 2002. Temporal asymmetry in spike timing-
dependent synaptic plasticity. Physiol. Behav. 77 (4/5), 551–555.
Chechik, G., Meilijson, I., Ruppin, E., 1999. Neuronal regulation: a
mechanism for synaptic pruning during brain maturation. Neural
Computation 11, 2061–2080.
Engel, D., Pahner, I., Schulze, K., Frahm, C., Jarry, H., Ahnert-
Hilger, G., Draguhn, A., 2001. Plasticity of rat central inhibitory
synapses through GABA metabolism. J. Physiol. 535 (2), 473–
482.
Eriksson, J., Torres, O., Mitchell, A., Tucker, G., Lindsay, K., Rosen-
berg, J., Moreno, J.-M., Villa, A.E.P., 2003. Spiking neural net-
works for reconfigurable POEtic tissue. Lecture Notes Comput.
Sci. 2606.
Froemke, R.C., Dan, Y., 2002. Spike-timing-dependent synaptic
modification induced by natural spike trains. Nature 416 (6879),
433–438.
Fusi, S., Annunziato, M., Badoni, D., Salamon, A., Amit, D.J., 2000.
Spike-driven synaptic plasticity: theory, simulation, VLSI imple-
mentation. Neural Comput. 12, 2227–2258.
Hill, S.L., Villa, A.E.P., 1997. Dynamic transitions in global network
activity influenced by the balance of excitation and inhibition.
Network: Computat. Neural Syst. 8, 165–184.
Hopfield, J.J., Brody, C.D., 2000. What is a moment? “Cortical”
sensory integration over a brief interval. Proc. Natl. Acad. Sci.
USA 97 (25), 13919–13924.
Hopfield, J.J., Brody, C.D., 2004. Learning rules and network repair
in spike-timing-based computation networks. Proc. Natl. Acad.
Sci. USA 101 (1), 337–342.
Karmarkar, U.R., Buonomano, D.V., 2002. A model of spike-timing
dependent plasticity: one or two coincidence detectors? J. Neu-
rophysiol. 88 (1), 507–513.
Kelso, S.R., Ganong, A.H., Brown, T.H., 1986. Hebbian synapses in
hippocampus. Proc. Natl. Acad. Sci. USA 83 (14), 5326–5330.
Kepecs, A., van Rossum, M.C.W., Song, S., Tegner, J., 2002. Spike-
timing-dependent plasticity: common themes and divergent vis-
tas. Biol. Cybernet. 87, 446–458.
Lumer, E.D., Edelman, G.M., Tononi, G., 1997. Neural dynamics
in a model of the thalamocortical system. ii. The role of neural
synchrony tested through perturbations of spike timing. Cerebral
Cortex 7 (3), 228–236.
MacGregor, R.J., Ascarrunz, F.G., Kisley, M.A., 1995. Characteri-
zation, scaling, and partial representation of neural junctions and
coordinated firing patterns by dynamic similarity. Biol. Cybernet.
73 (2), 155–166.
Markram, H., Lubke, J., Frotscher, M., Sakmann, B., 1997. Regula-
tion of synaptic efficacy by coincidence of postsynaptic APs and
EPSPs. Science 275 (5297), 213–215.
Mimura, K., Kimoto, T., Okada, M., 2003. Synapse efficiency diverges
due to synaptic pruning following over-growth. Phys. Rev. E Stat.
Nonlin. Soft Matter Phys. 68.
Press, W.H., Flannery, B.P., Teukolsky, S.A., Vetterling, W.T., 1992.
Numerical Recipes in C: The Art of Scientific Computing, second
ed. Cambridge University Press.
Rakic, P., Bourgeois, J.P., Eckenhoff, M.F., Zecevic, N., Goldman-
Rakic, P.S., 1986. Concurrent overproduction of synapses in di-
verse regions of the primate cerebral cortex. Science 232 (4747),
232–235.
Segundo, J.P., Stiber, M., Vibert, J.F., Hanneton, S., 1995a. Period-
ically modulated inhibition and its postsynaptic consequences.
i. General features, influence of modulation frequency. Neuro-
science 68 (3), 657–692.
Segundo, J.P., Stiber, M., Vibert, J.F., Hanneton, S., 1995b. Period-
ically modulated inhibition and its postsynaptic consequences.
ii. Influence of modulation slope, depth, range, noise and of
postsynaptic natural discharges. Neuroscience 68 (3), 693–
719.
Sjöström, P.J., Turrigiano, G.G., Nelson, S.B., 2003. Neocorti-
cal LTD via coincident activation of presynaptic NMDA and
cannabinoid receptors. Neuron 39, 641–654.
Song, S., Abbott, L.F., 2001. Cortical development and remapping
through spike timing-dependent plasticity. Neuron 32 (2), 339–
350.
Song, S., Miller, K.D., Abbott, L.F., 2000. Competitive Hebbian
learning through spike-timing-dependent synaptic plasticity. Nat.
Neurosci. 3, 919–926.
Stuart, G.J., Hausser, M., 2001. Dendritic coincidence detection of
EPSPs and action potentials. Nat. Neurosci. 4 (1), 63–71.
Torres, O., Eriksson, J., Moreno, J.M., Villa, A.E.P., 2003. Hard-
ware optimization of a novel spiking neuron model for
the POEtic tissue. Lecture Notes Comput. Sci. 2687, 113–
120.
Tyrrell, A.M., Sanchez, E., Floreano, D., Tempesti, G., Mange, D.,
Moreno, J.-M., Rosenberg, J., Villa, A.E.P., 2003. POEtic: An
integrated architecture for bio-inspired hardware. Lecture Notes
Comput. Sci. 2606.
Villa, A.E.P., 2000. Empirical evidence about temporal structure in
multi-unit recordings. In: Miller, R. (Ed.), Time and the Brain,
vol. 2. Harwood Academic Publishers, pp. 1–51.
Wigstrom, H., Gustafsson, B., 1986. Postsynaptic control of hip-
pocampal long-term potentiation. J. Physiol. 81 (4), 228–
236.
Woodin, M.A., Ganguly, K., Poo, M., 2003. Coincident pre- and post-
synaptic activity modifies GABAergic synapses by postsynaptic
changes in Cl⁻ transporter activity. Neuron 39, 807–820.
Zecevic, N., Rakic, P., 1991. Synaptogenesis in monkey somatosen-
sory cortex. Cerebral Cortex 1 (6), 510–523.