Spiking Neuron Model Computational Performance
Charles Simon
FutureAI
Washington DC, USA
ORCID 0000-0003-2858-6501
Abstract: This paper contributes to an estimate of the computer power required to emulate the entire human neocortex by implementing a biologically plausible spiking neural algorithm and measuring the performance in a single multicore computer and in a cluster of networked computers. The results are extrapolated to the scale of the neocortex based on measurement of the computational performance on the single machine and the network traffic needed for server-to-server transfers. Algorithmic improvements are identified for future implementation.
The spiking neural model is based on observations of biological neurons and differs from most ANN algorithms in two important ways: 1) the array of synapses for any neuron is sparse and 2) significant processing is only needed for neurons which fire. These both contribute to the performance achieved on a single CPU, which is RAM-speed limited. On the other hand, the sparse synapse array makes this algorithm less amenable to GPU acceleration. The impact of learning on performance is also discussed.
Computational performance scales linearly with the number
of active synapses because the number of synapses is large relative
to the number of neurons. Importantly, although computational
and RAM requirements scale linearly with the number of synapses
per neuron, network data requirements for machine-to-machine
transfers generally scale with the number of neurons on the server.
The greater the capacity of each individual server, the lower the
network requirement.
The computational requirement is further reduced because the spiking model need not allocate and process synapses with near-zero weights. Most ANN models rely on representing tiny weights because these are necessary to their linear models. The brain, similarly, develops huge numbers of these synapses, but only because creating new synapses is extremely slow while adjusting the weight of an existing synapse is quick. The computer with a spiking model need not simulate these synapses.
The overall conclusion is that a model of the complete
neocortex could be implemented on today’s hardware. The
specific number of machines required depends on a number of
assumptions and whether the intent is to emulate in real time, or
slower or faster by some factor. A sample calculation is presented for 160 servers.
Keywords: spiking neural network, multicomputer, CPU performance
I. BACKGROUND
While this paper is focused on the performance of algorithms
in multicore and multicomputer configurations, some
neuroscience information is necessary to describe the scope of
the problem. Overall, the brain exceeds the performance of any
single CPU for the foreseeable future so this paper estimates the
issues in processing across multiple parallel CPUs. The same
issues are also relevant to a VLSI implementation although
processing and data rates will be different.
Throughout the paper it should be noted that most biological
measurements are approximations with only one or at best two
significant digits. This section also describes values selected for
subsequent estimates to help define the scope of variability in
the estimates.
A. Neuron Function
A schematic drawing of a neuron is shown in Fig. 1.
Fig. 1. Diagram of a neuron showing "Inputs" and "Outputs" which are synapses and may number many thousands. The myelin sheath on the axon is only present on long axons, which may be 100mm long in the neocortex, so this drawing is not to scale by several orders of magnitude. Shorter axons in the brain are not myelinated, which makes them slower but more densely packed, and they are still often hundreds of times longer than the cell body diameter. Diagram by Egm4313.s12 at English Wikipedia / CC BY-SA (https://creativecommons.org/licenses/by-sa/3.0)
The neuron receives inputs (xn) from other neurons via
synaptic connections to dendrites. Each synapse may contribute
ionic neurotransmitters which accumulate as a charge in the
neuron. The amount of charge contributed by a synapse is
considered its “weight” which varies from one synapse to
another and can be varied to facilitate learning. When the neuron
charge reaches a threshold the neuron fires which sends a neural
spike along the axon to the synapses (yn) which contribute
charge to other neurons. A synapse from a firing neuron will be
referred to as “active” as it will be shown that active synapses
are the primary component of computer simulation load.
The neuron is essentially a digital device in that neural spikes
are about the same size and variations in spike shape are
considered noise. Relative spike timing is usually considered its
only variable feature, and this is also subject to a great deal of
noise (jitter). The amount of charge contributed by a synapse is
limited to approximately 100 discrete values [Montgomery].
Although neurons are typically described in terms of the
continuous mathematical functions relating to membrane
diffusion, exponential charge decay, etc., discrete
approximations for these functions are used in this presentation
which likely exceed the accuracy of biological neurons because
of the high noise levels in the brain [Faisal].
B. Neuron Performance
Although the function of a neuron can be measured
electronically it is misleading to think of the neuron as an
electronic device. Instead it relies on the physical transport of
ions or changes in their orientation and thus works in timeframes of milliseconds, a billion times slower than today's electronic
components. The maximum expected firing rate for a neuron is
about 250Hz but this is not sustained as neurons in the neocortex
are estimated to fire only once every 6s on average [Grace,
Lennie]. This low average firing rate will be important in
calculating the number of neurons which fire vs. the number
emulated on a single server.
The length of the axon is variable and in neurons which
transfer signals to the human body, may be over a meter in
length. Within the brain axon lengths can be loosely grouped
into “long” with lengths averaging 100mm and “short” with
lengths averaging 10mm [Braitenberg]. This categorization will
become important in estimating the number of axons in a
computer model which cross a boundary between one physical
computer and another.
Nerve conduction velocity for unmyelinated short axons is
also quite slow at just a few m/s. This means that the signal
propagation from the cell body to the destination synapses may
take several milliseconds and this should be taken into account
when estimating the necessary timescale resolution of the
simulation. 2ms per neuron cycle is used in estimates.
Learning in biological neurons is not fully understood
although Hebbian learning is known to adjust synapse weights
based on near-concurrent firing of connected neurons. Other
learning mechanisms may also exist but learning likely only
affects a tiny portion of synapses at any given time. For example,
once learned, the synapses involved in reading or understanding
language cannot change substantially or one could forget these
abilities rapidly if they were not used/reinforced. While one
might learn new words, most learned words, the recognition of
characters, etc. are seldom modified.
C. Useful Synapses
At its destination, the axon branches out into as many as
10,000 synapses. The computer can allocate new synapses
quickly while the brain cannot. Biological synapse weights can
be modified in tens of milliseconds while synapse creation and
migration happen over periods of hours and days. This means
the brain must include a large number of near-zero-weight
“synapses-in-waiting” to be used when the need arises by
adjusting the weight. The computer need not simulate these
because additional synapses can be allocated quickly when
needed. The distribution of synapse weights within the
neocortex would help to determine the actual number of
synapses needed for the simulation but this is not presently
known. It is also likely (but not yet observed) that multiple
parallel synapses are needed to create an effective high synapse
weight (again, the distribution of synapse weights would be
useful). In simulation, these multiple synapses can be
consolidated into a single synapse with a weight equal to their
sum. For the simulations, a factor of 100 is used meaning that
instead of 10,000 synapses per neuron, only 100 are simulated.
The effect of this factor is clearly stated so adjustment can be
made easily to the overall estimates.
Although not addressed in this paper, a similar factor may
likewise be applied to the number of neurons to be simulated.
The brain contains many redundant neurons for reliability while
the computer can ignore these because the computer is much
more reliable. Further, the computer may be able to simulate
clusters of neurons easily to eliminate the need for substantial
numbers of individual neurons. One might conclude that a full
neocortex simulation might be accomplished with many times
fewer neurons than the brain possesses.
D. The brain
The human brain can be divided into three parts: the
brainstem which is largely responsible for autonomic functions;
the cerebellum which is responsible for muscular coordination;
and the neocortex which is responsible for higher level
functions. This paper will focus specifically on the neocortex.
The neocortex contains about 16 billion neurons which are
concentrated near the convoluted outer surface while the interior
consists of a mass of axonal connections. If unfolded the
neocortex would be a rough disk with an area of 2600cm2 (a
250mm radius) as shown in Fig. 2. In the neocortex, the neurons
are in several layers near the surface of the neocortex but for the
purpose of these calculations, the layering can be ignored with
all the neurons assumed to exist in a single layer.
The neuron density is therefore 16 billion/2600cm2 or
~60,000/mm2 or (linearly) ~250neurons/mm. With the average
short axon length of 10mm, we can expect the neurons routinely
connect to others 2,500 neurons away or more. The flattening of the simulated neocortex will alter synapse lengths somewhat, but 10mm will continue to be used for estimation.
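As a worked check of these figures: 2600 cm² is 260,000 mm², so 16×10⁹ / 260,000 ≈ 61,500 neurons/mm², and √61,500 ≈ 250 neurons/mm.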
Fig. 2. The neocortex can be modeled as a disk of neurons with the two
hemispheres being largely independent. Each with 8 billion neurons, they
are connected by the 300 million fibers of the corpus callosum which
represent the ~100mm-long axons of their respective neurons. Within
each hemisphere, the number of axons crossing any particular boundary
can be estimated by considering a line of neurons forming a perpendicular
to the boundary and multiplying by the length of the boundary or about
200,000 axons/mm of boundary length (in each direction).
We can use these factors to estimate the number of axons
which cross any given boundary within the neocortex. The
likelihood that any randomly-oriented given 10mm axon crosses
a boundary is given by:

P(d) = \arccos(d/10) / \pi \qquad (1)
where d is the distance from the neuron to the boundary (in
mm). This is the portion of a circle of radius 10mm centered on
the neuron which crossed the boundary.
Since the neuron can be anywhere from 0 to the axon length
away from the boundary, summing these probabilities along a
line of neurons perpendicular to the boundary (as in the inset of
Fig. 2.) leads to the expectation that any row of neurons will
likely present approximately 800 axons crossing the boundary
or 200,000 axons/mm of boundary length.

\sum_{i=0}^{2500} \arccos(i/2500) / \pi \approx 800 \qquad (2)
In a neocortex hemisphere any radial slice through the
neocortex can be expected to be crossed by 50 million axons.
This figure will be used to estimate the amount of data to be
transferred from machine to machine if the neocortex were
subdivided into multiple sectors. Long connections serve to
increase the data transfer requirement.
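These figures can be checked numerically; the short script below is an illustrative sketch of Eqs. (1) and (2) under the assumptions above (10 mm axons, 250 neurons/mm, 250 mm hemisphere radius, random axon orientation within the cortical sheet).

```python
import math

AXON_LEN_MM = 10          # average "short" axon length
NEURONS_PER_MM = 250      # linear neuron density
RADIUS_MM = 250           # radius of the flattened hemisphere
SPAN = AXON_LEN_MM * NEURONS_PER_MM   # 2,500 neurons lie within axon reach of the boundary

# Eq. (1) gives the fraction of directions in which an axon i neuron-spacings
# from the boundary crosses it; Eq. (2) sums this along a perpendicular row.
per_row = sum(math.acos(i / SPAN) / math.pi for i in range(SPAN + 1))
per_mm = per_row * NEURONS_PER_MM

print(f"crossings per row of neurons: {per_row:.0f}")                          # ~800
print(f"crossings per mm of boundary: {per_mm:,.0f}")                          # ~200,000
print(f"crossings of a radial slice: {per_mm * RADIUS_MM / 1e6:.0f} million")  # ~50
```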
II. THE SIMPLEST NEURAL ALGORITHM
The simplest neural algorithm is “Integrate and Fire”
[Abbott] which is given by equations 3, 4, and 5. Numerous
features could be added which make the algorithm more
biologically accurate [Gerstner] as will be discussed later.
The algorithm is split into two phases so the calculation
becomes independent of the order of the neuron calculation and
is more amenable to parallel computation. In the equations, time t+ (calculated in Eq. 3) is the intermediate time between t and t+1, and u^{t+} represents the intermediate charge value which is calculated for each neuron; w_n are the synapse weights, x_n^t the input spikes, y^{t+1} the output spike, and \theta the firing threshold. In the second phase (Eqs. 4 & 5), the internal value is updated for all neurons and, if the threshold has been reached, is reset to zero and a spike is transferred to the output.

u^{t+} = u^{t} + \sum_{n} w_n x_n^{t} \qquad (3)

u^{t+1} = \begin{cases} u^{t+}, & u^{t+} < \theta \\ 0, & u^{t+} \ge \theta \end{cases} \qquad (4)

y^{t+1} = \begin{cases} 0, & u^{t+} < \theta \\ 1, & u^{t+} \ge \theta \end{cases} \qquad (5)
As an example of the issue this two-phase calculation
corrects: if a neuron receives two inputs with weights +1 and -1,
the order in which these are processed could change the
outcome: if the +1 arrives first, the neuron will fire; if the -1
arrives first, it will not. So in a multiprocessing implementation,
the output would be indeterminate. With the two-phase
approach, all summing is performed prior to threshold detection.
In the experimental implementation, the algorithm is
“inside-out” in that each neuron maintains a list of synapses
which are its outputs. If the neuron fires, it adds the synapse
weight to the internal charge of each target neuron. A
synchronization lock on each target neuron charge value allows
for multiprocessor operation without potential race conditions.
In practice, such collisions are extremely rare so these locks are
insignificant to performance.
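A minimal sketch of this two-phase, "inside-out" update is shown below. It is illustrative Python rather than the actual implementation, and names such as Neuron and step are hypothetical; in the multithreaded version each charge update would be guarded by the per-neuron lock described above.

```python
from dataclasses import dataclass, field

THRESHOLD = 1.0   # firing threshold (arbitrary illustrative value)

@dataclass
class Neuron:
    charge: float = 0.0        # accumulated internal charge u
    fired: bool = False        # output spike y from the previous cycle
    synapses: list = field(default_factory=list)   # output list of (target index, weight)

def step(neurons):
    """One two-phase cycle; the result is independent of neuron processing order."""
    # Phase 1 (Eq. 3): only neurons that fired do any work, adding their
    # synapse weights to the pending charge of each target neuron.
    for n in neurons:
        if n.fired:
            for target, weight in n.synapses:
                neurons[target].charge += weight

    # Phase 2 (Eqs. 4 and 5): threshold test and reset for every neuron.
    for n in neurons:
        n.fired = n.charge >= THRESHOLD
        if n.fired:
            n.charge = 0.0
```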
It is important to note distinctions between this spiking algorithm and more typical ANN algorithms. This algorithm's neurons output digital spikes as opposed to floating point numbers. First, no multiplication is needed: the weight of the synapse is simply added to the charge of the target neuron. Further,
processing is only required for neurons which are firing. Thus,
processing time goes up with the number of neurons which are
firing and the overall array size contributes only a slight
overhead. Based on the fact that a neuron fires only once every
6s on average and using a 2ms cycle time, an individual neuron
would be expected to fire once every 3,000 cycles. For an array
of 100M neurons processing is expected for only 33,000 neurons
in each cycle. If a 1ms cycle time is selected, the expected
number of firings is only 16,000.
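As a worked check: 6 s / 2 ms = 3,000 cycles between firings for an average neuron, so 10^8 neurons / 3,000 ≈ 33,000 neurons firing per cycle.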
Further, in this algorithm, synapses of a neuron can connect
directly to any other neuron in the network. In the brain, the
synapses connect to other neurons within the radius of the axon
length (10mm) so there can be 10,000 connections from a
possible 6.5M target neurons. This still represents such a sparse
array that this algorithm is much less amenable to the GPU
acceleration favored by ANN algorithms which rely on filled
arrays.
The focus of most ANN systems relates to backpropagation
learning. For this discussion, learning affects such a tiny portion
of synapses in any cycle that it is not included in this
performance analysis and so the analysis is feedforward only.
Although Hebbian learning has been implemented it is not
included in this analysis.
A departure from biological equivalence in this simulation is
that all synapses run directly from one neuron to any other.
Because biological synapses are clustered at the end of the axon,
improved efficiency may be possible, particularly in a
multicomputer implementation.
III. PERFORMANCE IN A MULTICORE ENVIRONMENT
In this section data is presented for processing performance
on a single server which can be used in estimating the number
of servers for the neocortex simulation and some configuration
requirements (RAM, cores, etc.) for each server. All timing
measurements are made using the system high-precision clock
which presents time in 100ns increments. Timings were then
calculated with a moving average of 100 readings to create
repeatable results.
Tests were performed on a 64-core AMD Ryzen 3990X CPU
running at 4.0 GHz with 128GB of quad-channel DDR4 RAM
running at 1045.5 MHz with Windows 10 Pro.
A. Sensitivity to number of neurons (overhead)
There are two components to the algorithm which
predominate with different configurations of the network: 1)
“overhead” and 2) “neuron processing.” As mentioned before,
processing is only required for neurons which fire but there is
some degree of overhead which scales more-or-less linearly
with the number of neurons. This was measured by allocating
neural arrays with no synapses and no neurons firing as shown
in Table I. This area of code has been optimized to minimize
RAM access and so is substantially faster with increasing
numbers of threads. At this stage of development, it appears that
overhead processing is intractable, so any real-time simulation requirement is limited by overhead issues.
In further tests, overhead processing has not been subtracted
out but explains the mixed-slope processing times. Note also
that for 100M and 1G neurons, RAM limits on the test server
precluded allocation of substantive numbers of synapses per
neuron.
TABLE I. OVERHEAD TIMING MEASUREMENTS

  Number of neurons                  1M     10M    100M   1G
  Time per cycle (ms), 124 threads   0.70   1.8    3.7    26
  Time per cycle (ms), 32 threads    0.52   1.3    7.6    62
  Time per cycle (ms), 16 threads    0.4    0.96   8.4    82
B. Sensitivity to number of threads
For these tests, an array of one million neurons was
allocated, each with 100 random synapses. These arbitrary
numbers were chosen to facilitate ease of testing. Random
synapse weights were adjusted so that approximately 33,000
neurons per cycle would be firing which is representative of the
number of expected neurons firing in an array of 100 million
neurons with a 2ms cycle time. If one were to decrease the cycle
time to 1ms, then only 16,000 neurons firing per cycle would be
needed and overall neuron processing time would not increase
but overhead processing would become more significant.
Fig. 3. This graph shows the observed processing time per neural cycle for an array of one million neurons, each with 100 synapses, with weights set so that 33,000 neurons fire per cycle. The total of 3.3 million synapses being handled in 10ms leads to the raw figure of 330M synapses/s.
In any neural network, the number of synapses is large
relative to the number of neurons and overshadows other factors
so that processing time goes up linearly with the number of
active synapses.
The 64-core machine is not processor limited as near-
maximum performance is achieved well short of all cores
processing fully. Examination of the disassembly with a
performance profiler showed that with large numbers of threads,
over 90% of the computer time is spent waiting on the single
instruction where the CPU must retrieve the target neuron
charge value from RAM to add the weight. Since the target is at
a random address relative to the current neuron, nearly every
access to a target neuron will result in a CPU cache miss and all
CPU cores must wait in line to retrieve their target neuron values
from RAM.
A side effect of being RAM-limited on synapse targets is that
neuron processing time is essentially irrelevant as long as it
depends on neuron values which are likely to be in the CPU
cache. With a more sophisticated neuron model, such as in
[Izhikevich], the CPU will spend time calculating the neuron
value which would otherwise be spent waiting for other threads.
As an example, a leakage factor was added which causes neuron
charge to decay exponentially. Not only did this not increase
processing time, but processing time decreased measurably. The
reason behind this experimental result has not yet been
determined.
C. Sensitivity to Synapse distance
It was observed that processing time decreases as axon
length decreases since nearby target neurons are more likely to
reside in the CPU cache. As the synapse list approaches a
contiguous array, a six-fold increase in performance was
obtained. This has not been pursued as it is not biologically
plausible.
[Fig. 3 chart: processing time per cycle (ms) vs. number of threads]
D. Conclusions for Server Configuration
As currently implemented each neuron requires 144 bytes
and each synapse requires 16 bytes of memory. While the
processor requirement goes up only with the number of neurons
and synapses which fire, the numbers of neurons and synapses
allocated dictate the RAM requirements.
TABLE II. RAM REQUIREMENTS

  Synapses/neuron   RAM for 1G neurons
  10                304 GB
  100               1.7 TB
  1,000             16 TB
  10,000            161 TB
As the system performance is RAM-access limited, the
shaded areas of Table II would be useful. Further, the
performance improvement for more than 16 cores (32 threads)
is marginal.
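The entries in Table II follow directly from these per-object sizes; the short helper below is a sketch (not part of the published software) that reproduces them for the 1G-neuron case.

```python
NEURON_BYTES = 144    # per-neuron storage in the current implementation
SYNAPSE_BYTES = 16    # per-synapse storage

def ram_required_bytes(neurons: int, synapses_per_neuron: int) -> int:
    """Approximate RAM needed for an allocated neuron/synapse array."""
    return neurons * (NEURON_BYTES + synapses_per_neuron * SYNAPSE_BYTES)

for spn in (10, 100, 1_000, 10_000):
    tb = ram_required_bytes(1_000_000_000, spn) / 1e12
    print(f"{spn:>6} synapses/neuron: {tb:7.2f} TB")   # 0.30, 1.74, 16.14, 160.14 TB
```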
As previously estimated, a server with 100 million neurons
and 100 synapses per neuron would be expected to process 33,000 firing neurons (about 3.3 million active synapses) per 2ms (real time) cycle and would
execute cycles in about 12ms (10ms measured +2ms estimated
additional overhead). Accordingly, the server would be running
at 1/6 real time. Any number of tradeoffs can be made but in
general, processing time will decrease with decreasing active
synapses.
IV. PERFORMANCE IN A MULTI-COMPUTER ENVIRONMENT
This section presents results of initial experimentation to
establish the performance characteristics of a multicomputer
implementation while the following section projects these
results to a complete neocortex emulation. Here, we consider the
ability to handle larger arrays of neurons without a prohibitive
loss in performance.
For multicomputer testing, two additional computers were used: an Intel i7 4565 CPU running at 2.4 GHz with 16GB of dual-channel DDR4 RAM running at 1198 MHz, and an Intel i7 6700 running at 3.68 GHz with 16GB of dual-channel DDR4 RAM running at 1064 MHz. Note that these computers are substantially slower than the one used in the previous section. All computers are connected with a 1Gbps Ethernet LAN.
Fig. 4. In a single-computer configuration (left), the user interface
communicates with the server engine directly through RAM. In a multi-
computer configuration (right), the same user interface and engine
communicate through a LAN with thin client and server wrappers.
Neuron servers send synapse firing information directly to each other.
Although neuron servers can communicate directly with any other server,
in this experiment, all synapse connections are “short” and will target an
adjacent server.
Each server runs the same neuron engine .dll as in the
previous tests, as shown in Fig. 4, and the Neuron Server layer
handles synapse references which extend outside the array on
the local machine (“boundary synapses”). When a boundary
synapse activates, its weight and destination are placed in a
queue. When the basic neuron engine cycle is complete for all
local neurons, boundary synapses are dequeued and sorted so
that firings can be clustered into data packets and sent to the
correct server. On the receiving end, each server listens for
incoming packets and makes the appropriate changes to the
target neuron internal charges. No significant effort has been
expended in optimizing this process as it is assumed that the data
transmission time will overshadow any computation time.
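A minimal sketch of this queue-and-dispatch step is shown below; the function and the send callback are hypothetical names, not the actual Neuron Server API.

```python
from collections import defaultdict

def dispatch_boundary_synapses(queue, send):
    """Group queued boundary firings by destination server and send each group.

    `queue` holds (server_id, target_neuron, weight) tuples collected while the
    local neuron cycle ran; `send` is whatever transport delivers a batch of
    (target_neuron, weight) records to the given server.
    """
    by_server = defaultdict(list)
    for server_id, target_neuron, weight in queue:
        by_server[server_id].append((target_neuron, weight))

    for server_id, firings in by_server.items():
        send(server_id, firings)    # packed into UDP datagrams in the real system
    queue.clear()
```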
In this initial implementation, the client directs all servers to
execute a single neuron cycle and then waits for all servers to complete the cycle and transfer any boundary synapses, with timing results shown in Table III. Because of the
synchronized nature of this implementation, the system runs at
the speed of the slowest computer in the network. This issue
could be avoided by using a cluster of matched, high-
performance servers.
TABLE III. TIMING FOR MULTIPLE SERVERS.

  Servers | Total neurons | Active neurons | Overall cycle time (ms) | Timing per server (fire/transfer, ms) | Total boundary synapses
  1       | 1 M           | 0              | 10                      | 1.5/0                                 | 0
  1       | 1 M           | 34,000         | 88                      | 82/0                                  | 0
  2       | 2 M           | 63,000         | 116                     | 55/53, 48/44                          | 99K
  3       | 3 M           | 93,000         | 115                     | 51/49, 44/50, 11/47                   | 146K
Table III shows that after the first server, cycle time is
independent of the number of servers because the number of
boundary synapses is constant for each added server. The
“Timing” column shows the firing and transfer times for each
server. These can be subtracted from the overall cycle time to
estimate the overhead of running in a client/server configuration.
Fig. 5. Each Neuron Server reports performance data including the amount of
time spent in the firing algorithm vs. the amount of time in data transfer
along with the number of active boundary synapses.
Each server can transmit approximately 50K boundary
synapses in 50ms or ~1M synapses/s. Each boundary synapse
requires 9 bytes of information: the target neuron, the weight,
and a flag. These are packed into UDP datagrams with a
maximum of 1,500 bytes (the default maximum packet size) so
each datagram packet can send 166 active boundary synapses.
UDP is a full duplex protocol so servers can transmit and receive
simultaneously. UDP includes no reliability checking but in the
controlled environment of these tests it is error-free as the ~50K
synapses/s represent less than 1% of the network capacity.
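A minimal sketch of this packing, assuming a 4-byte target neuron index, a 4-byte float weight, and a 1-byte flag for the 9-byte record (the exact field widths are an assumption):

```python
import socket
import struct

MAX_DATAGRAM = 1500                        # default maximum packet size
RECORD = struct.Struct("<ifB")             # 4-byte index, 4-byte float weight, 1-byte flag = 9 bytes
PER_PACKET = MAX_DATAGRAM // RECORD.size   # 166 boundary synapses per datagram

def send_boundary_synapses(sock, addr, firings):
    """Pack (target_neuron, weight, flag) records into UDP datagrams and send them."""
    for start in range(0, len(firings), PER_PACKET):
        batch = firings[start:start + PER_PACKET]
        payload = b"".join(RECORD.pack(n, w, f) for n, w, f in batch)
        sock.sendto(payload, addr)

# Usage sketch:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_boundary_synapses(sock, ("192.168.1.20", 40001), [(1234, 0.5, 0)] * 400)
```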
A. Discussion
The results of this test indicate that any number of servers can
be added in order to simulate any desired size of neuron array.
In practice, other factors may emerge with larger numbers of
servers and further experimentation will be needed to identify
these. Overall, performance remains constant for 2 or more
servers because each server adds the computational and
transmission capacity needed to process its neurons and the
amount of server-to-server network traffic is constant between
any pair of adjacent servers.
As it stands, the network transfer implementation is far from
optimal even in terms of today’s hardware. Here are some
additions which could make it significantly faster:
• Use a 10Gbps network. Estimated performance improvement: 10x.
• Create "virtual axons" (see the sketch after this list). Rather than sending individual active synapse weights, the output of a neuron can be transferred to the receiving server where it is distributed to multiple target neurons. Only a single number (5 bytes) representing the axon must be transferred. Estimated performance improvement: equal to the simulated number of synapses per neuron. (A side-effect of this change is that learning can be implemented with the needed synapse data residing on individual servers rather than crossing server boundaries.)
• Overlap the transmission phase in parallel with neuron processing. This introduces a one-cycle delay in signals crossing machine boundaries which could be an issue. Estimated performance improvement: can reduce the network delay to near zero as neural processing will be slower than network transfer.
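A minimal sketch of the receiving-server side of the "virtual axon" scheme (illustrative names, not the current implementation): only the identifiers of neurons that fired cross the network, and each receiving server expands them into its locally stored target synapses.

```python
def apply_remote_firings(axon_ids, axon_table, charges):
    """Expand received axon ids into local charge additions.

    axon_table maps an incoming axon id to the (target_neuron, weight) pairs it
    feeds on this server; it is built once when the network is wired, so only
    compact axon ids need to cross the LAN each cycle.
    """
    for axon_id in axon_ids:
        for target_neuron, weight in axon_table.get(axon_id, ()):
            charges[target_neuron] += weight
```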
V. SIMULATING THE ENTIRE NEOCORTEX
Based on the performance testing above, we can create an
improved estimate of the amount of computer power needed to
emulate the neocortex’s 16 billion neurons assuming the
improvements above are implemented. Conceptually, each
hemisphere could be subdivided radially across N servers
although some modification would be required to handle the
narrow ends of sectors. If the sectors are generally narrower than
the short axon length, the number of boundary synapses will
increase.
Short Connections: The number of axons crossing each
radial boundary is independent of N and is estimated at 50M.
(250 mm * 250 neurons/mm * 800 boundary synapses/neuron)
With an expected activity rate of once every 6s, the expected
data load would be 42MB/s (5 bytes/axon * 50M axons / 6s)
which is well within the expected performance of a 10Gbps
network.
Long connections: Axons which connect one hemisphere to the other, or to more distant regions, represent as many as 300M fibers.
We assume that these connections will always cross a machine
boundary and must be added to any short-connection
calculations. We further assume that they will be distributed
evenly among the various machines meaning that each machine
would be burdened with an additional 300M/N connections.
Regardless of the activity rate, these turn out to be
inconsequential relative to the boundary axons.
Using the experimental data, a server simulating 100 million
neurons with 100 synapses each can run in 1/6th real time. 160
such servers would be required to simulate 16 billion neurons,
80 for each neocortex hemisphere. Each server would be
responsible for transferring the traffic of 50 million short boundary connections and 2 million long connections. Continuing to use a firing rate of once every 6s and an 8-byte axon identifier yields a data transmission requirement of ~80MB/s.
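The server count, long connections per server, and short-connection data load estimated above can be reproduced with a short back-of-the-envelope script (values taken from this section; the 5-byte axon identifier is the one used in the short-connection estimate):

```python
TOTAL_NEURONS      = 16_000_000_000   # neocortex
NEURONS_PER_SERVER = 100_000_000      # from the single-server measurements
LONG_FIBERS        = 300_000_000      # corpus callosum and other long connections
SHORT_BOUNDARY     = 50_000_000       # axons crossing each radial boundary
BYTES_PER_AXON     = 5                # "virtual axon" identifier
FIRE_PERIOD_S      = 6                # average interval between firings

servers = TOTAL_NEURONS // NEURONS_PER_SERVER
long_per_server = LONG_FIBERS / servers
short_load_mb_s = BYTES_PER_AXON * SHORT_BOUNDARY / FIRE_PERIOD_S / 1e6

print(f"servers required: {servers}")                                      # 160
print(f"long connections per server: {long_per_server:,.0f}")              # ~1.9 million
print(f"short-connection load per boundary: {short_load_mb_s:.0f} MB/s")   # ~42
```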
Using more synapses per neuron scales the problem linearly.
That is, using 10x as many synapses will make the simulation
run 10x slower so one second of “thinking” would require one
minute of simulation. Increasing the number of servers will only
compensate up to the point where sectors become so small that
a short connection will span more than the adjacent sector,
dramatically increasing the number of boundary connections.
These performance experiments indicate that creating a full-
neocortex simulation is feasible on today’s hardware with the
scale of the implementation based on various assumptions and
the outcome of future neuroscience discoveries. Chief among
these is an improved understanding of the actual synaptic
interconnection patterns and processes among neurons.
REFERENCES
[1] L. Abbott, "Lapicque's introduction of the integrate-and-fire model neuron (1907)," Brain Research Bulletin, vol. 50, nos. 5-6, pp. 303-304, 1999.
[2] L. Camuñas-Mesa, B. Linares-Barranco, T. Serrano-Gotarredona, "Neuromorphic spiking neural networks and their memristor-CMOS hardware implementations," MDPI Materials, August 2019. DOI: 10.3390/ma12172745
[3] S. Dutta, V. Kumar, A. Shukla, N. Mohapatra, U. Ganguly, "Leaky integrate and fire neuron by charge-discharge dynamics in floating-body MOSFET," Scientific Reports, 2017. DOI: 10.1038/s41598-017-07418-y
[4] A. Faisal, L. Selen, D. Wolpert, "Noise in the nervous system," Nature Reviews Neuroscience, Apr. 2008, 9(4): 292-303. DOI: 10.1038/nrn2258
[5] K. Grace, ed., "Neuron firing rates in humans," [Survey of related research], AI Impacts, https://aiimpacts.org/rate-of-neuron-firing/
[6] E. Kandel, J. Schwartz, T. M. Jessell, Principles of Neural Science (3rd ed.), Elsevier. ISBN 978-0444015624.
[7] P. Lennie, "The cost of cortical computation," Current Biology, March 2003. DOI: 10.1016/s0960-9822(03)00135-0
[8] M. D. McDonnell, K. Boahen, A. Ijspeert, T. J. Sejnowski, "Engineering intelligent electronic systems based on computational neuroscience," Proceedings of the IEEE, vol. 102, no. 5, pp. 646-651, May 2014. DOI: 10.1109/JPROC.2014.2314776
[9] J. M. Montgomery, D. V. Madison, "Discrete synaptic states define a major mechanism of synapse plasticity," Trends in Neurosciences, Dec. 2004, 27(12): 744-750. DOI: 10.1016/j.tins.2004.10.006
[10] C. Simon, Brain Simulator II (Software). Available for download at http://brainsim.org. Source: https://github.com/FutureAIGuru/BrainSimII
[11] C. Simon, "New Brain Simulator II Open-Source Software," Proceedings, AGI-20, in press.
[12] F. Zeldenrust, W. Wadman, B. Englitz, "Neural coding with bursts: current state and future perspectives," Frontiers in Computational Neuroscience, 2018. DOI: 10.3389/fncom.2018.00048
[13] V. Braitenberg, A. Schüz, Cortex: Statistics and Geometry of Neuronal Connectivity, Springer, 1998.
[14] W. Gerstner, R. Naud, W. Kistler, L. Paninski, Neuronal Dynamics, Cambridge University Press, 2014.
[15] E. Izhikevich, "Simple Model of Spiking Neurons," IEEE Transactions on Neural Networks, Nov. 2003.