Article

The impulses produced by Sensory Nerve Endings

Authors: E. D. Adrian

... To date, various neural coding schemes have been applied to SNNs, such as rate [14] and temporal coding [10], [15], [16], [17]. Rate coding, which is a well-known and commonly used neural coding, utilizes firing rate to represent information [14]. ...
... To date, various neural coding schemes have been applied to SNNs, such as rate [14] and temporal coding [10], [15], [16], [17]. Rate coding, which is a well-known and commonly used neural coding, utilizes firing rate to represent information [14]. Many DNN-to-SNN conversion methods have adopted rate coding for its simple implementation and robustness [7], [8], [9]. ...
... There have been four neural coding schemes in deep SNNs: rate [7], [8], [9], phase [11], burst [10], and TTFS [12], [13], as depicted in Fig. 1. Rate coding has the advantages of simple implementation and robustness to errors by using the firing rate N/T, where N is the total number of spikes in a given time window T [7], [8], [9], [14]. However, rate coding cannot utilize temporal information in spike trains and generates a large number of spikes, leading to high energy consumption and long inference latency. ...
Preprint
Full-text available
Spiking neural networks (SNNs) have gained considerable interest due to their energy-efficient characteristics, yet lack of a scalable training algorithm has restricted their applicability in practical machine learning problems. The deep neural network-to-SNN conversion approach has been widely studied to broaden the applicability of SNNs. Most previous studies, however, have not fully utilized spatio-temporal aspects of SNNs, which has led to inefficiency in terms of number of spikes and inference latency. In this paper, we present T2FSNN, which introduces the concept of time-to-first-spike coding into deep SNNs using the kernel-based dynamic threshold and dendrite to overcome the aforementioned drawback. In addition, we propose gradient-based optimization and early firing methods to further increase the efficiency of the T2FSNN. According to our results, the proposed methods can reduce inference latency and number of spikes to 22% and less than 1%, compared to those of burst coding, which is the state-of-the-art result on the CIFAR-100.
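The two coding families contrasted in the excerpts above can be made concrete with a small sketch: rate coding reads out the spike count N over a window T (N/T), while time-to-first-spike (TTFS) coding, as used by T2FSNN, carries the value in a single spike time. The window length, rate scaling, and function names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(value, T=100, max_rate=0.5):
    """Rate coding: spikes are emitted over T timesteps with probability
    proportional to the normalized input; the decoded quantity is N/T."""
    return rng.random(T) < value * max_rate  # boolean spike train

def ttfs_encode(value, T=100):
    """Time-to-first-spike coding: a stronger input fires earlier, so the
    whole value is carried by one spike time."""
    spikes = np.zeros(T, dtype=bool)
    spikes[int(round((1.0 - value) * (T - 1)))] = True
    return spikes

x = 0.8                                   # normalized stimulus intensity
r, t = rate_encode(x), ttfs_encode(x)
print("rate code : N/T =", r.sum() / r.size, "with", int(r.sum()), "spikes")
print("TTFS code : first spike at step", int(np.argmax(t)), "with 1 spike")
```

The contrast in spike counts is the efficiency argument made above: the rate code needs tens of spikes per value, while the TTFS code needs exactly one.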
... We are more concerned with the interactions of large numbers, and our problem is to find the way in which such interactions can take place. -Edward D. Adrian (1926) ...
... These final lines from Lord Adrian's Nobel lecture (Adrian, 1926) illustrate the extraordinary prescience of this researcher who first discovered neuronal spiking. He anticipated that understanding brain computation is not likely to be achieved solely by studies of individual neurons but instead by observing coordinated interactions of neurons and their collective activity patterns. ...
Article
We propose a new paradigm for dense functional imaging of brain activity to surmount the limitations of present methodologies. We term this approach “integrated neurophotonics”; it combines recent advances in microchip-based integrated photonic and electronic circuitry with those from optogenetics. This approach has the potential to enable lens-less functional imaging from within the brain itself to achieve dense, large-scale stimulation and recording of brain activity with cellular resolution at arbitrary depths. We perform a computational study of several prototype 3D architectures for implantable probe-array modules that are designed to provide fast and dense single-cell resolution (e.g., within a 1-mm³ volume of mouse cortex comprising ∼100,000 neurons). We describe progress toward realizing integrated neurophotonic imaging modules, which can be produced en masse with current semiconductor foundry protocols for chip manufacturing. Implantation of multiple modules can cover extended brain regions.
... For spike rate coding, only the firing rate in an interval is of concern as a measure of the information carried. Rate coding was first motivated by the observation by Adrian et al. in 1926 of frog cutaneous receptors, namely that physiological neurons tend to fire more often for stronger stimuli [17]. Spike rate coding has been the main paradigm in artificial neural networks, such as sigmoidal neurons. ...
Article
Full-text available
Word embeddings are the semantic representations of words. They are derived from large corpora and work well on many natural language tasks, with one downside of requiring large memory space. In this paper, we propose binary word embedding models based on inspirations from biological neural coding mechanisms, converting the spike timing of neurons during specific time intervals into binary codes, reducing space and speeding up computation. We build three types of models to post-process the original dense word embeddings, namely, the homogeneous Poisson process-based rate coding model, the leaky integrate-and-fire neuron-based model, and the Izhikevich neuron-based model. We test our binary embedding models on word similarity and text classification tasks over five public datasets. The experimental results show that the brain-inspired binary word embeddings (which reduce approximately 68.75% of the space) achieve results similar to the original embeddings on the word similarity task and better performance than traditional binary embeddings on the text classification task.
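A minimal sketch of the homogeneous-Poisson rate-coding binarization described in this abstract, assuming a min-max normalization, a fixed bin width, and a maximum firing rate (all illustrative choices, not the authors' exact settings):

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_binarize(embedding, n_bins=8, dt=0.01, max_rate=100.0):
    """Map each embedding dimension to a firing rate, simulate a homogeneous
    Poisson process over n_bins time bins, and use spike/no-spike per bin as
    the binary code (n_bins bits per dimension)."""
    lo, hi = embedding.min(), embedding.max()
    rates = (embedding - lo) / (hi - lo + 1e-12) * max_rate    # Hz per dimension
    p_spike = 1.0 - np.exp(-rates * dt)                        # spike probability per bin
    spikes = rng.random((embedding.size, n_bins)) < p_spike[:, None]
    return spikes.astype(np.uint8).reshape(-1)                 # flat binary code

dense = rng.standard_normal(300)           # e.g. a 300-dimensional word vector
binary = poisson_binarize(dense)
print(binary.shape, binary[:16])           # 300 * 8 = 2400 bits
```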
... After Adrian's experiments (Adrian 1926) established that sensory neurons produce action potentials (spikes), it has been generally accepted that a spiking neuron communicates information through sequences of spikes called spike trains (Bialek et al. 1991;Nemenman et al. 2004). The entropy of spike trains was first estimated by MacKay and McCulloch (1952), which was probably the first application of Information Theory to the nervous system. ...
Article
Full-text available
To understand how anatomy and physiology allow an organism to perform its function, it is important to know how information that is transmitted by spikes in the brain is received and encoded. A natural question is whether the spike rate alone encodes the information about a stimulus (rate code), or additional information is contained in the temporal pattern of the spikes (temporal code). Here we address this question using data from the cat Lateral Geniculate Nucleus (LGN), which is the visual portion of the thalamus, through which visual information from the retina is communicated to the visual cortex. We analyzed the responses of LGN neurons to spatially homogeneous spots of various sizes with temporally random luminance modulation. We compared the Firing Rate with the Shannon Information Transmission Rate, which quantifies the information contained in the temporal relationships between spikes. We found that the behavior of these two rates can differ quantitatively. This suggests that the energy used for spiking does not translate directly into the information to be transmitted. We also compared Firing Rates with Information Rates for X-ON and X-OFF cells. We found that, for X-ON cells the Firing Rate and Information Rate often behave in a completely different way, while for X-OFF cells these rates are much more highly correlated. Our results suggest that for X-ON cells a more efficient “temporal code” is employed, while for X-OFF cells a straightforward “rate code” is used, which is more reliable and is correlated with energy consumption.
... However, it was Charles Overton who demonstrated that sodium ions were involved in the action potential overshoot (Overton, 1902). The electrical activity in single sensory fibers, and the encoding of stimulation intensity in muscles as the firing rate of the sensory fibers, were reported in 1926 (Adrian, 1926; Adrian and Zotterman, 1926). This finding formed the basis for explaining how intensity could be encoded in the "all-or-none" triggering of what are today known as action potentials. ...
Article
Full-text available
Neuroelectrophysiology is an old science, dating to the 18th century when electrical activity in nerves was discovered. Such discoveries have led to a variety of neurophysiological techniques, ranging from basic neuroscience to clinical applications. These clinical applications allow assessment of complex neurological functions such as (but not limited to) sensory perception (vision, hearing, somatosensory function) and muscle function. The ability to use similar techniques in both humans and animal models increases the ability to perform mechanistic research to investigate neurological problems. Good animal-to-human homology of many neurophysiological systems facilitates interpretation of data to provide cause-effect linkages to epidemiological findings. Mechanistic cellular research to screen for toxicity often leaves gaps between cellular and whole animal/person neurophysiological changes, preventing understanding of the complete function of the nervous system. Building Adverse Outcome Pathways (AOPs) will allow us to begin to identify brain regions, timelines, neurotransmitters, etc. that may be Key Events (KE) in Adverse Outcomes (AO). This requires an integrated strategy, from in vitro to in vivo (with hypothesis generation, testing, and revision). Scientists need to determine intermediate levels of nervous system organization that are related to an AO and work both upstream and downstream using mechanistic approaches. Possibly more than any other organ, the brain will require networks of pathways/AOPs to allow sufficient predictive accuracy. Advancements in neurobiological techniques should be incorporated into these AOP-based neurotoxicological assessments, including interactions between many regions of the brain simultaneously. Coupled with advancements in optogenetic manipulation, complex functions of the nervous system (such as acquisition, attention, sensory perception, etc.) can be examined in real time. The integration of neurophysiological changes with changes in gene/protein expression can begin to provide the mechanistic underpinnings for biological changes. Establishment of linkages between changes in cellular physiology and those at the level of the AO will allow construction of biological pathways (AOPs) and allow development of higher-throughput assays to test for changes to critical physiological circuits. To allow mechanistic/predictive toxicology of the nervous system to be protective of human populations, neuroelectrophysiology has a critical role in our future.
... Regarding mechanical perception, Edgar Douglas Adrian launched a series of seminal papers formulating the firing rate hypothesis, based on a number of intriguing experiments, as early as 1926. Thereby, he also established the term adaptation in the framework of biological signal processing [8][9][10]. Today, adaptation to a stimulus is considered a classical response function, which is not restricted to the encoding of external signals into internal spiking representations. ...
Article
Full-text available
The ongoing research on and development of increasingly intelligent artificial systems propels the need for bio-inspired pressure-sensitive spiking circuits. Here we present an adapting and spiking tactile sensor, based on a neuronal model and a piezoelectric field-effect transistor (PiezoFET). The piezoelectric sensor device consists of a metal-oxide-semiconductor field-effect transistor comprising a piezoelectric aluminium scandium nitride (AlxSc1−xN) layer inside the gate stack. The device so augmented is sensitive to mechanical stress. In combination with an analogue circuit, this sensor unit is capable of encoding the mechanical quantity into a series of spikes with an ongoing adaptation of the output frequency. This allows for broad application in the context of robotic and neuromorphic systems, since it enables said systems to receive information from the surrounding environment and provide encoded spike trains for neuromorphic hardware. We present numerical and experimental results on this spiking and adapting tactile sensor.
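The spike-frequency adaptation that the sensor circuit implements in analogue hardware can be imitated in a few lines of software; the sketch below is a generic adapting leaky-integrator encoder with made-up time constants, not a model of the PiezoFET circuit itself.

```python
import numpy as np

def adapting_encoder(stimulus, dt=1e-3, tau_m=20e-3, tau_a=200e-3,
                     v_th=1.0, delta_a=0.3):
    """A leaky integrator drives spikes; every spike increments a slow adaptation
    variable that is subtracted from the drive, so a sustained stimulus yields a
    decaying output frequency (spike-frequency adaptation)."""
    v, a, spikes = 0.0, 0.0, []
    for s in stimulus:
        v += dt / tau_m * (-v + s - a)    # integrate input minus adaptation
        a += dt / tau_a * (-a)            # adaptation decays slowly...
        if v >= v_th:
            spikes.append(1)
            v = 0.0
            a += delta_a                  # ...and grows with every output spike
        else:
            spikes.append(0)
    return np.array(spikes)

pressure_step = np.concatenate([np.zeros(200), 3.0 * np.ones(1800)])
out = adapting_encoder(pressure_step)
print(out[200:700].sum(), "spikes early vs.", out[-500:].sum(), "spikes late")
```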
... • Rate encoding: Mean firing rate over time is a popular rate coding scheme that claims that the spiking frequency or rate is modulated with the stimulus intensity ([40]). Rate-based features were obtained from this spike train by computing the average spiking frequency of the neuron over a moving window. ...
Preprint
Full-text available
This paper builds upon our previously reported growth transform based optimization framework to present a novel spiking neuron model and demonstrate its application for spike-based auditory signal processing. Unlike conventional neuromorphic approaches, the proposed Growth Transform (GT) neuron model is tightly coupled to a system objective function, which results in network dynamics that are always stable and interpretable; and the process of spike generation and population dynamics is the result of minimizing an energy functional. We then extend the model to include axonal propagation delays in a manner that the optimized solution of the system or network objective function remains unaffected. The paper characterizes the model for different types of stimuli, and explores how changing different aspects of the cost function can reproduce known single neuron dynamics. We then investigate the properties of a coupled GT neural network that can generate spike-encoded auditory features corresponding to the output of a gammatone filterbank. We show that the discriminatory information is not only encoded in the traditional spike-rates and interspike-interval statistics, but is also encoded in the subthreshold response of GT neurons for inputs that are not strong enough to elicit spikes. We also demonstrate that while different forms of coupling between the neurons could produce compact and energy-efficient representation of the auditory features, the classification performance for a speaker recognition task remains invariant across different types of coupling. Thus, we believe that the proposed GT neuron model provides a flexible neuromorphic framework to systematically design large-scale spiking neural networks with stable and interpretable dynamics.
... This is also true for an output signal, which must be converted from spikes to continuous values to control a motor. The known encoding mechanisms found in nature include individual spike rate (Adrian, 1926). [The excerpt runs into the caption of Figure 1(A) of the citing paper: an illustration of a simplified sensorimotor neuron pathway in which a signal received from the outside world is sent from the sensor neuron to the NSI through current injection.] ...
Article
Full-text available
Researchers working with neural networks have historically focused on either non-spiking neurons tractable for running on computers or more biologically plausible spiking neurons typically requiring special hardware. However, in nature homogeneous networks of neurons do not exist. Instead, spiking and non-spiking neurons cooperate, each bringing a different set of advantages. A well-researched biological example of such a mixed network is a sensorimotor pathway, responsible for mapping sensory inputs to behavioral changes. This type of pathway is also well-researched in robotics where it is applied to achieve closed-loop operation of legged robots by adapting amplitude, frequency, and phase of the motor output. In this paper we investigate how spiking and non-spiking neurons can be combined to create a sensorimotor neuron pathway capable of shaping network output based on analog input. We propose sub-threshold operation of an existing spiking neuron model to create a non-spiking neuron able to interpret analog information and communicate with spiking neurons. The validity of this methodology is confirmed through a simulation of a closed-loop amplitude regulating network inspired by the internal feedback loops found in insects for posturing. Additionally, we show that non-spiking neurons can effectively manipulate post-synaptic spiking neurons in an event-based architecture. The ability to work with mixed networks provides an opportunity for researchers to investigate new network architectures for adaptive controllers, potentially improving locomotion strategies of legged robots.
... 1) Rate-SIM: The spiking activity of a neuron over time is usually represented by a graph called the raster plot. Under the assumption that the neurons are independent, it has been proven that for a given input I(t) the firing rate is a stochastic process which causes irregular interspike intervals reflecting a random process [21], [22]. Then, the instantaneous spike rate (mean firing rate) can be obtained either by averaging the spikes of an individual neuron (spike count), or by averaging the firing rate over multiple repetitions of the same experiment (spike density) [3]. ...
Article
Full-text available
This paper introduces a novel coding/decoding mechanism that mimics one of the most important properties of the human visual system: its ability to enhance the quality of visual perception over time. In other words, the brain takes advantage of time to process and clarify the details of the visual scene. This characteristic is yet to be considered by the state-of-the-art quantization mechanisms, which process the visual information regardless of how long it appears in the visual scene. We propose a compression architecture built of neuroscience models; it first uses the leaky integrate-and-fire (LIF) model to transform the visual stimulus into a spike train, and then it combines two different kinds of spike interpretation mechanisms (SIM), the time-SIM and the rate-SIM, for the encoding of the spike train. The time-SIM allows a high-quality interpretation of the neural code, and the rate-SIM allows a simple decoding mechanism by counting the spikes. For that reason, the proposed mechanism is called the Dual-SIM quantizer (Dual-SIMQ). We show that (i) the time-dependency of Dual-SIMQ automatically controls the reconstruction accuracy of the visual stimulus, (ii) a numerical comparison of Dual-SIMQ to the state-of-the-art shows that the performance of the proposed algorithm is similar to the uniform quantization scheme while it approximates the optimal behavior of the non-uniform quantization scheme, and (iii) from the perceptual point of view the reconstruction quality using Dual-SIMQ is higher than the state-of-the-art.
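The first stage of the pipeline, turning a stimulus into a spike train with a leaky integrate-and-fire (LIF) neuron, can be sketched as follows; the parameter values and the sinusoidal test stimulus are illustrative assumptions, not those of the Dual-SIMQ paper.

```python
import numpy as np

def lif_encode(signal, dt=1e-3, tau=10e-3, R=1.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire encoder: integrate the input as a membrane
    potential, emit a spike and reset whenever the threshold is crossed.
    The resulting train can be read out from spike timing (time-SIM-like)
    or by counting spikes (rate-SIM-like)."""
    v, spike_times = v_reset, []
    for i, x in enumerate(signal):
        v += (-(v - v_reset) + R * x) * (dt / tau)
        if v >= v_th:
            spike_times.append(i * dt)
            v = v_reset
    return np.array(spike_times)

t = np.arange(0.0, 0.5, 1e-3)
stimulus = 2.0 + np.sin(2 * np.pi * 4 * t)   # slowly varying "pixel intensity"
spikes = lif_encode(stimulus)
print(len(spikes), "spikes; count-based readout of about", len(spikes) / 0.5, "Hz")
```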
... The rate code theory assumes that the mean firing rate of a neuron carries most, maybe even all, of the information of a transmission; such codes are referred to as 'rate codes' [33]. This theory has inspired the classical perceptron approaches. ...
Thesis
The work presents a methodology to assess the problems behind static gene expression data modelling and analysis with machine learning techniques. As a case study, transcriptomic data collected during a longitudinal study on the effects of diet on the expression of oxidative phosphorylation genes was used. Data were collected from 60 abdominally overweight men and women after an observation period of eight weeks, whilst they were following three different diets. Real-valued static gene expression data were encoded into spike trains using Gaussian receptive fields for multinomial classification using an evolving spiking neural network (eSNN) model. Results demonstrated that the proposed methodology can be used for predictive modelling of static gene expression data and future works are proposed regarding the application of eSNNs for personalised modelling.
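A sketch of the Gaussian receptive field encoding step mentioned in this abstract (the standard Bohte-style population code often used with eSNNs); the number of fields, overlap factor, and encoding window are illustrative assumptions, not the thesis settings.

```python
import numpy as np

def gaussian_receptive_fields(x, n_fields=6, x_min=0.0, x_max=1.0, beta=1.5):
    """Encode one real-valued feature with overlapping Gaussian receptive fields:
    each field's activation in [0, 1] is converted into a spike time, with the
    most strongly activated fields firing earliest."""
    centers = x_min + (np.arange(n_fields) + 0.5) * (x_max - x_min) / n_fields
    width = (x_max - x_min) / (beta * n_fields)
    activation = np.exp(-0.5 * ((x - centers) / width) ** 2)
    t_max = 10.0                                  # encoding window in ms
    spike_times = t_max * (1.0 - activation)      # high activation -> early spike
    return centers, spike_times

centers, times = gaussian_receptive_fields(0.37)  # one normalized expression value
for c, t in zip(centers, times):
    print(f"field centred at {c:.2f} fires at {t:5.2f} ms")
```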
... Frequency coding [95], or rate coding, is one of the most widely used coding schemes because some biological neurons have been observed to emit spikes at a frequency proportional to the intensity of a stimulus (e.g., in the muscles [98] and in the visual cortex [99]). Moreover, it is more straightforward to interpret the values in an ANN as a spike frequency. ...
Thesis
Computer vision is a strategic field, as a consequence of its great number of potential applications, which could have a high impact on society. This area has improved rapidly over the last decades, especially thanks to advances in artificial intelligence and, more particularly, to the rise of deep learning. Nevertheless, these methods present two main drawbacks in contrast with biological brains: they are extremely energy intensive, and they need large labeled training sets. Spiking neural networks are alternative models offering an answer to the energy consumption issue. One attribute of these models is that they can be implemented very efficiently on hardware, in order to build ultra-low-power architectures. In return, these models impose certain limitations, such as the use of only local memory and computations. This prevents the use of traditional learning methods, for example gradient back-propagation. STDP is a learning rule, observed in biology, which can be used in spiking neural networks. This rule reinforces the synapses in which local correlations of spike timing are detected and weakens the other synapses. The fact that it is local and unsupervised makes it possible to abide by the constraints of neuromorphic architectures, which means it can be implemented efficiently, and it also provides a solution to the dataset labeling issue. However, spiking neural networks trained with the STDP rule show lower performance than those trained with deep learning. The literature about STDP still uses simple data, and the behavior of this rule has seldom been studied on more complex data, such as sets made of a large variety of real-world images. The aim of this manuscript is to study the behavior of these spiking models, trained through the STDP rule, on image classification tasks. The main goal is to improve the performance of these models while respecting as much as possible the constraints of neuromorphic architectures. The first contribution focuses on software simulations of spiking neural networks. Hardware implementation being a long and costly process, simulation is a good alternative for studying the behavior of different models more quickly. The contributions then focus on the establishment of multi-layered spiking networks; networks made of several layers, such as those in deep learning methods, make it possible to process more complex data. One chapter revolves around the issue of frequency loss seen in several spiking neural networks, which prevents the stacking of multiple spiking layers. The focus then switches to a study of STDP behavior on more complex data, especially colored real-world images. Multiple measurements are used, such as the coherence of filters or the sparsity of activations, to better understand the reasons for the performance gap between STDP and more traditional methods. Lastly, the manuscript describes the construction of multi-layered networks. To this end, a new threshold adaptation mechanism is introduced, along with a multi-layer training protocol. It is shown that such networks can improve the state of the art for STDP.
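The pair-based STDP rule this thesis builds on, potentiation for pre-before-post spike pairs and depression for post-before-pre pairs, can be written down directly; the amplitudes and time constants below are common textbook values, not the thesis' tuned parameters.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for delta_t = t_post - t_pre in ms:
    pre-before-post (delta_t > 0) potentiates, post-before-pre depresses,
    with exponentially decaying magnitude as |delta_t| grows."""
    if delta_t > 0:
        return a_plus * np.exp(-delta_t / tau_plus)
    return -a_minus * np.exp(delta_t / tau_minus)

for dt_ms in (2.0, 10.0, 40.0, -2.0, -10.0, -40.0):
    print(f"t_post - t_pre = {dt_ms:+6.1f} ms  ->  dw = {stdp_dw(dt_ms):+.5f}")
```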
... Galvani's observations that the nerves and muscle generated their own innate or internal electricity were confirmed in the mid-19th century when the electrical impulses in nerves were observed through indirect methods with the first primitive amplifiers. Also, when the vacuum tube amplifier was invented in the early 20th century, it was unequivocally concluded that Galvani was correct (Adrian, 1926). The vacuum tube amplifier successfully showed the presence of individual electric oscillations in response to physiological stimulation of frogs. ...
Chapter
Full-text available
An increase in the demand for energy has resulted in a decline in the global availability of fossil fuel resources. In addition, the added pressure on the environment due to the emission of greenhouse gases has disturbed the ecological balance. This has led to the need for cleaner and renewable energy sources. One such potential technology that can provide a clean energy source is the fuel cell. The operational principles of fuel cells date as far back as the 18th century. In 1791, a physician and professor at the University of Bologna named Luigi Galvani presented results in which he observed the leg of a frog twitching while he was investigating the nature and effect of electricity in animal tissue (Kipnis, 1987). By passing a spark of electrical current through the isolated frog, he observed that the muscles of the frog could be made to move by exposing them to an electrical current, and he coined the term “animal electricity.” From further observations he concluded that the tissue of a frog also had an innate and vital “animal electricity” that activated the nerve and muscle when bridged with metal probes or touched by another frog’s nerve and muscle. This was one of the first reported observations of bioelectricity. Today, Galvani is known as the first electrochemist and is considered a pioneer of bioelectricity; the famous Galvanic cell is named after him.
... However, the firing rate, i.e., the average spike counts over a given time interval, is more stable and robust against noise such as the variability in interspike intervals. Using this temporal coarse-graining strategy, known as rate coding (Adrian, 1926;Maass and Bishop, 2001;Gerstner and Kistler, 2002;Stein et al., 2005;Panzeri et al., 2015), neurons can encode stimulus intensity by increasing or decreasing their firing rate (Kandel et al., 2000;Stein et al., 2005). The robustness of rate coding is a direct consequence of the many-to-one mapping (i.e., coarse-graining). ...
Article
Full-text available
Information processing in neural systems can be described and analyzed at multiple spatiotemporal scales. Generally, information at lower levels is more fine-grained but can be coarse-grained at higher levels. However, only information processed at specific scales of coarse-graining appears to be available for conscious awareness. We do not have direct experience of information available at the scale of individual neurons, which is noisy and highly stochastic. Neither do we have experience of more macro-scale interactions, such as interpersonal communications. Neurophysiological evidence suggests that conscious experiences co-vary with information encoded in coarse-grained neural states such as the firing pattern of a population of neurons. In this article, we introduce a new informational theory of consciousness: Information Closure Theory of Consciousness (ICT). We hypothesize that conscious processes are processes which form non-trivial informational closure (NTIC) with respect to the environment at certain coarse-grained scales. This hypothesis implies that conscious experience is confined, due to informational closure, from conscious processing to other coarse-grained scales. ICT proposes new quantitative definitions of both conscious content and conscious level. With these parsimonious definitions and a single hypothesis, ICT provides explanations and predictions of various phenomena associated with consciousness. The implications of ICT naturally reconcile issues in many existing theories of consciousness and provide explanations for many of our intuitions about consciousness. Most importantly, ICT demonstrates that information can be the common language between consciousness and physical reality.
... Almost a century ago, Adrian and Zotterman demonstrated that the firing rate of the stretch receptor in a muscle spindle of a frog increased with the strength of the stimulus [31], leading to the conclusion that the firing rate is the primary "currency" of information exchange. This coding scheme is known as rate coding. ...
Article
Full-text available
This paper reviews recent developments in the still-off-the-mainstream information and data processing area of spiking neural networks (SNNs), the third generation of artificial neural networks. We provide background information about the functioning of biological neurons, discussing the most important and commonly used mathematical neural models. The most relevant information processing techniques, learning algorithms, and applications of spiking neurons are described and discussed, focusing on the feasibility and biological plausibility of the methods. Specifically, we describe in detail the functioning and organization of the latest version of a 3D spatio-temporal SNN-based data machine framework called NeuCube, as well as its SNN-related submodules. All described submodules are accompanied by formal algorithmic formulations. The architecture is highly relevant for the analysis and interpretation of various types of spatio-temporal brain data (STBD), like EEG, NIRS, and fMRI, but we also highlight some recent STBD- and non-STBD-based applications. Finally, we summarise and discuss some open research problems that can be addressed in the future. These include, but are not limited to: application in the area of EEG-based BCI through transfer learning, and application in the area of affective computing through an extension of the NeuCube framework that would allow for a biologically plausible SNN-based integration of central and peripheral nervous system measures. A Matlab implementation of NeuCube's SNN-related module is available for research and teaching purposes.
... Muscle spindles have been the subject of many studies since the early twentieth century; for instance, studies of frog muscle spindle afferents uncovered rate coding in these neurons and its dependence on the input stimulus [86]. The reproducible responses of the spindles are more sensitive to slowly changing stretch. ...
Thesis
Full-text available
We study the collective dynamics of diffusively coupled excitable elements in small tree networks with regular and random connectivity, which model sensory neurons with branched myelinated distal terminals. These neurons possess dendritic trees with myelinated branches and nodes of Ranvier at every branching point. They may show spontaneous noisy periodic spiking. Examples of such neurons include touch receptors, muscle spindle afferents, and some electroreceptors. A mathematical model of such a neuron is a system of excitable elements coupled on a tree network. We show that the mechanism of periodic firing is rooted in the synchronization of the local activity of individual nodes, even though peripheral nodes may receive random independent inputs. We developed a theory that predicts the collective spiking activity in the physiologically relevant strong-coupling limit. The structural variability in random tree networks translates into collective network dynamics leading to a wide range of firing rates and coefficients of variation, which is most pronounced in the strong-coupling regime. We studied signal detection in regular and random trees. Our results indicate that the highest sensitivity occurs at specific optimal values of the input current for any given tree network. In the presence of a time-dependent uniform stimulus, we have shown that the information carried by spikes of the central node of a tree about the stimulus is highest for strong coupling, even though the firing rate is maximal for smaller values of the coupling strength. Finally, we studied the effect of inhomogeneous inputs on the collective response of tree networks and showed that it leads to additional variability of collective firing.
... Early neurophysiological experiments demonstrated that stimulus strength was related to spiking frequency of a sensory neuron [120]. Conversely, one of the arguments against rate-based coding is the speed with which animals and humans process data [121]: encoding information such as images and sounds with a firing rate would require a significant level of redundancy in the system, and is estimated to take longer than the experimentally observed speeds [122,123]. ...
Conference Paper
Spiking neural networks aspire to mimic the brain more closely than traditional artificial neural networks. They are characterised by a spike-like activation function inspired by the shape of an action potential in biological neurons. Spiking networks remain a niche area of research, perform worse than traditional artificial networks, and their real-world applications are limited. We hypothesised that neuroscience-inspired spiking neural networks with spike-timing-dependent plasticity demonstrate useful learning capabilities. Our objective was to identify features which play a vital role in information processing in the brain but are not commonly used in artificial networks, to implement them in spiking networks without copying the constraints that apply to living organisms, and to characterise their effect on data processing. The networks we created are not brain models; our approach can be labelled as artificial life. We performed a literature review and selected features such as local weight updates, neuronal sub-types, modularity, homeostasis, and structural plasticity. We used the review as a guide for developing consecutive iterations of the network, and eventually a whole evolutionary developmental system. We analysed the model's performance on clustering of spatio-temporal data. Our results show that combining evolution and unsupervised learning leads to faster convergence on optimal solutions and better stability of fit solutions than each approach separately. The choice of fitness definition affects the network's performance on fitness-related and unrelated tasks. We found that neuron-type-specific weight homeostasis can be used to stabilise the networks, thus enabling longer training. We also demonstrated that networks with a rudimentary architecture can evolve developmental rules which improve their fitness. This interdisciplinary work provides contributions to three fields: it proposes novel artificial intelligence approaches, tests the possible role of the selected biological phenomena in information processing in the brain, and explores the evolution of learning in an artificial life system.
... However, the firing rate, i.e. the average spike count over a given time interval, is more stable and robust against noise such as the variability in inter-spike intervals. Using this temporal coarse-graining strategy, known as rate coding (Adrian, 1926; Gerstner & Kistler, 2002; Maass & Bishop, 2001; Panzeri et al., 2015; Stein et al., 2005), neurons can encode stimulus intensity by increasing or decreasing the firing rate (Kandel et al., 2000; Stein et al., 2005). ...
Preprint
Full-text available
Information processing in neural systems can be described and analysed at multiple spatiotemporal scales. Generally, information at lower levels is more fine-grained and can be coarse-grained at higher levels. However, only information processed at specific levels seems to be available for conscious awareness. We do not have direct experience of information available at the level of individual neurons, which is noisy and highly stochastic. Neither do we have experience of more macro-level interactions such as interpersonal communications. Neurophysiological evidence suggests that conscious experiences co-vary with information encoded in coarse-grained neural states such as the firing pattern of a population of neurons. In this article, we introduce a new informational theory of consciousness: Information Closure Theory of Consciousness (ICT). We hypothesise that conscious processes are processes which form non-trivial informational closure (NTIC) with respect to the environment at certain coarse-grained levels. This hypothesis implies that conscious experience is confined, due to informational closure, from conscious processing to other coarse-grained levels. ICT proposes new quantitative definitions of both conscious content and conscious level. With these parsimonious definitions and a single hypothesis, ICT provides explanations and predictions of various phenomena associated with consciousness. The implications of ICT naturally reconcile issues in many existing theories of consciousness and provide explanations for many of our intuitions about consciousness. Most importantly, ICT demonstrates that information can be the common language between consciousness and physical reality.
... A classic study suggested that the intensity of a sensation is signaled by the frequency of action potentials in afferent nerve fibers, and this is also true for the recognition of pain [40]. The stronger the noxious stimulus input, the more ion channels open, and therefore the higher the action potential frequency generated [14]. ...
Article
Full-text available
Neuropathic pain from early diabetes onward overwhelms patients' lives, and diabetes mellitus has become an increasingly worldwide epidemic. No agent, so far, can terminate ongoing diabetes. Therefore, strategies that delay the process and its further complications, such as diabetic neuropathic pain (DNP), are preferred. Dysfunction of ion channels is generally accepted as the central mechanism of diabetes-associated neuropathy; among these channels, the hyperpolarization-activated cyclic nucleotide-gated 2 (HCN2) ion channel has been verified to be involved in neuropathic pain in dorsal root ganglion (DRG) neurons. Riluzole is a benzothiazole compound with neuroprotective properties acting on various ion channels, including hyperpolarization-activated voltage-dependent channels. To investigate the effect of riluzole within lumbar (L3-5) DRG neurons in DNP models, a subcutaneous streptozocin (STZ, 70 mg/kg) injection was used, followed by measurement of the paw withdrawal mechanical threshold (PWMT) and paw withdrawal thermal latency (PWTL), which both showed significant reduction and were relieved by riluzole (4 mg/kg/d) administration, performed once daily for 7 consecutive days, over 14 days. HCN2 expression was also decreased, in line with the alleviated behavioral tests. Our results indicate that continual systemic administration of riluzole alleviates STZ-induced DNP in rats, with involvement of downregulated HCN2 in the lumbar DRG.
... • Spike coding: Data is encoded into equal amplitude pulses (or digital signals), where information is carried by the frequency (rate-based) [26] of pulses or time of pulse emission (temporal) [27] [28]. It directly impacts computation requirements since the signals can now be represented with just 1s and 0s in the digital domain. ...
... Broadly speaking, information about the intensity of the sensory stimulus is encoded in the rate of spikes (Adrian, 1926;Zucker and Welker, 1969;Harvey et al., 2013;Lankarany et al., 2019). Spike rate can be defined in different ways such as (Brette, 2015): A) averaging the total number of spikes over time; B) averaging the total number of spikes over a population of neurons; ...
Thesis
Full-text available
Neurons use action potentials, or spikes, to process information. Different aspects of spiking, such as its rate or timing, are used to encode information about different features of an input. But noise can influence how robustly information is represented in each coding scheme. Additionally, information can be lost if neural representations are not transmitted with high fidelity to downstream neurons. In both cases, properties of the input (its amplitude and kinetics) and properties of the neuron (its spike initiation mechanism and excitability) impact neural information processing. In my thesis, I first investigated how axons are optimized to transmit spike-based representations. Using patch clamp electrophysiology combined with optogenetics, I showed that the axon of CA1 pyramidal neurons spikes transiently in response to sustained depolarization, in contrast to the soma and axon initial segment, which spike repetitively. These distinct spiking patterns are due to the differential expression of ion channels, supporting functional specialization of neuronal compartments. Specifically, low-threshold potassium channels (Kv1) cause the axon to behave as a high-pass filter, enabling high fidelity transmission of spike-based information so that the axon selectively responds to inputs with fast kinetics. Together with biophysical modeling, my findings demonstrate that spike initiation properties in each part of the neuron are well matched to the signals normally processed in that neuronal compartment. I then investigated how background synaptic activity (noise) affects rate and temporal coding of vibrotactile stimuli. Using patch clamp electrophysiology and dynamic clamp in vitro, I found that layer 2/3 pyramidal neurons in primary somatosensory cortex spike intermittently to inputs repeated at frequencies perceived as vibration. The fraction of inputs evoking a spike varies with input amplitude, enabling firing rate to encode stimulus intensity. Despite being small in amplitude, inputs are abrupt in onset, which allows them to evoke precisely timed spikes, even under noisy conditions. Unreliable spiking allows noise to produce irregular skipping, enabling spike times (patterns) to encode stimulus frequency. The reliability and precision of spikes depend on input amplitude and kinetics, respectively. With the help of simulations, my results show that noise helps multiplexed rate and temporal coding.
... • Spike coding: Data is encoded into equal amplitude pulses (or digital signals), where information is carried by the frequency (rate-based) [27] of pulses or time of pulse emission (temporal) [28] [29]. It directly impacts computation requirements since the signals can now be represented with just 1s and 0s in the digital domain. ...
Preprint
Full-text available
Spiking Neural Networks are a recent and new neural network design approach that promises tremendous improvements in power efficiency, computation efficiency, and processing latency. They do so by using asynchronous spike-based data flow, event-based signal generation and processing, and by modifying the neuron model to resemble biological neurons closely. While some initial works have shown significant evidence of applicability to common deep learning tasks, they have not been applied to a real-world problem. In this work, we first illustrate the applicability of spiking neural networks to a complex deep learning task, namely Lidar-based 3D object detection for automated driving. Secondly, we make a step-by-step demonstration of using a pre-trained convolutional neural network and modifying its architecture to simulate spiking behavior. We closely model essential aspects of spiking neural networks in simulation and achieve equivalent run-time and accuracy on a GPU. When the model is realized on neuromorphic hardware, we expect significantly improved power efficiency.
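The usual intuition behind simulating spiking behavior from a pre-trained network is that a normalized ReLU activation can be approximated by a spike count accumulated over several timesteps. The sketch below illustrates that idea only; it is not the authors' conversion procedure, and the normalization and timestep count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_spiking_relu(activations, T=500):
    """Treat each normalized activation as a per-timestep firing probability,
    draw Bernoulli spikes for T timesteps, and recover the activation as the
    spike count divided by T (a rate-coded approximation)."""
    a = np.clip(activations, 0.0, None)
    a = a / (a.max() + 1e-12)
    spikes = rng.random((T,) + a.shape) < a
    return spikes.mean(axis=0)

act = np.array([0.0, 0.2, 0.5, 1.7, 3.4])
print("normalized activations:", act / act.max())
print("spike-count estimates :", simulate_spiking_relu(act))
```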
... In fact, there has been a long debate concerning the nature of the neuronal code utilized by the brain to perform a variety of tasks. The spiking rate as a parameter that the brain might utilize to decode incoming information from sensory organs is just one of several possible variables [38,39]; reviewed in [35]. ...
Article
Full-text available
In the visual system, retinal ganglion cells (RGCs) of various subtypes encode preprocessed photoreceptor signals into a spike output which is then transmitted towards the brain through parallel feature pathways. Spike timing determines how each feature signal contributes to the output of downstream neurons in visual brain centers, thereby influencing efficiency in visual perception. In this study, we demonstrate a marked population-wide variability in RGC response latency that is independent of trial-to-trial variability and recording approach. RGC response latencies to simple visual stimuli vary considerably in a heterogenous cell population but remain reliable when RGCs of a single subtype are compared. This subtype specificity, however, vanishes when the retinal circuitry is bypassed via direct RGC electrical stimulation. This suggests that latency is primarily determined by the signaling speed through retinal pathways that provide subtype specific inputs to RGCs. In addition, response latency is significantly altered when GABA inhibition or gap junction signaling is disturbed, which further supports the key role of retinal microcircuits in latency tuning. Finally, modulation of stimulus parameters affects individual RGC response delays considerably. Based on these findings, we hypothesize that retinal microcircuits fine-tune RGC response latency, which in turn determines the context-dependent weighing of each signal and its contribution to visual perception.
... • Spike coding: Data is encoded into equal amplitude pulses (or digital signals), where information is carried by the frequency (rate-based) [26] of pulses or time of pulse emission (temporal) [27] [28]. It directly impacts computation requirements since the signals can now be represented with just 1s and 0s in the digital domain. ...
Preprint
Full-text available
Spiking Neural Networks are a recent and new neural network design approach that promises tremendous improvements in power efficiency, computation efficiency, and processing latency. They do so by using asynchronous spike-based data flow, event-based signal generation, processing, and modifying the neuron model to resemble biological neurons closely. While some initial works have shown significant initial evidence of applicability to common deep learning tasks, their applications in complex real-world tasks has been relatively low. In this work, we first illustrate the applicability of spiking neural networks to a complex deep learning task namely Lidar based 3D object detection for automated driving. Secondly, we make a step-by-step demonstration of simulating spiking behavior using a pre-trained convolutional neural network. We closely model essential aspects of spiking neural networks in simulation and achieve equivalent run-time and accuracy on a GPU. When the model is realized on a neuromorphic hardware, we expect to have significantly improved power efficiency.
... Our modern knowledge of neural information processing, however, has discounted the doctrine of specific nerve energies and its relatives as an explanation for sensory qualia. Indeed, Adrian (1926) showed that all motor and sensory nerves function in the same way, via electro-chemical energy, so that it would be impossible to tell from a recording of a stream of spike potentials whether they were occurring in a visual nerve, an auditory nerve, or, indeed in any part of the central nervous system that produces such potentials, unless one knew how the relevant information was encoded in the stream of spikes. He proposed that it was where in the brain a sensory nerve projected that made the difference in qualia experienced. ...
Article
Full-text available
In this paper we address the following problems and provide realistic answers to them: (1) What could be the physical substrate for subjective, phenomenal consciousness (P-consciousness)? Our answer: the electromagnetic (EM) field generated by the movement and changes of electrical charges in the brain. (2) Is this substrate generated in some particular part of the brains of conscious entities, or does it comprise the entirety of the brain/body? Our answer: a part of the thalamus in mammals, and homologous parts of other brains, generates the critical EM field. (3) From whence arise the qualia experienced in P-consciousness? Our answer: the relevant EM field is “structured” by emulating in the brain the information in EM fields arising from both external (the environment) and internal (the body) sources. (4) What differentiates the P-conscious EM field from other EM fields, e.g., the flux of photons scattered from object surfaces, the EM field of an electromagnet, or the EM fields generated in the brain that do not enter P-consciousness, such as those generated in the retina or occipital cortex, or those generated in brain areas that guide behavior through visual information in persons exhibiting “blindsight”? Our answer: living systems express a boundary between themselves and the environment, requiring them to model (coarsely emulate) information from their environment in order to control, through actions and to the extent possible, the vast sea of variety in which they are immersed. This model, expressed in an EM field, is P-consciousness. The model is the best possible representation of the moment-to-moment, niche-relevant (action-relevant: affordance) information an organism can generate (a Gestalt). Information that is at a lower level than niche-relevant, such as the unanalyzed retinal vector-field, is not represented in P-consciousness because it is not niche-relevant. Living organisms have sensory and other systems that have evolved to supply such information, albeit in a coarse form.
Chapter
This chapter discusses basic operational principles and learning in Spiking Neural Networks (SNNs). It presents a few popular neuron models that are widely used in SNNs for various purposes. The chapter explains various methods for training SNNs. The leaky integrate‐and‐fire model is one of the most popular spiking neuron models used to simulate SNNs efficiently. How to encode information in spikes is a popular research topic that both neuroscientists and computational artificial‐intelligence researchers have studied for decades. In a non‐spiking neuron, most likely a multiply‐accumulate operation is needed, whereas a masked addition is enough for a spiking neuron as spikes are binary in nature. The chapter introduces the learning in a shallow network. The remote supervised method, which was proposed by F. Ponulak in 2005, is a powerful learning rule for SNNs. Inspired by the backpropagation‐based stochastic gradient descent, the driving horse of training deep ANNs, many researchers started looking into backpropagation in deep SNNs.
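The chapter's point that a binary spike turns the usual multiply-accumulate into a masked addition is easy to see in code; the sketch below, with arbitrary weights and a toy leaky integrate-and-fire update, is only an illustration of that arithmetic.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(8)              # synaptic weights of one neuron

# Non-spiking neuron: real-valued inputs need a multiply-accumulate (dot product).
x_analog = rng.random(8)
mac_input = np.dot(w, x_analog)

# Spiking neuron: inputs are binary, so the same step is a masked addition --
# simply sum the weights of the synapses that received a spike.
x_spikes = rng.random(8) < 0.4           # boolean spike vector
masked_add_input = w[x_spikes].sum()

# Toy leaky integrate-and-fire update driven by the accumulated input.
v, v_th, leak = 0.0, 1.0, 0.9
v = leak * v + masked_add_input
fired = v >= v_th
if fired:
    v = 0.0                              # reset after the output spike
print(mac_input, masked_add_input, fired)
```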
Article
Sensory information is believed to be encoded in neuronal spikes using two different neural codes: the rate code (spike firing rate) and the temporal code (precisely timed spikes). Since the sensory cortex has a highly hierarchical feedforward structure, sensory information-carrying neural codes should propagate reliably across the feedforward network (FFN) of the cortex. Experimental evidence suggests that inhibitory interneurons, such as the parvalbumin-positive (PV) and somatostatin-positive (SST) interneurons, which have distinctively different electrophysiological and synaptic properties, modulate the neural codes during sensory information processing in the cortex. However, how PV and SST interneurons impact neural code propagation in the cortical FFN is unknown. We address this question by building a five-layer FFN model consisting of physiologically realistic Hodgkin–Huxley-type models of excitatory neurons and PV/SST interneurons at different ratios. In response to different firing rate inputs (20–80 Hz), a higher ratio of PV over SST interneurons promoted reliable propagation of all ranges of firing rate inputs. In contrast, in response to a range of precisely timed spikes in the form of pulse packets [with a different number of spikes (α, 40–400 spikes) and degree of dispersion (σ, 0–20 ms)], a higher ratio of SST over PV interneurons promoted reliable propagation of pulse packets. Our simulation results show that PV and SST interneurons differentially promote reliable propagation of the rate and temporal codes, respectively, indicating that the dynamic recruitment of PV and SST interneurons may play critical roles in the reliable propagation of sensory information-carrying neural codes in the cortical FFN.
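The temporal-code inputs used in this study are pulse packets characterized by a spike count α and a temporal dispersion σ; generating such a packet is straightforward, as in the sketch below (the packet centre and the example values of α and σ are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)

def pulse_packet(alpha=100, sigma_ms=5.0, t_center_ms=50.0):
    """Draw alpha spike times from a Gaussian centred at t_center_ms with
    standard deviation sigma_ms: smaller sigma means a more precisely timed
    volley, larger alpha a stronger one."""
    return np.sort(rng.normal(loc=t_center_ms, scale=sigma_ms, size=alpha))

tight = pulse_packet(alpha=100, sigma_ms=1.0)
loose = pulse_packet(alpha=100, sigma_ms=15.0)
print("tight packet spread:", round(tight.std(), 2), "ms")
print("loose packet spread:", round(loose.std(), 2), "ms")
```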
Article
Full-text available
Inspired by the structure and functions of the human skin, a highly sensitive capacitive-piezoelectric flexible sensing skin with fingerprint-like patterns to detect and discriminate between spatiotemporal tactile stimuli, including static and dynamic pressures and textures, is presented. The capacitive-piezoelectric tandem sensing structure is embedded in the phalange of a 3D-printed robotic hand, and a tempotron classifier system is used for tactile exploration. The dynamic tactile sensor, interfaced in an extended-gate configuration to a common-source metal oxide semiconductor field effect transistor (MOSFET), exhibits a sensitivity of 2.28 kPa−1. The capacitive sensing structure has nonlinear characteristics, with sensitivity varying from 0.25 kPa−1 in the low-pressure range (<100 Pa) to 0.002 kPa−1 at high pressure (≈2.5 kPa). The output from the presented sensor under a closed-loop tactile scan, carried out with an industrial robotic arm, is used as latency-coded spike trains in a spiking neural network (SNN) tempotron classifier system. With the capability of performing real-time binary naturalistic texture classification with a maximum accuracy of 99.45%, the presented bioinspired skin finds applications in robotics, prostheses, wearable sensors, and medical devices.
Article
Full-text available
In the neocortex, neuronal processing of sensory events is significantly influenced by context. For instance, responses in sensory cortices are suppressed to repetitive or redundant stimuli, a phenomenon termed “stimulus-specific adaptation” (SSA). However, in a context in which that same stimulus is novel, or deviates from expectations, neuronal responses are augmented. This augmentation is termed “deviance detection” (DD). This contextual modulation of neural responses is fundamental for how the brain efficiently processes the sensory world to guide immediate and future behaviors. Notably, context modulation is deficient in some neuropsychiatric disorders such as schizophrenia (SZ), as quantified by reduced “mismatch negativity” (MMN), an electroencephalography waveform reflecting a combination of SSA and DD in sensory cortex. Although the role of NMDA-receptor function and other neuromodulatory systems on MMN is established, the precise microcircuit mechanisms of MMN and its underlying components, SSA and DD, remain unknown. When coupled with animal models, the development of powerful precision neurotechnologies over the past decade carries significant promise for making new progress into understanding the neurobiology of MMN with previously unreachable spatial resolution. Currently, rodent models represent the best tool for mechanistic study due to the vast genetic tools available. While quantifying human-like MMN waveforms in rodents is not straightforward, the “oddball” paradigms used to study it in humans and its underlying subcomponents (SSA/DD) are highly translatable across species. Here we summarize efforts published so far, with a focus on cortically measured SSA and DD in animals to maintain relevance to the classically measured MMN, which has cortical origins. While mechanistic studies that measure and contrast both components are sparse, we synthesize a potential set of microcircuit mechanisms from the existing rodent, primate, and human literature. While MMN and its subcomponents likely reflect several mechanisms across multiple brain regions, understanding fundamental microcircuit mechanisms is an important step to understand MMN as a whole. We hypothesize that SSA reflects adaptations occurring at synapses along the sensory-thalamocortical pathways, while DD depends on both SSA inherited from afferent inputs and resulting disinhibition of non-adapted neurons arising from the distinct physiology and wiring properties of local interneuronal subpopulations and NMDA-receptor function.
Chapter
Neuromorphic engineering represents one of the most promising computing paradigms for overcoming the limitations of the present-day computers in terms of energy efficiency and processing speed. While traditional neuromorphic circuits are based on complementary metal oxide semiconductor (CMOS) transistors and large capacitors, the recently emerging nanoelectronic devices stand out as promising candidates for building the fundamental neuromorphic elements: neurons and synapses. In this chapter, we illustrate how hafnium oxide-based ferroelectric field-effect transistors (FeFETs) can be used to realize both artificial neurons and synapses for spiking neural networks. In particular, the accumulative switching property of FeFETs will be exploited to mimic the integrate-and-fire neuronal functionality, whereas the continuously tunable synaptic weights and the plasticity will be implemented by the partial polarization switching in large-area devices. Finally, the use of FeFETs for deep neural networks will be briefly discussed.
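For readers unfamiliar with the integrate-and-fire behavior that the FeFET's accumulative switching is meant to mimic, a minimal software sketch is given below; the threshold, leak, and input values are illustrative and unrelated to any device parameters.

# Minimal integrate-and-fire sketch in software; the chapter realizes the
# same accumulate-then-fire behaviour with FeFET accumulative switching.
# All constants here are illustrative, not device parameters.
import numpy as np

def integrate_and_fire(input_current, threshold=1.0, leak=0.02, dt=1.0):
    """Return spike times (indices) for a leaky integrate-and-fire unit."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(input_current):
        v += dt * (i_in - leak * v)   # accumulate input, leak a little
        if v >= threshold:            # fire and reset once threshold is crossed
            spikes.append(t)
            v = 0.0
    return spikes

rng = np.random.default_rng(0)
print(integrate_and_fire(rng.uniform(0.0, 0.1, size=200)))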
Thesis
How the structure of a neuronal network influences its function is a central question in neuroscience. Although the structure of many brain networks has been shown to exhibit non-trivial degree correlations, their influence on neuronal processing is not yet fully understood. This problem constitutes one focus of this thesis: we develop a mean-field model to investigate the activity of recurrent neuronal networks. Among other results, this investigation shows that degree correlations in neuronal networks can give rise to complex, multistable activity regimes. The second focus of this thesis is on retinal waves, which are thought to play an essential role in the development of the visual system. Experiments performed on rabbits showed that retinal waves at the earliest stage occur at a mean rate of 36±18 1/sec and propagate at a speed of 451±91 μm/sec. The waves are known to propagate in the ganglion cell layer of the retina and to rely on gap-junction coupling between the neurons. Since gap junctions (electrical synapses) have short integration times, it has been argued that they cannot account for the low propagation speed of retinal waves. We develop a model consisting of a two-dimensional network of coupled bursting neurons that explains the observed propagation speed. To our knowledge, this is the first demonstration that the experimentally observed wave speed and nucleation rate of early retinal waves are to be expected for a physiologically plausible gap-junction coupling strength and noise intensity.
Article
To assess Brette's proposal to expunge “coding” from the neuroscientist's lexicon, we must consider its origins. The coding metaphor is due largely to British nerve physiologist Edgar Adrian. I suggest two ways that the coding metaphor fueled his research. I conclude that the debate today should not be about the “truth” of the metaphor but about its continuing utility.
Article
Neurons in the gustatory cortex (GC) represent taste through time-varying changes in their spiking activity. The predominant view is that the neural firing rate represents the sole unit of taste information. It is currently not known whether the phase of spikes relative to lick timing is used by GC neurons for taste encoding. To address this question, we recorded spiking activity from >500 single GC neurons in male and female mice permitted to freely lick to receive four liquid gustatory stimuli and water. We developed a set of data analysis tools to determine the ability of GC neurons to discriminate gustatory information and then to quantify the degree to which this information exists in the spike rate versus the spike timing or phase relative to licks. These tools include machine learning algorithms for classification of spike trains and methods from geometric shape and functional data analysis. Our results show that while GC neurons primarily encode taste information using a rate code, the timing of spikes is also an important factor in taste discrimination. A further finding is that taste discrimination using spike timing is improved when the timing of licks is considered in the analysis. That is, the interlick phase of spiking provides more information than the absolute spike timing itself. Overall, our analysis demonstrates that the ability of GC neurons to distinguish among tastes is best when spike rate and timing are interpreted relative to the timing of licks. Significance Statement: Neurons represent information from the outside world via changes in their number of action potentials (spikes) over time. This study examines how neurons in the mouse gustatory cortex (GC) encode taste information when gustatory stimuli are experienced through the active process of licking. We use electrophysiological recordings and data analysis tools to evaluate the ability of GC neurons to distinguish tastants and then to quantify the degree to which this information exists in the spike rate versus the spike timing relative to licks. We show that the neuron's ability to distinguish between tastes is higher when spike rate and timing are interpreted relative to the timing of licks, indicating that the lick cycle is a key factor for taste processing.
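The notion of interlick phase can be made concrete with a small sketch: each spike time is expressed as a fraction of the lick cycle that contains it. The lick and spike times below are toy numbers, and the function is a simplified stand-in for the authors' analysis tools.

# Sketch of the "interlick phase" idea: express each spike time as a phase
# (0..1) within its surrounding lick cycle. Toy numbers, not the study's data.
import numpy as np

def interlick_phase(spike_times, lick_times):
    """Return the phase of each spike within the lick cycle that contains it."""
    lick_times = np.asarray(lick_times, dtype=float)
    phases = []
    for s in np.asarray(spike_times, dtype=float):
        i = np.searchsorted(lick_times, s) - 1        # index of preceding lick
        if 0 <= i < len(lick_times) - 1:              # spike inside a lick cycle
            t0, t1 = lick_times[i], lick_times[i + 1]
            phases.append((s - t0) / (t1 - t0))
    return np.array(phases)

licks = [0.0, 0.15, 0.31, 0.45]                       # seconds, ~6-7 Hz licking
print(interlick_phase([0.05, 0.20, 0.40], licks))     # phases within each lick cycle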
Article
Diabetic polyneuropathy (DPN) can be classified based on fiber diameter into three subtypes: small fiber neuropathy (SFN), large fiber neuropathy (LFN), and mixed fiber neuropathy (MFN). We examined the effect of different diagnostic models on the frequency of polyneuropathy subtypes in type 2 diabetes patients with DPN. This study was based on patients from the Danish Center for Strategic Research in Type 2 Diabetes cohort. We defined DPN as probable or definite DPN according to the Toronto Consensus Criteria. DPN was then subtyped according to four distinct diagnostic models. A total of 277 diabetes patients (214 with DPN and 63 with no DPN) were included in the study. Applying different diagnostic models revealed considerable variation in polyneuropathy subtypes, independent of the degree of certainty of the DPN diagnosis. For probable and definite DPN, the frequency of subtypes across diagnostic models varied from 1.4% to 13.1% for SFN, 9.3% to 21.5% for LFN, 51.4% to 83.2% for MFN, and 0.5% to 14.5% for non‐classifiable neuropathy (NCN). For the definite DPN group, the frequency of subtypes varied from 1.6% to 13.5% for SFN, 5.6% to 20.6% for LFN, 61.9% to 89.7% for MFN, and 0.0% to 6.3% for NCN. The frequency of polyneuropathy subtypes thus depends on the type and number of criteria applied in a diagnostic model. Future consensus criteria should clearly define the sensory functions to be tested, the methods of testing, and how findings should be interpreted for both clinical practice and research purposes.
Article
The past decade has witnessed remarkable advances in the simultaneous measurement of neuronal activity across many brain regions, enabling fundamentally new explorations of the brain-spanning cellular dynamics that underlie sensation, cognition and action. These recently developed multiregion recording techniques have provided many experimental opportunities, but thoughtful consideration of methodological trade-offs is necessary, especially regarding field of view, temporal acquisition rate and ability to guarantee cellular resolution. When applied in concert with modern optogenetic and computational tools, multiregion recording has already made possible fundamental biological discoveries — in part via the unprecedented ability to perform unbiased neural activity screens for principles of brain function, spanning dozens of brain areas and from local to global scales. Modern approaches for multiregion investigation provide new opportunities and considerations for exploring brainwide neural dynamics. In this Review, Machado, Kauvar and Deisseroth discuss advances in the simultaneous measurement and analysis of neuronal activity across many brain regions.
Article
Full-text available
Neural correlates of external variables provide potential internal codes that guide an animal’s behavior. Notably, first-order features of neural activity, such as single-neuron firing rates, have been implicated in encoding information. However, the extent to which higher-order features, such as multineuron coactivity, play primary roles in encoding information or secondary roles in supporting single-neuron codes remains unclear. Here, we show that millisecond-timescale coactivity among hippocampal CA1 neurons discriminates distinct, short-lived behavioral contingencies. This contingency discrimination was unrelated to the tuning of individual neurons, but was instead an emergent property of their coactivity. Contingency-discriminating patterns were reactivated offline after learning, and their reinstatement predicted trial-by-trial memory performance. Moreover, optogenetic suppression of inputs from the upstream CA3 region during learning impaired coactivity-based contingency information in the CA1 and subsequent dynamic memory retrieval. These findings identify millisecond-timescale coactivity as a primary feature of neural firing that encodes behaviorally relevant variables and supports memory retrieval. El-Gaby et al. combine multiunit recordings and optogenetic silencing in the mouse hippocampus and uncover a primary role for millisecond-timescale neural coactivity in encoding behavioral contingency information and supporting memory retrieval.
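A toy version of millisecond-timescale coactivity, assuming nothing beyond binning two spike trains into short windows and counting joint-firing bins, might look as follows; the published analysis is considerably more elaborate.

# Toy illustration of millisecond-timescale coactivity: count how often a
# pair of neurons spikes in the same short time bin. Synthetic spike trains;
# not the study's actual analysis pipeline.
import numpy as np

def coactivity_count(spikes_a, spikes_b, bin_ms=5.0, duration_ms=1000.0):
    """Number of bins in which both neurons fire at least once."""
    edges = np.arange(0.0, duration_ms + bin_ms, bin_ms)
    a, _ = np.histogram(spikes_a, bins=edges)
    b, _ = np.histogram(spikes_b, bins=edges)
    return int(np.sum((a > 0) & (b > 0)))

rng = np.random.default_rng(1)
a = np.sort(rng.uniform(0, 1000, 40))   # ~40 Hz over 1 s
b = np.sort(rng.uniform(0, 1000, 40))
print(coactivity_count(a, b))           # chance-level coincidences for independent trains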
Thesis
A decline of the main sensory modalities (vision, hearing, taste, and smell) with age is well documented. Tactile perception is influenced by various parameters. Many of these parameters have been well studied in the literature, such as the central nervous system, the density of mechanoreceptors in the skin, and vibrotactile detection thresholds. These parameters have been studied to understand the weakening of tactile perception over time and the natural differences between men and women. The human finger is still used to qualify the quality and properties of a surface. Consequently, panels of different ages and sexes are still used by various industrial sectors to understand consumer needs. However, this method is very costly, time-consuming, and subjective. The objective of this work was to develop a methodology that provides a panel-like tactile objectivation, which means that the effects of age and sex must be taken into account. This thesis provides in vivo measurements for 40 human fingers. Our developments propose an artificial finger capable of mimicking the human finger while accounting for the effects of age and sex. To reach this objective, we developed several approaches that combine the multiscale topography of the human finger, its anisotropic mechanical properties, its tribological properties, and the effect of the real contact area and the touch direction on the friction force and the generated vibrations. The development of a signal-processing algorithm made it possible to identify coefficients representative of the quality of the touched surface. Emotion engineering was another line of research in this work. Instrumenting the human finger with a laser-Doppler device made it possible to evaluate the tactile quality of samples as a function of the emotion generated during the touching process, studied through the frequency variation of blood flow. The development of a bio-inspired artificial finger with biophysical properties close to those of the human finger enabled a touch demonstrator that can incorporate the age and sex of a panel. This device meets the specifications defined in the ANR project "plasticTouchDevice" and allows plastics manufacturers to assess their innovations using a device that integrates the effects of age and sex into the metrology of the touch quality of plastic materials.
Article
An adaptive response to a temporally constant stimulus is a common feature of real neurons. The circuit of an adaptive neuron model consumes less power and requires less data transmission bandwidth than the circuit of a non-adaptive neuron model, especially for encoding time-varying signals. The memristor is a good candidate for mimicking the behavior of neurons, so that a simple memristor-based circuit can directly emulate many specific behaviors of neurons with low power and low area consumption. In this work, for the first time, we show that just as the nonvolatile switching property of the memristor can be used to represent long-term adaptation behavior in a memristor-based neuron, short-term adaptation behavior can also be emulated directly by the same memristor-based circuit owing to the volatile switching property of the memristor. Here, short-term adaptation is realized using the volatile property of the memristor, unlike neuron circuits in which adaptation is realized using leakage modulation; as a result, extra power dissipation in the memristor-based neuron can be reduced. Two different types of memristors are used to implement the proposed circuit of an adaptive leaky integrate-and-fire neuron: the volatile/non-volatile memristor and the threshold-switching memristor are placed in the charge and discharge paths of the capacitor, respectively. Results show that the volatile or non-volatile resistance change of the charging memristor under different input patterns to the neuron circuit determines the type of adaptive behavior of the neuron response, i.e., the neuron may show short-term adaptation, long-term adaptation, or no adaptation behavior at all. Comparison with similar works shows that the energy consumption per spike of the proposed neuron is relatively low, while the circuit is very area-efficient.
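A behavioral sketch of spike-frequency adaptation in a leaky integrate-and-fire neuron, independent of any memristor implementation, is shown below; each spike increments an adaptation variable that opposes the input, so interspike intervals lengthen under a constant stimulus. All parameter values are illustrative assumptions.

# Behavioural sketch of an adaptive leaky integrate-and-fire neuron: each
# spike raises an adaptation variable that makes the next spike harder, so
# the firing rate drops under a constant input. Parameters are illustrative
# and do not model the memristor devices themselves.
def adaptive_lif(i_const=0.1, steps=400, threshold=1.0,
                 leak=0.01, adapt_jump=0.05, adapt_decay=0.99):
    v, w, spikes = 0.0, 0.0, []
    for t in range(steps):
        v += i_const - leak * v - w   # adaptation current w opposes the input
        w *= adapt_decay              # adaptation decays slowly between spikes
        if v >= threshold:
            spikes.append(t)
            v = 0.0
            w += adapt_jump           # each spike strengthens adaptation
    return spikes

s = adaptive_lif()
print([b - a for a, b in zip(s, s[1:])])  # interspike intervals lengthen, then settle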
Article
Editor’s notes: This extended editorial provides an introduction to stochastic computing (SC) and its usage in emerging neuromorphic applications. It covers SC primitives and the tradeoffs that occur when designing larger SC-based systems. The article shows how SC enables low-cost, low-power, and error-tolerant hardware implementation of neural networks suitable for edge computing. It provides a brief survey of recent proposals in this domain and introduces the articles in this special issue. — Jörg Henkel, Karlsruhe Institute of Technology
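The core SC primitive can be demonstrated in a few lines: a value in [0, 1] is encoded as a random bitstream whose fraction of ones equals that value, and multiplying two independent values reduces to a bitwise AND of their streams. The stream length and values below are arbitrary choices for illustration.

# A minimal stochastic-computing example: a value p in [0, 1] is represented
# by a random bitstream whose fraction of 1s equals p, and multiplication of
# two independent streams reduces to a bitwise AND. Illustrative only.
import numpy as np

rng = np.random.default_rng(42)

def to_bitstream(p, n=4096):
    """Unipolar encoding: each bit is 1 with probability p."""
    return rng.random(n) < p

a, b = 0.6, 0.3
product_stream = to_bitstream(a) & to_bitstream(b)   # AND gate multiplies
print(product_stream.mean())   # ~0.18, an approximation of 0.6 * 0.3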
Article
The vagus nerve is an indispensable body-brain connection that controls vital aspects of autonomic physiology like breathing, heart rate, blood pressure, and gut motility, reflexes like coughing and swallowing, and survival behaviors like feeding, drinking, and sickness responses. Classical physiological studies and recent molecular/genetic approaches have revealed a tremendous diversity of vagal sensory neuron types that innervate different internal organs, with many cell types remaining poorly understood. Here, we review the state of knowledge related to vagal sensory neurons that innervate the respiratory, cardiovascular, and digestive systems. We focus on cell types and their response properties, physiological/behavioral roles, engaged neural circuits and, when possible, sensory receptors. We are only beginning to understand the signal transduction mechanisms used by vagal sensory neurons and upstream sentinel cells, and future studies are needed to advance the field of interoception to the level of mechanistic understanding previously achieved for our external senses.
Article
During tactile sensation by rodents, whisker movements across surfaces generate complex whisker motions, including discrete, transient stick–slip events, which carry information about surface properties. The characteristics of these events and how the brain encodes this tactile information remain enigmatic. We found that cortical neurons show a mixture of synchronized and nontemporally correlated spikes in their tactile responses. Synchronous spikes convey the magnitude of stick–slip events by numerous aspects of temporal coding. These spikes show preferential selectivity for kinetic and kinematic whisker motion. By contrast, asynchronous spikes in each neuron convey the magnitude of stick–slip events by their discharge rates, response probability, and interspike intervals. We further show that the differentiation between these two types of activity is highly dependent on the magnitude of stick–slip events and stimulus and response history. These results suggest that cortical neurons transmit multiple components of tactile information through numerous coding strategies.
Article
Full-text available
Mounting neurophysiological evidence suggests that interpersonal interaction relies on continual communication between cell assemblies within interacting brains and continual adjustments of these neuronal dynamic states between the brains. In this Hypothesis and Theory article, a Hyper-Brain Cell Assembly Hypothesis is suggested on the basis of a conceptual review of neural synchrony and network dynamics and their roles in emerging cell assemblies within the interacting brains. The proposed hypothesis states that such cell assemblies can emerge not only within, but also between the interacting brains. More precisely, the hyper-brain cell assembly encompasses and integrates oscillatory activity within and between brains, and represents a common hyper-brain unit, which has a certain relation to social behavior and interaction. Hyper-brain modules or communities, comprising nodes across two or several brains, are considered as one of the possible representations of the hypothesized hyper-brain cell assemblies, which can also have a multidimensional or multilayer structure. It is concluded that the neuronal dynamics during interpersonal interaction is brain-wide, i.e., it is based on common neuronal activity of several brains or, more generally, of the coupled physiological systems including brains.
Article
This work addresses how to naturally adopt the l²-norm cosine similarity in a neuromemristive system and studies the unsupervised learning performance on handwritten digit image recognition. The proposed architecture is a two-layer fully connected neural network with a hard winner-take-all (WTA) learning module. For the input layer, we propose a single-spike temporal code that transforms input stimuli into a set of single spikes with different latencies and voltage levels. For the synapse model, we employ a compound memristor in which stochastically switching binary-state memristors are connected in parallel, offering a reliable and scalable multi-state solution for synaptic weight storage. A hardware-friendly synaptic adaptation mechanism is proposed to realize spike-timing-dependent plasticity learning. Input spikes are sent through these memristive synapses to every integrate-and-fire neuron in the fully connected output layer, where the hard WTA network motif introduces competition based on cosine similarity for the given input stimuli. Finally, we report 92.64% accuracy on unsupervised digit recognition with only a single epoch of MNIST training, via high-level simulations, including an extensive analysis of the impact of system parameters.
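The cosine-similarity winner-take-all step, stripped of the memristive synapse model, can be sketched as follows; the weights and input below are random placeholders, not trained values.

# Sketch of the winner-take-all idea based on cosine similarity: the output
# unit whose weight vector is most aligned with the input wins.
# Random toy weights; not the memristive implementation.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(784)                   # a flattened 28x28 input, e.g. MNIST-sized
W = rng.random((10, 784))             # 10 output neurons, one weight row each

def cosine_wta(x, W):
    sims = (W @ x) / (np.linalg.norm(W, axis=1) * np.linalg.norm(x))
    return int(np.argmax(sims)), sims  # index of the winning neuron and all similarities

winner, sims = cosine_wta(x, W)
print(winner, sims.round(3))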
Article
Full-text available
In the nervous system, information is conveyed by sequences of action potentials, called spike trains. As MacKay and McCulloch suggested, spike trains can be represented as bit sequences coming from Information Sources (IS). Previously, we studied relations between the Information Transmission Rate (ITR) of spike trains and their correlations and frequencies. Now, I concentrate on the problem of how spike fluctuations affect the ITR. The IS are typically modeled as stationary stochastic processes, which I consider here as two-state Markov processes. As a measure of spike-train fluctuations, I take the standard deviation σ, which measures the average fluctuation of spikes around the average spike frequency. I found that the character of the relation between ITR and signal fluctuations depends strongly on the parameter s, defined as a sum of transition probabilities from the no-spike state to the spike state. The estimate of the Information Transmission Rate was given by expressions depending on the values of the signal fluctuations and the parameter s. It turned out that for s<1 the quotient ITR/σ has a maximum and can tend to zero depending on the transition probabilities, while for 1<s the quotient ITR/σ is bounded away from 0. Additionally, it was shown that the quotient of ITR by the variance behaves in a completely different way. Similar behavior was observed when the classical Shannon entropy terms in the Markov entropy formula were replaced by their polynomial approximations. My results suggest that in a noisier environment (1<s), to obtain appropriate reliability and efficiency of transmission, an IS with a higher tendency to transition from the no-spike state to the spike state should be applied. Such a selection of appropriate parameters plays an important role in designing learning mechanisms to obtain networks with higher performance.
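The quantities discussed here can be sketched for a two-state (no-spike/spike) Markov source: with transition probabilities a = P(no spike → spike) and b = P(spike → no spike), the stationary spike probability, the entropy rate (a proxy for the ITR per time bin), and the standard deviation σ of the spike indicator follow from standard formulas. The parameter values below are illustrative assumptions, and the exact definitions used in the paper may differ.

# A hedged sketch for a two-state (no-spike/spike) Markov source:
# stationary spike probability, entropy rate in bits per bin (a proxy for
# the ITR), and the standard deviation of the 0/1 spike indicator.
import numpy as np

def h2(p):
    """Binary entropy in bits, with the 0*log(0) = 0 convention."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def two_state_markov(a, b):
    pi1 = a / (a + b)                      # stationary probability of a spike
    entropy_rate = (1 - pi1) * h2(a) + pi1 * h2(b)
    sigma = np.sqrt(pi1 * (1 - pi1))       # std of the 0/1 spike indicator
    return pi1, entropy_rate, sigma

# Two illustrative settings; the abstract's parameter s is built from these
# transition probabilities.
for a, b in [(0.1, 0.4), (0.6, 0.7)]:
    pi1, H, sigma = two_state_markov(a, b)
    print(f"a={a}, b={b}: pi1={pi1:.2f}, H={H:.3f} bits/bin, sigma={sigma:.2f}, H/sigma={H/sigma:.2f}")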