Book

The organization of behavior

Authors: Donald O. Hebb
... We further demonstrate how neuromodulatory mechanisms that modulate the shape of triplet STDP or the synaptic transmission function differentially promote connectivity motifs underlying the emergence of assemblies, and quantify the differences using graph theoretic measures. ... sculpted by experience and has become a most relevant link between circuit structure and function [1]. The original formulation of Hebbian plasticity, whereby "cells that fire together, wire together" [2,3], fostered the concept of 'cell assemblies' [4], defined as groups of neurons that are repeatedly co-activated, leading to the strengthening of synaptic connectivity between individual neurons. ...
... tend to receive more common input than would be expected by chance [12,16–18], and cortical pyramidal neurons tend to be more strongly connected to neurons that share stimulus preference [13,19,20], providing evidence for a clustered architecture. It has been proposed that this organization enables the cortex to intrinsically generate reverberating patterns of neural activity when representing different stimulus features [1,21]. Thus, neuronal assemblies can be interpreted as the building blocks of cortical microcircuits, which are differentially recruited during distinct functions, such as the binding of different features of a sensory stimulus [7,17,22]. ...
... Nevertheless, if spike triplets were also taken into account for depression, the derivation would be identical, with the corresponding modification to the variables involved. After some calculations, we can rewrite Eq. (1) in the Fourier domain as

$$\dot{W}_{ij} = \underbrace{r_i r_j \bar{L}_2(0) + r_i \bar{L}_3(0,0)}_{\text{independent spikes}} + \dots$$ ...
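To make the independent-spikes term above concrete, the following minimal sketch (not the preprint's code) evaluates the expected weight drift under a triplet STDP rule for uncorrelated Poisson pre- and postsynaptic spike trains; the parameter values are illustrative, loosely based on published fits of the Pfister–Gerstner triplet model.

```python
import numpy as np

# Expected drift dW/dt of one synapse under a triplet STDP rule for
# independent Poisson pre/post spiking (the "independent spikes" term).
# Parameter values are illustrative, loosely based on published fits.
A2p, A2m = 0.0, 7.1e-3            # pair potentiation / depression amplitudes
A3p, A3m = 6.5e-3, 0.0            # triplet potentiation / depression amplitudes
tau_p, tau_m = 16.8e-3, 33.7e-3   # pair trace time constants (s)
tau_x, tau_y = 101e-3, 114e-3     # triplet trace time constants (s)

def drift(r_pre, r_post):
    """Mean dW/dt for uncorrelated Poisson spike trains at the given rates."""
    pair = r_pre * r_post * (A2p * tau_p - A2m * tau_m)
    triplet = (A3p * tau_p * tau_y * r_pre * r_post**2
               - A3m * tau_m * tau_x * r_pre**2 * r_post)
    return pair + triplet

for r in (2.0, 10.0, 40.0):       # Hz; note the sign change with rate
    print(f"r = {r:5.1f} Hz -> dW/dt = {drift(r, r):+.3e}")
```

With these illustrative values the drift is negative at low rates and positive at high rates, the BCM-like behavior that pair-based rules alone do not capture.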
Preprint
Full-text available
Non-random connectivity can emerge without structured external input driven by activity-dependent mechanisms of synaptic plasticity based on precise spiking patterns. Here we analyze the emergence of global structures in recurrent networks based on a triplet model of spike timing dependent plasticity (STDP), which depends on the interactions of three precisely-timed spikes and can describe plasticity experiments with varying spike frequency better than the classical pair-based STDP rule. We describe synaptic changes arising from emergent higher-order correlations, and investigate their influence on different connectivity motifs in the network. Our motif expansion framework reveals novel motif structures under the triplet STDP rule, which support the formation of bidirectional connections and loops, in contrast to the classical pair-based STDP rule. Therefore, triplet STDP drives the spontaneous emergence of self-connected groups of neurons, or assemblies, proposed to represent functional units in neural circuits. Assembly formation has often been associated with plasticity driven by firing rates or external stimuli. We propose that assembly structure can emerge without the need for externally patterned inputs or the symmetric pair-based STDP rule commonly assumed in previous studies. The emergence of non-random network structure under triplet STDP occurs through internally-generated higher-order correlations, which are ubiquitous in natural stimuli and neuronal spiking activity, and important for coding. We further demonstrate how neuromodulatory mechanisms that modulate the shape of triplet STDP or the synaptic transmission function differentially promote connectivity motifs underlying the emergence of assemblies, and quantify the differences using graph theoretic measures.
... The generally accepted hypothesis of learning is that it is realized by changes of synaptic weights through the process of (long-term) synaptic plasticity (Hebb, 1949; Martin, Grimwood, …), synaptic plasticity being the general term for the different kinds of biological mechanisms that adapt the weights of synapses depending on neuronal activities. ...
... Synaptic weights are strengthened or weakened depending on the activity of the pre- and postsynaptic neurons (Bi & Poo, 1998; Bliss & Lømo, 1973; Markram, Lübke, Frotscher, & Sakmann, 1997). Hebbian plasticity describes the process of increasing a synaptic weight if the activity of the two connected neurons is correlated (Hebb, 1949). Several theoretical studies indicate that Hebbian plasticity alone would lead to divergent synaptic and neuronal dynamics, thus requiring homeostatic synaptic plasticity (Triesch, Vo, & Hafner, 2018; …), a mechanism that adapts the synaptic weights such that the neuronal dynamics remain in a desired "healthy" regime. ...
Article
Full-text available
Author Summary Everyday life requires living beings to continuously recognize and categorize perceived stimuli from the environment. To master this task, the representations of these stimuli become increasingly sparse and expanded along the sensory pathways of the brain. In addition, the underlying neuronal network has to be structured according to the inherent organization of the environmental stimuli. However, how the neuronal network learns the required structure even in the presence of noise remains unknown. In this theoretical study, we show that the interplay between synaptic plasticity—controlling the synaptic efficacies—and intrinsic plasticity—adapting the neuronal excitabilities—enables the network to encode the organization of environmental stimuli. It thereby structures the network to correctly categorize stimuli even in the presence of noise. After having encoded the stimuli’s organization, consolidating the synaptic structure while keeping the neuronal excitabilities dynamic enables the neuronal system to readapt to arbitrary levels of noise resulting in a near-optimal classification performance for all noise levels. These results provide new insights into the interplay between different plasticity mechanisms and how this interplay enables sensory systems to reliably learn and categorize stimuli from the surrounding environment.
... The original perceptron consisted of a single layer of input neurons fully interconnected in a feedforward fashion to a layer of output neurons. A Hebbian learning rule [39] was proposed to adapt the weights [38]. This single-layer perceptron was able to solve only linearly separable problems [40]. ...
... STDP is a Hebbian learning rule. The traditional Hebbian synaptic plasticity rule was formulated in 1949, suggesting that synapses increase their efficiency if they persistently take part in firing the post-synaptic neuron [39]. Much later, in 1993, STDP learning algorithms were reported [31,32] as a refinement of this rule, taking into account the precise relative timing of individual pre- and post-synaptic spikes rather than their average rates over time. ...
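For reference, the pair-based timing dependence that triplet STDP refines can be sketched in a few lines; the amplitudes and time constants below are generic illustrative values, not taken from the cited papers.

```python
import numpy as np

# Classical pair-based STDP window: the weight change depends on the
# relative timing dt = t_post - t_pre of a single spike pair.
# Amplitudes and time constants are generic illustrative values.
A_plus, A_minus = 0.010, 0.012
tau_plus, tau_minus = 20e-3, 20e-3   # seconds

def stdp_dw(dt):
    """Weight change for one pre/post spike pair separated by dt seconds."""
    if dt > 0:    # pre fires before post -> potentiation
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)  # post before pre -> depression

for dt in (-40e-3, -10e-3, 10e-3, 40e-3):
    print(f"dt = {dt * 1e3:+5.0f} ms -> dw = {stdp_dw(dt):+.4f}")
```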
Article
Full-text available
Inspired by biology, neuromorphic systems have been trying to emulate the human brain for decades, taking advantage of its massive parallelism and sparse information coding. Recently, several large-scale hardware projects have demonstrated the outstanding capabilities of this paradigm for applications related to sensory information processing. These systems allow for the implementation of massive neural networks with millions of neurons and billions of synapses. However, the realization of learning strategies in these systems consumes a significant proportion of resources in terms of area and power. The recent development of nanoscale memristors that can be integrated with Complementary Metal–Oxide–Semiconductor (CMOS) technology offers a very promising solution for emulating the behavior of biological synapses. Therefore, hybrid memristor-CMOS approaches have been proposed to implement large-scale neural networks with learning capabilities, offering a scalable and lower-cost alternative to existing CMOS systems.
... In the latter, this phenomenon has been linked to various cognitive functions including perception [7–9], attention [10–14], and learning [15–25]. Learning involves the dynamic adjustment of connections among neuronal populations in the form of synaptic plasticity [26]. ...
... We extend this model by dynamically adjusting conduction velocity (and hence transmission delays) in addition to synaptic weights. Changes in both synaptic weight and conduction velocity depend on a Hebbian learning rule [26], which is based on the frequency of coactivations among pairs of network oscillators. That is, both connection weights and conduction velocities are time-dependent parameters influencing each other and the dynamics of the network as a whole. ...
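A rough sketch of this idea (our own construction, not the authors' code) might look like the following: a Kuramoto ring whose coupling weights and conduction velocities both adapt to pairwise coactivation. All parameter values and the exact update forms are illustrative assumptions.

```python
import numpy as np

# Kuramoto ring with Hebbian adaptation of both coupling weights K and
# conduction velocities V (delays = distance / velocity). Illustrative
# sketch only; parameters and update forms are our assumptions.
rng = np.random.default_rng(0)
N, dt, steps = 20, 1e-3, 3000
omega = rng.normal(60.0, 0.3, N)            # natural frequencies (rad/s)
K = np.full((N, N), 0.5)                    # coupling weights
V = np.full((N, N), 1.0)                    # conduction velocities (m/s)
idx = np.arange(N)
D = np.minimum(np.abs(idx[:, None] - idx), N - np.abs(idx[:, None] - idx)) * 0.01
max_lag = 400
hist = np.tile(rng.uniform(0, 2 * np.pi, N), (max_lag, 1))  # phase history

eps_k, eps_v = 1.0, 1.0                     # Hebbian learning rates
for _ in range(steps):
    theta = hist[-1]
    lag = np.clip((D / (V * dt)).astype(int), 0, max_lag - 1)
    theta_del = hist[-1 - lag, idx]         # phase of j seen at t - delay_ij
    dtheta = omega + (K * np.sin(theta_del - theta[:, None])).mean(1)
    co = np.cos(theta_del - theta[:, None]) # pairwise coactivation measure
    K = np.clip(K + dt * eps_k * co, 0.0, 1.0)   # Hebbian weight update
    V = np.clip(V + dt * eps_v * co, 0.1, 10.0)  # Hebbian velocity update
    hist = np.vstack([hist[1:], theta + dt * dtheta])

R = np.abs(np.exp(1j * hist[-1]).mean())    # Kuramoto order parameter
print(f"global synchrony R = {R:.2f}")
```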
Article
Full-text available
Models of learning typically focus on synaptic plasticity. However, learning is the result of both synaptic and myelin plasticity. Specifically, synaptic changes often co-occur and interact with myelin changes, leading to complex dynamic interactions between these processes. Here, we investigate the implications of these interactions for the coupling behavior of a system of Kuramoto oscillators. To that end, we construct a fully connected, one-dimensional ring network of phase oscillators whose coupling strength (reflecting synaptic strength) as well as conduction velocity (reflecting myelination) are each regulated by a Hebbian learning rule. We evaluate the behavior of the system in terms of structural (pairwise connection strength and conduction velocity) and functional connectivity (local and global synchronization behavior). We find that adaptive myelination is able to both functionally decouple structurally connected oscillators as well as to functionally couple structurally disconnected oscillators. With regard to the latter, we find that for conditions in which a system limited to synaptic plasticity develops two distinct clusters both structurally and functionally, additional adaptive myelination allows for functional communication across these structural clusters. These results confirm that network states following learning may be different when myelin plasticity is considered in addition to synaptic plasticity, pointing toward the relevance of integrating both factors in computational models of learning.
... Costa, Pannunzi, Deco, & Pickering, 2017), the co-activation of shared translation words could have the further consequence that words of one language modify the structure of lexical representations in the other language. According to the principles of Hebbian learning (Hebb, 1949), any two representations that are repeatedly active at the same time will tend to become associated. As a result, the words veer and boleshik, which are unrelated in Russian, could become associated by virtue of their common relation to the English translation fan. ...
... Degani et al. (2011) proposed that the observed effect was due to the co-activation of two words of one language and their meanings with a single word of the other language. They argued that following Hebbian principles (Hebb, 1949), co-activation leads to "an association between the two meanings and/or lexical representations", experimentally manifested in increased semantic relatedness ratings for words sharing a translation in a bilingual's other language. ...
Article
Words of one language often have multiple translations into another language. Does mapping of an L2 word onto multiple L1 words impact how these L1 words are represented in the bilingual lexicon? Russian-English bilinguals decided on the lexical status (Exp1) or the conceptual relatedness (Exp2) of pairs of Russian words that had the same or different translations in English. We obtained evidence for a facilitative effect of L2-to-L1 translation ambiguity. In Exp1, bilinguals were faster to respond to a Russian target if a prime had the same vs. different English translation as the target. Further, the magnitude of the N400 ERP component was reduced and the P200 was enhanced in the translation ambiguous compared to non-ambiguous condition. In Exp2, translation alternatives were rated as being more conceptually similar than words with different translations. Thus, the presence of a shared L2 translation leads to some convergence of corresponding L1 lexico-conceptual representations.
... Brain fibres grow and reach out to connect to other neurons, neuroplasticity allows new connections to be created or areas to move and change function, and synapses may strengthen or weaken based on their importance. "Neurons that fire together, wire together", as Hebb suggested [31]. ...
... In the natural world, even a single-cell organism can maneuver smartly to adapt to the environment and prey on food [39]. From the view of collective intelligence [29,30,31], human intelligence could be an automatic integrated outcome of smarter neurons. ...
Preprint
Full-text available
The recent success of Deep Neural Networks (DNNs) has revealed the significant capability of neuromorphic computing in many challenging applications. Although DNNs are derived from emulating biological neurons, there still exist doubts over whether or not DNNs are the final and best model to emulate the mechanism of human intelligence. In particular, there are two discrepancies between computational DNN models and the observed facts of biological neurons. First, human neurons are interconnected randomly, while DNNs need carefully-designed architectures to work properly. Second, human neurons usually have a long spiking latency (~100ms) which implies that not many layers can be involved in making a decision, while DNNs could have hundreds of layers to guarantee high accuracy. In this paper, we propose a new computational neuromorphic model, namely shallow unorganized neural networks (SUNNs), in contrast to DNNs. The proposed SUNNs differ from standard ANNs or DNNs in three fundamental aspects: 1) SUNNs are based on an adaptive neuron cell model, Smart Neurons, that allows each neuron to adaptively respond to its inputs rather than carrying out a fixed weighted-sum operation like the neuron model in ANNs/DNNs; 2) SUNNs cope with computational tasks using only shallow architectures; 3) SUNNs have a natural topology with random interconnections, as the human brain does, and as proposed by Turing's B-type unorganized machines. We implemented the proposed SUNN architecture and tested it on a number of unsupervised early stage visual perception tasks. Surprisingly, such shallow architectures achieved very good results in our experiments. The success of our new computational model makes it a working example of Turing's B-Type machine that can achieve comparable or better performance against the state-of-the-art algorithms.
... Yet, to correlate how a particular motion/configuration produces a sensory stimulus, additional associative properties must be considered. One common model for linking different brain areas based on shared activity patterns is the so-called Hebbian rule [4]. It states that if two neuronal regions are persistently activated together, the connection between them is strengthened; the connection is weakened if no simultaneous activity is present. ...
... Moreover, many of these areas are connected by synapses that develop connections based on their joint activity. Among these rules is the well-known Hebbian learning rule [4]. ...
Chapter
Full-text available
In this work, we present the development of a neuro-inspired approach for characterizing sensorimotor relations in robotic systems. The proposed method has self-organizing and associative properties that enable it to autonomously obtain these relations without any prior knowledge of either the motor (e.g. mechanical structure) or perceptual (e.g. sensor calibration) models. Self-organizing topographic properties are used to build both sensory and motor maps, and the associative properties then govern the stability and accuracy of the emerging connections between these maps. Compared to previous works, our method introduces a new varying-density self-organizing map (VDSOM) that controls the concentration of nodes in regions with large transformation errors without greatly affecting the computational time. A distortion metric is measured to achieve a self-tuning sensorimotor model that adapts to changes in either the motor or sensory models. The obtained sensorimotor maps prove to have less error than conventional self-organizing methods and potential for further development.
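The self-organizing ingredient behind the VDSOM can be illustrated with a generic one-dimensional SOM update; the paper's varying-density extension and distortion metric are not reproduced here, and all values below are illustrative.

```python
import numpy as np

# Generic 1-D self-organizing map (SOM): each sample pulls the
# best-matching node and its grid neighbors toward itself, with a
# decaying learning rate and neighborhood width. Illustrative sketch.
rng = np.random.default_rng(2)
n_nodes, dim = 30, 2
W = rng.uniform(0, 1, (n_nodes, dim))       # node weight vectors
data = rng.uniform(0, 1, (2000, dim))       # stand-in sensory samples

for t, x in enumerate(data):
    lr = 0.5 * np.exp(-t / 1000)            # decaying learning rate
    sigma = max(0.5, 5.0 * np.exp(-t / 1000))  # decaying neighborhood width
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
    d = np.abs(np.arange(n_nodes) - bmu)    # grid distance to the BMU
    h = np.exp(-(d ** 2) / (2 * sigma ** 2))
    W += lr * h[:, None] * (x - W)          # pull nodes toward the sample

qe = np.mean([np.sqrt(((W - x) ** 2).sum(axis=1).min()) for x in data])
print(f"mean quantization error: {qe:.3f}")
```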
... The network generates internal activity during the closed state. This SSM network was trained with the potential for long-latency synaptic connections (1–20). (B) Synaptic weight matrices for conduction delays (latencies) of 1–20 time steps. ...
... Top left: sensory input into a two-dimensional field (20×20) generates activity in 400 neurons arranged as shown. Right: synaptic weight matrices (for latencies 1–15). Each of these is a 400-by-400 matrix of synaptic weights (bottom left, for latency = 1). ...
Article
Full-text available
Here we consider the possibility that a fundamental function of sensory cortex is the generation of an internal simulation of sensory environment in real-time. A logical elaboration of this idea leads to a dynamical neural architecture that oscillates between two fundamental network states, one driven by external input, and the other by recurrent synaptic drive in the absence of sensory input. Synaptic strength is modified by a proposed synaptic state matching (SSM) process that ensures equivalence of spike statistics between the two network states. Remarkably, SSM, operating locally at individual synapses, generates accurate and stable network-level predictive internal representations, enabling pattern completion and unsupervised feature detection from noisy sensory input. SSM is a biologically plausible substrate for learning and memory because it brings together sequence learning, feature detection, synaptic homeostasis, and network oscillations under a single parsimonious computational framework. Beyond its utility as a potential model of cortical computation, artificial networks based on this principle have remarkable capacity for internalizing dynamical systems, making them useful in a variety of application domains including time-series prediction and machine intelligence.
... where the weight increment $\Delta w_{\ell,n}$ of neuron $\xi_\ell$ at time $n$ depends only on $a_{\ell,n}$ (the desired behavior of this neuron at this time) and on the preceding firing vector $a_{n-1}$, and perhaps also on the previous weights $w^{(n-1)}_{\ell}$ of this neuron. This mode of learning may be called quasi-Hebbian since the stated restrictions on $\Delta w_{\ell,n}$ essentially agree with those of Hebbian learning [21], except that the term "Hebbian" is normally reserved for unsupervised learning. The point of these restrictions is their suitability for hardware implementation, both biological and neuromorphic. ...
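A minimal sketch of such a strictly local, single-pass rule, under our own illustrative choices (±1 coding, outer-product update, sign readout; not the paper's exact formulation):

```python
import numpy as np

# Single-pass "quasi-Hebbian" memorization of a random firing sequence:
# the update of neuron l's weights at step n uses only its desired
# output a[n, l] and the previous firing vector a[n - 1].
rng = np.random.default_rng(1)
N, T = 200, 40                       # neurons, sequence length
a = rng.choice([-1.0, 1.0], (T, N))  # random target firing sequence
W = np.zeros((N, N))

for n in range(1, T):                # one pass through the data
    W += np.outer(a[n], a[n - 1])    # Delta w_{l,n} = a_{l,n} * a_{n-1}

# Recall: seed with the first pattern and iterate the trained network.
x, errors = a[0], 0
for n in range(1, T):
    x = np.where(W @ x >= 0, 1.0, -1.0)
    errors += int((x != a[n]).sum())
print(f"bit errors during recall: {errors} of {(T - 1) * N}")
```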
Preprint
This paper studies the capability of a recurrent neural network model to memorize random dynamical firing patterns by a simple local learning rule. Two modes of learning/memorization are considered: The first mode is strictly online, with a single pass through the data, while the second mode uses multiple passes through the data. In both modes, the learning is strictly local (quasi-Hebbian): At any given time step, only the weights between the neurons firing (or supposed to be firing) at the previous time step and those firing (or supposed to be firing) at the present time step are modified. The main result of the paper is an upper bound on the probability that the single-pass memorization is not perfect. It follows that the memorization capacity in this mode asymptotically scales like that of the classical Hopfield model (which, in contrast, memorizes static patterns). However, multiple-rounds memorization is shown to achieve a higher capacity (with a nonvanishing number of bits per connection/synapse). These mathematical findings may be helpful for understanding the functions of short-term memory and long-term memory in neuroscience.
... The winning objects are assumed to be categorized in a vSTM map of locations positioned in the posterior thalamus and particularly in the thalamic reticular nucleus. In line with Hebb (1949), for example, the NTVA assumes that the activity of the neurons representing the winning objects in visual cortices is sustained and reactivated by a feedback loop gated by the thalamic reticular nucleus (Bundesen et al., 2005). Given the critical role assigned to the posterior thalamus and visual cortices in NTVA, the structural connectivity between those two regions would be decisive for vSTM capacity (Bundesen et al., 2005), and any alterations in such connections would affect vSTM capacity. ...
Article
Aging impacts both visual short-term memory (vSTM) capacity and thalamo-cortical connectivity. According to the Neural Theory of Visual Attention, vSTM depends on the structural connectivity between posterior thalamus and visual occipital cortices (PT-OC). We tested whether aging modifies the association between vSTM capacity and PT-OC structural connectivity. To do so, 66 individuals aged 20-77 years were assessed by diffusion-weighted imaging used for probabilistic tractography and performed a psychophysical whole-report task of briefly presented letter arrays, from which vSTM capacity estimates were derived. We found reduced vSTM capacity and aberrant PT-OC connection probability in aging. Critically, age modified the relationship between vSTM capacity and PT-OC connection probability: in younger adults, vSTM capacity was negatively correlated with PT-OC connection probability, while in older adults, this association was positive. Furthermore, age modified the microstructure of PT-OC tracts, suggesting that the inversion of the association between PT-OC connection probability and vSTM capacity with aging might reflect age-related changes in white-matter properties. Accordingly, our results demonstrate that age-related differences in vSTM capacity link with the microstructure and connectivity of PT-OC tracts.
... Then, instead of creating a new record for incoming sensory information, the brain hardware would rather store it in the form of a specific (distributed) pattern of neurons placed on a pathway and linked to all other associated patterns of previously stored relevant concepts and memories. This is consistent with neuropsychological findings [2,6]. So, when new information arrives, it lights up all related neurons and pathways in a distributive process that is similar to a top-down action, where a concept/memory is broken up into related pieces. ...
Conference Paper
Full-text available
Yasar is an endowed professor and director of the CMST Institute at The College at Brockport, SUNY. He established the first undergraduate degree program in computational science in the United States, and his research interests include engineering and science education, computational pedagogy, fluid and particle dynamics, engine ignition modeling, and parallel computing. Yasar has a PhD in engineering physics and an MS in computer science from the University of Wisconsin-Madison. Abstract: We report on memory retrieval experiences to help students retrieve content they learned in class, retain it, and apply it in different contexts to solve novel problems. Supported by multi-year fall/spring professional development opportunities for teachers, these technological and pedagogical experiences range in complexity from simple electronic flashcards for basic retrieval strategies to low-stakes quizzes for spaced-out (initial exposure and retrieval effort are spaced out) and interleaved (two or more spaced-out topics are interleaved) practices. A sequential mixed-methods approach was used to collect quantitative data from a large number of participating teachers (N=180), followed by an enriched case study with a qualitative component to explore the meaning of the quantitative trends/findings in the first part of the study. Participants reported that they gained a greater understanding of the science behind the concept of interleaving, a greater understanding of how it can be implemented and tested in the classroom, and a higher level of confidence in the effectiveness of interleaving on knowledge retention than they had prior to training. While deployment of retrieval strategies in the classroom has been required of all participants, those who attended additional training in the summers (N=68) have also conducted Action Research to measure the effect of new strategies on learning. These teachers randomly selected control and target student groups within the same school, grade, and course environment. They also self-selected an area of content within their respective science disciplines or mathematics curriculum and created two different retrieval practices: a blocked practice that examines student knowledge and skills for applying a certain method to the solution of various questions on only one topic or type, and an interleaved practice that involves questions on two or more topics that need different methods to solve. Results from the first summer cohort (N=16) show that students who learned math and science topics through interleaved practices consistently scored 5-30% better than those who learned them in the more traditional blocked practice. In many cases, the differences were statistically significant (p < 0.05). While the second summer cohort (N=42) continues its action research, our future work will attempt to reduce confounding variables in research experiments and repeat them with more robust techniques and another level of memory retrieval strategy to help students not only recall what they learned in a classroom but also apply their content knowledge and computational skills to problem solving in a generative fashion beyond just answering multiple-choice questions.
... A person is able to feel and to decide on actions through prediction of these feelings [9]. This is learnt over time and experience through Hebbian learning [14]; however, this should be modulated by the level of maturity [15]. Based upon the literature mentioned above, we aim to design a temporal-causal network model (addressed in Section 3) that represents a real-time agent who is able to get angry, together with his or her possible responses to negative feedback. ...
Conference Paper
Full-text available
Social media is one of the most widely used channels for interpersonal communication, used to express feelings and thoughts through certain feedback. Blogs and e-commerce websites share plenty of such information, which serves as a valuable asset and is also used to make predictions. However, negative feedback can ruin the essence of such platforms, causing frustration among peers. This paper presents a computational network model of a humanoid agent who receives inappropriate feedback and learns to react with a level of competence over aggression. Tuning and evaluation of the model are done by performing simulation experiments based on public tweets and by mathematical analysis, respectively. This model can serve as an input to detect and handle aggression.
... They are "emergent" brain functions arising from the interaction of numerous neurons, several neural networks, and the neurodynamics of the brain's overall system. On the other hand, since the mid-20th century, neuroscientists have discovered that brain neurons possess perceptual and learning [14] and pattern-recognition [15] abilities, and are associated with feelings, learning, and consciousness at the higher levels of the brain system, i.e., there is a "neural correlate of consciousness" (NCC). ...
... A strength of the dynamic network approach is that it offers a method of representing language growth that minimally differs from the way language is actually used, which means the gap between theoretical construct and data is kept small. It also presents a way to ground linguistic representation in a medium that is psychologically plausible; for example, the usage-based proposal that frequently occurring patterns are stored together as templates or schemas can be grounded in the community structure of the network, which in turn can be grounded in the Hebbian learning principle that neurons that fire together wire together (Hebb, 1949; Lowel & Singer, 1992). Here we formally make this link between distributional learning, the schemas of usage-based theory, and the community structure of a network. ...
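As a toy illustration of this idea (our own construction, not the authors' model), one can build a word graph from bigram transition counts and look for community structure; the corpus and the community-detection algorithm below are illustrative choices.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Word graph from usage: edge weights are bigram (transition) counts.
# Communities then tend to form around the hub words of frequent
# slot-and-frame patterns. Toy corpus; illustrative sketch only.
corpus = [
    "where is the ball", "where is the dog", "where is daddy",
    "i want milk", "i want the ball", "i want juice",
    "more milk", "more juice", "the dog runs",
]
G = nx.DiGraph()
for utterance in corpus:
    words = utterance.split()
    for w1, w2 in zip(words, words[1:]):
        if G.has_edge(w1, w2):
            G[w1][w2]["weight"] += 1
        else:
            G.add_edge(w1, w2, weight=1)

# Community structure of the undirected projection: frames such as
# "where is X" and "i want X" cluster around their hub words.
for k, c in enumerate(greedy_modularity_communities(G.to_undirected(),
                                                    weight="weight")):
    print(f"community {k}: {sorted(c)}")
```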
Article
Full-text available
For languages to survive as complex cultural systems, they need to be learnable. According to traditional approaches, learning is made possible by constraining the degrees of freedom in advance of experience and by the construction of complex structure during development. This article explores a third contributor to complexity: namely, the extent to which syntactic structure can be an emergent property of how simpler entities - words - interact with one another. The authors found that when naturalistic child directed speech was instantiated in a dynamic network, communities formed around words that were more densely connected with other words than they were with the rest of the network. This process is designed to mirror what we know about distributional patterns in natural language: namely, the network communities represented the syntactic hubs of semi-formulaic slot-and-frame patterns, characteristic of early speech. The network itself was blind to grammatical information and its organization reflected (a) the frequency of using a word and (b) the probabilities of transitioning from one word to another. The authors show that grammatical patterns in the input disassociate by community structure in the emergent network. These communities provide coherent hubs which could be a reliable source of syntactic information for the learner. These initial findings are presented here as proof-of-concept in the hope that other researchers will explore the possibilities and limitations of this approach on a larger scale and with more languages. The implications of a dynamic network approach are discussed for the learnability burden and the development of an adult-like grammar.
... Researchers have proposed various theories to interpret these changes. The most influential theory is Spike-Timing-Dependent Plasticity (STDP), which describes changes of synaptic connections as a function of presynaptic and postsynaptic neuronal activities (Hebb, 1949; Caporale and Dan, 2008). Nowadays, increasing numbers of studies indicate that, in addition to neuronal activities, other factors can also modulate the changes of synaptic connections, such as neuromodulators and glia (Seol et al., 2007; Nadim and Bucher, 2014; Fremaux and Gerstner, 2015). ...
Article
Full-text available
The human brain is thought to be an extremely complex but efficient computing engine, processing vast amounts of information from a changing world. The decline in the synaptic density of neuronal networks is one of the most important characteristics of brain development, which is closely related to synaptic pruning, synaptic growth, synaptic plasticity, and energy metabolism. However, because of technical limitations in observing large-scale neuronal networks dynamically connected through synapses, how neuronal networks are organized and evolve as their synaptic density declines remains unclear. Here, by establishing a biologically reasonable neuronal network model, we show that, despite a decline in synaptic density, the connectivity and efficiency of neuronal networks can be improved. Importantly, by analyzing the degree distribution, we also find that both the scale-free characteristic of neuronal networks and the emergence of hub neurons rely on the spatial distance between neurons. These findings may promote our understanding of neuronal networks in the brain and have guiding significance for the design of neuronal network models.
... The TMS leads to a contralateral muscle contraction that can be measured in the form of a motor evoked potential (MEP). PAS is related to the Hebbian principle of activity-dependent long-term modification of synaptic plasticity (Hebb, 1949). Depending on the inter-stimulus interval and stimulation duration, PAS may induce either long-term potentiation (LTP)-like or long-term depression (LTD)-like effects. ...
Article
Full-text available
Transcranial magnetic stimulation (TMS) is a well-established tool for probing cortical plasticity in vivo. Changes in corticomotor excitability can be induced using the paired associative stimulation (PAS) protocol, in which TMS over the primary motor cortex is conditioned with an electrical peripheral nerve stimulation of the contralateral hand. PAS with an inter-stimulus interval of 25 ms induces long-term potentiation (LTP)-like effects in cortical excitability. However, the response to a PAS protocol tends to vary substantially across individuals. In this study, we used univariate and multivariate data-driven methods to investigate various previously proposed determinants of inter-individual variability in PAS efficacy, such as demographic, cognitive, clinical, neurophysiological, and neuroimaging measures. Forty-one right-handed participants, comprising 22 patients with amnestic mild cognitive impairment (MCI) and 19 healthy controls (HC), underwent the PAS protocol. Prior to stimulation, demographic, genetic, clinical, as well as structural and resting-state functional MRI data were acquired. The two groups did not differ in any of the variables except global cognitive status. Univariate analysis showed that only 61% of all participants were classified as PAS responders, irrespective of group membership. Higher PAS response was associated with lower TMS intensity and with higher resting-state connectivity within the sensorimotor network, but only in responders, as opposed to non-responders. We also found an overall positive correlation between PAS response and structural connectivity within the corticospinal tract, which did not differ between groups. A multivariate random forest (RF) model identified age, gender, education, IQ, global cognitive status, sleep quality, alertness, TMS intensity, genetic factors, and neuroimaging measures (functional and structural connectivity, gray matter (GM) volume, and cortical thickness) as poor predictors of PAS response. The model resulted in low accuracy of the RF classifier (58%; 95% CI: 42–74%), with a higher relative importance of brain connectivity measures compared to the other variables. We conclude that PAS variability in our sample was not well explained by factors known to influence PAS efficacy, emphasizing the need for future replication studies.
... Mechanistically, memories are formed by lasting changes in the strength of synaptic connections between neurons (Citri and Malenka, 2008). In 1949, Hebb postulated that, for two cells connected by an excitatory synapse, if the activation of one cell leads to the activation of the second one, the connection between the two cells is strengthened (Hebb, 1949). Long-lasting changes in synaptic strength can thus be caused by Hebbian learning. ...
Article
Full-text available
Emotional responses are not static but change as a consequence of learning. Organisms adapt to emotional events, and these adaptations influence the way we think, behave, and feel when we encounter similar situations in the future. Integrating recent work from rodent models and research on human psychopathology, this article lays out a model describing how affective events cause learning and can lead to anxiety and depression: affective events are linked to conditioned stimuli and contexts. Affective experiences entrain oscillatory synchrony across distributed neural circuits, including the prefrontal cortex, hippocampus, amygdala, and nucleus accumbens, which form associations that constitute the basis of emotional memories. Consolidation of these experiences appears to be supported by replay in the hippocampus, a process by which hippocampal firing patterns recreate the firing pattern that occurred previously. Generalization of learning occurs to never-before-experienced contexts when associations form across distinct but related conditioned stimuli. The process of generalization, which requires cortical structures, can cause memories to become abstracted. During abstraction, the latent, overlapping features of the learned associations remain and result in the formation of schemas. Schemas are adaptive because they facilitate the rapid processing of conditioned stimuli and prime behavioral, cognitive, and affective responses that are the manifestations of the accumulation of an individual's conditioned experiences. However, schemas can be maladaptive when the generalization of aversive emotional responses is applied to stimuli and contexts in which affective reactions are unnecessary. I describe how this process can lead to not only mood and anxiety disorders but also psychotherapeutic treatment.
... The theory of neural networks (TNN) (Hebb, 1949) claims that strongly connected cell assemblies will be formed between semantic memory and motor or somatosensory neural networks when neurons of semantic brain areas are frequently activated simultaneously with motor or somatosensory brain areas. For example, studies by Pulvermüller et al. (Pulvermüller, 1999; Pulvermüller & Fadiga, 2010) showed that the presentation of action words such as singing and throwing not only activated semantic memory networks but simultaneously also activated representations of the mouth or arm in the primary motor cortex, indicating that language might be embodied. ...
Article
Full-text available
Introduction: The wording used before and during painful medical procedures might significantly affect the painfulness and discomfort of the procedures. Two theories might account for these effects: the motivational priming theory (Lang, 1995, American Psychologist, 50, 372) and the theory of neural networks (Hebb, 1949, The organization of behavior. New York, NY: Wiley; Pulvermuller, 1999, Behavioral and Brain Sciences, 22, 253; Pulvermüller and Fadiga, 2010, Nature Reviews Neuroscience, 11, 351). Methods: Using fMRI, we investigated how negative, pain-related, and neutral words, presented as priming stimuli before the application of noxious stimuli, affect the cortical processing and pain ratings of the subsequent noxious stimuli. Results: Here, we show that both theories are applicable: stronger pain and stronger activation in several brain areas were observed in response to noxious stimuli preceded by negative and by pain-related words, respectively, as compared to those preceded by neutral words, thus supporting the motivational priming theory. Furthermore, pain ratings and activation in somatosensory cortices, primary motor cortex, premotor cortex, thalamus, putamen, and precuneus were even stronger for preceding pain-related than for negative words, supporting the theory of neural networks. Conclusion: Our results explain the influence of wording on pain perception and might have important consequences for clinical work.
... The basic concepts of artificial neural networks (ANNs) were proposed in the 1940s and 1950s [2]–[4], in particular the multilayer perceptron, a feedforward neural network with one hidden layer. As a scientific field, the theory of artificial neural networks was defined in the classic work of W. McCulloch and W. Pitts [3], which laid the foundations of two directions of neural network research. ...
Article
Full-text available
Based on the analysis of a series of feedforward artificial neural networks, a method has been developed for determining the optimal neural network architecture for the task of classifying cyanobacterial strains according to their fluorescence spectra. Six gradient methods for training neural networks and their parameters were analyzed, the optimal number of neurons in the hidden layer was found for neural networks trained by each of the methods, and various methods of initializing the neuron weights and of splitting the initial sample into training, test, and control samples were evaluated. The choice of the optimal architecture was made on the basis of the classification results, namely the classification accuracy and classification error curves. The research was conducted on the example of recognizing 16 classes representing 16 strains of cyanobacteria. A number of shortcomings were identified in the method of testing feedforward neural networks, and directions for further research on neural networks for classification, in terms of extending the testing methodology to their internal logic, were determined.
... Decay would be observable both behaviorally and, presumably, neurally. Originally, my conception was probably largely derived from a book I read in college, Hebb (1949), describing cell assemblies that underlie thoughts via reverberating neural circuits for concepts, presumably only until the circuit runs out of some physiological resources and activation collapses, making the representation dormant. In that conception, the cell assembly is an LTM concept that carries with it aLTM as an activated state of neural reverberation. ...
Article
Full-text available
Short-term memory (STM), the limited information temporarily in a state of heightened accessibility, includes just-presented events and recently retrieved information. Norris (2017) argued for a prominent class of theories in which STM depends on the brain keeping a separate copy of new information, and against alternatives in which the information is held only in a portion of long-term memory (LTM) that is currently activated (aLTM). Here I question premises of Norris' case for separate-copy theories in the following ways. (a) He did not allow for implications of the common assumption (e.g., Cowan, 1999; Cowan & Chen, 2009) that aLTM can include new, rapidly formed LTM records of a trial within an STM task. (b) His conclusions from pathological cases of impaired STM along with intact LTM are tenuous; these rare cases can be explained by impairments in encoding, processing, or retrieval related to LTM rather than passive maintenance. (c) Although Norris reasonably allowed structured pointers to aLTM instead of separate copies of the actual item representations in STM, the same structured pointers may well be involved in long-term learning. (d) Last, models of STM storage can serve as the front end of an LTM learning system rather than being separate. I summarize evidence for these premises and an updated version of an alternative theory in which storage depends on aLTM (newly clarified), and, embedded within it, information enhanced by the current focus of attention (Cowan, 1988, 1999), with no need for a separate STM copy.
... Research on RNN memory storage was influenced by the pioneering observations discussed by Hebb [21], which led to the Hebbian rule and the "Hebbian" approach to memory storage. This approach emphasizes the role of "synaptic plasticity" and is based on the concept that a neural network is born without connections, such that $J_{ij} = 0$ for all $i$ and $j$. ...
Article
Full-text available
In a neural network, an autapse is a particular kind of synapse that links a neuron onto itself. Autapses are almost never allowed, in either artificial or biological neural networks. Moreover, redundant or similar stored states tend to interact destructively. This paper shows how autapses, together with stable-state redundancy, can improve the storage capacity of a recurrent neural network. Recent research shows how, in an N-node Hopfield neural network with autapses, the number of stored patterns (P) is not limited to the well-known bound 0.14N, as it is for networks without autapses. More precisely, it describes how, as the number of stored patterns increases well over the 0.14N threshold, for P much greater than N, the retrieval error asymptotically approaches a value below unity. Consequently, the reduction of retrieval errors allows a number of stored memories which largely exceeds what was previously considered possible. Unfortunately, soon after, new results showed that, in the thermodynamic limit, given a network with autapses in this high-storage regime, the basin of attraction of the stored memories shrinks to a single state. This means that, for each stable state associated with a stored memory, even a single bit error in the initial pattern would lead the system to a stationary state associated with a different memory state. This thus limits the potential use of this kind of Hopfield network as an associative memory. This paper presents a strategy to overcome this limitation by improving the error-correcting characteristics of the Hopfield neural network. The proposed strategy allows us to form what we call an absorbing-neighborhood of states surrounding each stored memory. An absorbing-neighborhood is a set, defined by a Hamming distance around a network state, that is absorbing because, in the long-time limit, states inside it are absorbed by stable states in the set. We show that this strategy allows the network to store an exponential number of memory patterns, each surrounded by an absorbing-neighborhood of exponentially growing size.
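A minimal sketch of the Hebbian outer-product storage at issue, comparing recall with and without autapses (the diagonal of J); the sizes are illustrative and this does not reproduce the paper's high-storage analysis.

```python
import numpy as np

# Hebbian outer-product storage in a Hopfield network, with and without
# autapses (self-connections, the diagonal of J). Illustrative sketch.
rng = np.random.default_rng(5)
N, P = 100, 10
patterns = rng.choice([-1.0, 1.0], (P, N))
J = patterns.T @ patterns / N                 # Hebbian storage rule

def recall_overlap(J, pattern, flips=5, steps=20):
    """Corrupt a few bits, run the dynamics, return fraction recovered."""
    x = pattern.copy()
    x[rng.choice(N, flips, replace=False)] *= -1
    for _ in range(steps):
        x = np.where(J @ x >= 0, 1.0, -1.0)   # synchronous sign update
    return (x == pattern).mean()

J_no_autapse = J - np.diag(np.diag(J))        # classical choice: J_ii = 0
print("with autapses   :", recall_overlap(J, patterns[0]))
print("without autapses:", recall_overlap(J_no_autapse, patterns[0]))
```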
... Methods targeting all neurons within a region are not capable of this high resolution. Instead, learned associations are thought to be encoded by specific patterns of neurons, called neuronal ensembles, which are selected by cues and reinforcers during learning (Hebb, 1949). The technologies to target neuronal ensembles based on their activity have just become available (Garner et al., 2012; Cruz et al., 2013, 2015). ...
Article
Recent studies suggest that the ventral medial prefrontal cortex (vmPFC) encodes both operant drug self-administration and extinction memories. Here, we examined whether these opposing memories are encoded by distinct neuronal ensembles within the vmPFC with different outputs to the nucleus accumbens (NAc) in male and female rats. Using cocaine self-administration (3 h/d for 14 d) and extinction procedures, we demonstrated that vmPFC was similarly activated (indexed by Fos) during cocaine-seeking tests after 0 (no-extinction) or 7 extinction sessions. Selective Daun02 lesioning of the self-administration ensemble (no-extinction) decreased cocaine seeking, whereas Daun02 lesioning of the extinction ensemble increased cocaine seeking. Retrograde tracing with fluorescent cholera toxin subunit B injected into NAc combined with Fos colabeling in vmPFC indicated that vmPFC self-administration ensembles project to NAc core while extinction ensembles project to NAc shell. Functional disconnection experiments (Daun02 lesioning of vmPFC and acute dopamine D1-receptor blockade with SCH39166 in NAc core or shell) confirm that vmPFC ensembles interact with NAc core versus shell to play dissociable roles in cocaine self-administration versus extinction, respectively. Our results demonstrate that neuronal ensembles mediating cocaine self-administration and extinction comingle in vmPFC but have distinct outputs to the NAc core and shell that promote or inhibit cocaine seeking. SIGNIFICANCE STATEMENT: Neuronal ensembles within the vmPFC have recently been shown to play a role in self-administration and extinction of food seeking. Here, we used the Daun02 chemogenetic inactivation procedure, which allows selective inhibition of neuronal ensembles identified by the activity marker Fos, to demonstrate that different ensembles for cocaine self-administration and extinction memories coexist in the ventral mPFC and interact with distinct subregions of the nucleus accumbens.
... It specifies that the neurons interact with the surrounding neural extracellular matrix (nECM) and its dopants (trace metals and neurotransmitters (NTs)) to generate a biochemical neural code as "cognitive units of information" (cuinfo) within the nECM. Hebb (1949) enunciated a theory of "synaptic plasticity" as the basis of learning and memory, ascribed to the increased number and functionality of neural synaptic contacts forming a "reverberating circuit", which is still popular among neuroscientists (Kandel et al., 2012, 2014). Subsequently, Hebb's theory was accused of seven "sins", failing to address many issues critical to modeling neural memory (Arshavsky, 2006). ...
Article
Full-text available
In this paper, we address the enigma of the memory engram, the physical trace of memory, in terms of its composition, processes, and location. A neurochemical approach assumes that neural processes hinge on the same terms used to describe the biochemical functioning of other biological tissues and organs. We define a biochemical process, a tripartite mechanism involving the interactions of neurons with their neural extracellular matrix, trace metals, and neurotransmitters, as the basis of a biochemical memory engram. The latter inextricably links physiological responses, including sensations, with affective states such as emotions.
... Environmental enrichment (EE) refers to refined conditions for housing animals, which result in enhanced motor, social, sensory, and cognitive performance (Nithianantharajah and Hannan, 2006). In the 1940s, Donald Hebb used EE and showed that rats raised in his home had superior problem-solving abilities compared to laboratory-raised rats (Hebb, 1947, 1949). In addition, EE has been reported to improve motor performance when assessed with assays such as rotarod, eyeblink conditioning, grid walking, rope suspension, footfault, and walk initiation tests (Madroñal et al., 2010; Horvath et al., 2013; Lee et al., 2013). ...
Article
Full-text available
Environmental enrichment for rodents is known to enhance motor performance. Structural and molecular changes have been reported to be coupled with an enriched environment, but functional alterations of single neurons remain elusive. Here, we compared mice raised under control conditions with mice raised in an enriched environment. We tested motor performance on a rotarod and subsequently performed whole-cell patch-clamp recordings in cerebellar slices, focusing on granule cells of lobule IX, which is known to receive vestibular input. Mice raised in an enriched environment were able to remain on an accelerating rotarod for a longer period of time. Electrophysiological analyses revealed normal passive properties of granule cells and a functional adaptation to the enriched environment, manifested in faster action potentials (APs) with a more depolarized voltage threshold and larger AP overshoot. Furthermore, the maximal firing frequency of APs was higher in mice raised in an enriched environment. These data show that an enriched environment causes specific alterations in the biophysical properties of neurons. Furthermore, we speculate that the ability of cerebellar granule cells to generate higher firing frequencies improves motor performance.
... gradient descent) [10], reinforcement learning [11], and Hebbian learning [12]. These models yield an exponentially discounted influence of past trials, which explains the inverted-V pattern common to many 2AFC experiments (as in Figure 1a). Similarly, models from optimal control theory for tracking nonstationary environments, such as the Kalman filter [13], also produce exponential decay. ...
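The exponential discounting shared by these models can be made explicit in a few lines; the learning rate below is an illustrative assumption.

```python
import numpy as np

# Under the recency-weighted update x <- x + alpha * (s - x), the
# influence of a trial k steps in the past falls off as
# alpha * (1 - alpha)**k, i.e., exponentially. Illustrative sketch.
rng = np.random.default_rng(3)
stimuli = rng.choice([0.0, 1.0], 500)   # e.g., left/right trials in 2AFC
alpha, estimate = 0.3, 0.5              # learning rate, initial belief

for s in stimuli:
    estimate += alpha * (s - estimate)  # exponentially discounted average

print("final estimate:", round(estimate, 3))
print("influence of trial k back:",
      [round(alpha * (1 - alpha) ** k, 4) for k in range(6)])
```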
Article
Full-text available
As we perform daily activities (driving to work, unlocking the office door, grabbing a coffee cup), our actions seem automatic and preprogrammed. Nonetheless, routine, well-practiced behavior is continually modulated by incidental experience: in repetitive experimental tasks, recent (~4) trials reliably influence performance and action choice. Psychological theories downplay the significance of sequential effects, explaining them as rapidly decaying perturbations of behavior with no long-term consequences. We challenge this traditional perspective in two studies designed to probe the impact of more distant experience, finding evidence for effects spanning up to a thousand intermediate events. We present a normative theory in which these persistent effects reflect optimal adaptation to a dynamic environment exhibiting varying rates of change. The theory predicts a heavy-tailed decaying influence of past experience, consistent with our data, and suggests that individual incidental experiences are catalogued in a temporally extended memory utilized to optimize subsequent behavior.
... For every classification use, it aims to find a reasonable fit that establishes a relationship between the presence or absence of the proposed target event and its key factors. Finally, it calculates the results by developing a linear equation in which a weight is multiplied by each conditioning factor [59]. The multi-layer perceptron (MLP) is the most common form of ANN, the idea of which was first designed in 1943 [60]. The MLP is capable of discovering the non-linear relationship between the proposed variables. ...
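A minimal sketch of that logistic-regression step, on synthetic stand-in data (the real study's 266 records and fifteen conditioning factors are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Logistic regression assigns one learned weight per conditioning
# factor in a linear equation; the logistic of that sum estimates the
# probability of the target event. Synthetic data, illustrative only.
rng = np.random.default_rng(4)
X = rng.normal(size=(266, 15))          # 15 stand-in conditioning factors
true_w = rng.normal(size=15)            # hidden "true" factor weights
y = (X @ true_w > 0).astype(int)        # synthetic landslide / no-landslide

model = LogisticRegression(max_iter=1000).fit(X, y)
print("per-factor weights:", np.round(model.coef_[0], 2))
print("training accuracy :", model.score(X, y))
```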
Article
Full-text available
The present study aims to assess the superiority of metaheuristic evolutionary techniques over conventional machine learning classification techniques for landslide occurrence estimation. To evaluate and compare the applicability of these metaheuristic algorithms, a real-world problem of landslide assessment (including 266 records and fifteen landslide conditioning factors) is selected. In the first step, seven of the most common traditional classification techniques are applied. Then, after introducing the elite model, it is optimized using six state-of-the-art metaheuristic evolutionary techniques. The results show that applying the proposed evolutionary algorithms effectively increases the prediction accuracy from 81.6% to the range 87.8–98.3% and the classification ratio from 58.3% to the range 60.1–85.0%.
Article
Our understanding of concepts can differ depending on the modality — such as vision, text or speech — through which we learn this concept. A recent study uses computational modelling to demonstrate how conceptual understanding aligns across modalities.
Article
The key question behind this article is: How is AI impacting futures research, with a particular focus on foresight and design futures? To address this question, this article reports on selected epistemological and methodological traits of design futures research, on the nature of Artificial Intelligence (AI), and on the potential impact of AI on the design futures domain, offering a conceptual model as a theoretical junction between the domains of foresight and design. Key topics and themes addressed were selectively focused on foresight, design, design futures, weak signals, and their role in relation to visioning and the image, AI, and a multidisciplinary discussion of how these fields relate to each other. This article is based on desk research and dialogue between the two authors, both professional experts in their respective fields of design futures and AI, with a track record of experience in both academic and applied research. Therefore, sources include a rich texture of bibliographic information, combined with selected input from ongoing research activities on AI and related trends, as performed in the professional field.
Article
Full-text available
In Caenorhabditis elegans, optogenetic stimulation has been widely used to assess neuronal function, control animal movement, or assay circuit responses to controlled stimuli. Most studies are performed on single animals and require high-end components such as lasers and shutters. We present an accessible platform that enables controlled optogenetic stimulation of C. elegans in two modes: single animal stimulation with locomotion tracking and entire population stimulation for neuronal exercise regimens. The system consists of accessible electronic components: a high-power light-emitting diode, Arduino board, and relay are integrated with MATLAB to enable programmable optogenetic stimulation regimens. This system provides flexibility in optogenetic stimulation in freely moving animals while providing quantitative information of optogenetic-driven locomotion responses. We show the applicability of this platform in single animals by stimulation of cholinergic motor neurons in C. elegans and quantitative assessment of contractile responses. In addition, we tested synaptic plasticity by coupling the entire-population stimulation mode with measurements of synaptic strength using an aldicarb assay, where clear changes in synaptic strength were observed after regimens of neuronal exercise. This platform is composed of inexpensive components, while providing the illumination strength of high-end systems, which require expensive lasers, shutters, or automated stages. This platform requires no moving parts but provides flexibility in stimulation regimens.
Article
Full-text available
This paper analyzes the rapid and unexpected rise of deep learning within Artificial Intelligence and its applications. It tackles the possible reasons for this remarkable success, providing candidate paths towards a satisfactory explanation of why it works so well, at least in some domains. A historical account is given of the ups and downs which have characterized neural network research and its evolution from “shallow” to “deep” learning architectures. A precise account of “success” is given, in order to sieve out aspects pertaining to marketing or the sociology of research; the remaining aspects seem to certify a genuine value of deep learning that calls for explanation. The two factors most often alleged to propel deep learning, namely computing hardware performance and neuroscience findings, are scrutinized and evaluated as relevant but insufficient for a comprehensive explanation. We review various attempts that have been made to provide mathematical foundations able to justify the efficiency of deep learning, and we deem this the most promising road to follow, even if the current achievements are too scattered and pertain only to very limited classes of deep neural models. The authors’ take is that most of what explains why deep learning works at all, and even very well, across so many domains of application is still to be understood, and further research addressing the theoretical foundations of artificial learning is still very much needed.
Article
Recent advances in neural network (NN) and machine learning algorithms have sparked a wide array of research in specialized hardware, ranging from high-performance NN accelerators for use inside server systems to energy-efficient edge computing systems. While most of these studies have focused on designing inference engines, implementing the training process of an NN on energy-constrained mobile devices has remained a challenge due to the requirement of higher numerical precision. In this article, we aim to build an on-chip learning system that achieves highly energy-efficient training for NNs without degradation in performance on machine learning tasks. To achieve this goal, we adapt and optimize a neuromorphic learning algorithm and propose hardware design techniques that fully exploit the properties of the modifications. We verify that our system achieves energy-efficient training with only 7.5% more energy consumption than its highly efficient inference of 236 nJ/image on handwritten digit images from the Modified National Institute of Standards and Technology database (MNIST). Moreover, our system achieves 97.83% classification accuracy on the MNIST test set, which outperforms prior neuromorphic on-chip learning systems and is close to the performance of backpropagation, the conventional method for training deep NNs.
Article
We present the design for a bidirectional coherent optical Rectifying Linear Unit (ReLU) device capable of phase thresholding, appropriately rectifying forward-propagating optical-neuron activity, and gating optically back-propagating error, which will enable the construction of an all-optical deep learning system. The ReLU device can be fabricated in large arrays using high-speed liquid-crystal-on-Silicon (LCoS) smart-pixel technology capable of implementing arrays of feature planes needed for convolutional neural networks. Interferometric detection of the phase of a split-off fraction of the forward-propagating neuron input is used to set the state of the bidirectional switch and gate both the rest of the forward-propagating neuron input as well as the back-propagating error signals. We show how the array of convolutional adaptive interconnections needed for deep learning can be physically implemented and learned in an all-optical multistage dynamic holographically-interconnected architecture using lenslet arrays addressing thick Fourier-plane dynamic holograms. This optical architecture is self-aligned, phase-calibrated, and aberration compensated by using phase-conjugate mirrors to record the dynamic-holographic interconnections in each layer. This system has the potential to achieve a computational throughput approaching that of supercomputer clusters at a much lower energy cost by synergistically combining the analog computational properties of coherent Fourier optics with the hardware fault-tolerance provided by error-driven deep-learning algorithms.
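Stripped of the optics, the bidirectional device implements the standard ReLU rule: the forward pass records which units were above threshold, and that same switch state gates the back-propagating error. A minimal numpy sketch of that rule (the coherent optics, LCoS, and holographic machinery are abstracted away entirely):

```python
import numpy as np

def relu_forward(x):
    """Forward pass: rectify neuron activity; remember which units were open."""
    gate = x > 0.0                 # state of the bidirectional switch per unit
    return x * gate, gate

def relu_backward(grad_out, gate):
    """Backward pass: the same switch state gates the back-propagating error."""
    return grad_out * gate

x = np.array([-1.2, 0.3, 2.0, -0.1])
y, gate = relu_forward(x)          # y = [0, 0.3, 2.0, 0]
err = np.array([0.5, -0.4, 0.1, 0.9])
print(relu_backward(err, gate))    # [0, -0.4, 0.1, 0]: error passes only where the unit fired
```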
Chapter
Subspace learning techniques project high-dimensional data onto low-dimensional spaces. They are typically unsupervised. Well-known subspace learning algorithms are PCA, ICA, locality-preserving projection, and NMF. Discriminant analysis is a supervised subspace learning method and uses the data class label information. PCA is a classical statistical method for signal processing and data analysis. It is a feature extractor in the neural network processing setting, and is related to eigenvalue decomposition and singular value decomposition. This chapter introduces PCA, and the associated methods such as minor component analysis, generalized eigenvalue decomposition, singular value decomposition, factor analysis, and canonical correlation analysis.
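As the chapter notes, PCA is closely tied to eigenvalue decomposition and singular value decomposition. A compact numpy sketch of PCA computed via SVD (illustrative, not the chapter's own code):

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                    # right singular vectors = PC directions
    scores = Xc @ components.T             # low-dimensional representation
    explained = S[:k] ** 2 / (len(X) - 1)  # variance along each component
    return scores, components, explained

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])  # correlated data
scores, comps, var = pca(X, k=1)
print("first PC direction:", comps, "explained variance:", var)
```

The eigenvalues of the sample covariance matrix are exactly S**2/(n-1), which is why the SVD route and the eigendecomposition route agree.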
Article
This paper proposes a design method for an interval type-2 wavelet cerebellar model articulation controller (IT2WCMAC) and applies it to the control of uncertain nonlinear systems. The proposed scheme incorporates an IT2WCMAC as the main controller to mimic an ideal controller, and a robust compensator to eliminate the approximation error between the IT2WCMAC and the ideal controller. A self-evolving algorithm automatically constructs the network structure from a blank rule base. In the proposed control scheme, the steepest-descent gradient algorithm is applied to tune the network parameters online, and a Lyapunov stability theorem is applied to guarantee the system’s stability. Moreover, the learning rates of the parameter adaptive laws can be optimized by particle swarm optimization (PSO) to promote parameter learning efficiency. Finally, the proposed control system is applied to nonlinear chaotic systems to verify the control performance and to show the superiority of the proposed algorithm over other control methods.
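The controller itself is too involved for a short sketch, but the PSO ingredient (searching learning-rate space to minimize a cost) is easy to isolate. Below is a minimal, generic PSO in Python; the surrogate cost function is an invented stand-in, since the real cost would come from simulating the closed-loop plant.

```python
import numpy as np

rng = np.random.default_rng(2)

def surrogate_cost(lr):
    """Hypothetical stand-in for closed-loop tracking error as a function of
    two learning rates; the real cost would require simulating the plant."""
    return (lr[0] - 0.05) ** 2 + 4 * (lr[1] - 0.01) ** 2

n, dim = 20, 2
pos = rng.uniform(0.0, 0.2, (n, dim))       # particle positions (candidate learning rates)
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_cost = np.array([surrogate_cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social weights
for it in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 0.2)      # keep rates in a plausible range
    cost = np.array([surrogate_cost(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("tuned learning rates:", gbest)       # should approach (0.05, 0.01)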
Book
Full-text available
Neuroscience comprises a set of scientifically grounded disciplines that share a common goal: to understand how the nervous system perceives, interprets, processes, and finally acts on the world. The nervous system possesses a series of characteristics, acquired over millions of years of evolution, that help it generate rapid actions to escape certain dangers, but it also has the capacity to learn and to adapt to the environment in which it is embedded. In the case of human beings, that environment is society, and the impact of life in society on the nervous system is therefore part of the study of the neurosciences.
Article
Full-text available
Hebbian learning of excitatory synapses plays a central role in storing activity patterns in associative memory models. Interstimulus Hebbian learning associates multiple items by converting temporal correlation to spatial correlation between attractors. Growing evidence suggests the importance of inhibitory plasticity in memory processing, but the consequence of such regulation in associative memory has not been understood. Noting that Hebbian learning of inhibitory synapses yields an anti-Hebbian effect, we show that the combination of Hebbian and anti-Hebbian learning can significantly increase the span of temporal association between correlated attractors as well as the sensitivity of these states to external input. Furthermore, these effects are regulated by changing the ratio of local and global recurrent inhibition after learning weights for excitation-inhibition balance. Our results suggest a nontrivial role of plasticity and modulation of inhibitory circuits in associative memory.
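A rough feel for this setup can be had from a Hopfield-style toy model: Hebbian excitatory terms store a sequence of items, interstimulus (cross-item) Hebbian terms convert temporal correlation into spatial correlation between attractors, and a uniform inhibitory term crudely stands in for the paper's plastic inhibition, whose net effect is anti-Hebbian. This is a loose sketch under invented parameters, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 300, 6
xi = rng.choice([-1.0, 1.0], size=(P, N))     # items presented in temporal sequence

a_auto, a_cross, g_inh = 1.0, 0.4, 0.2
W = a_auto * xi.T @ xi / N                    # Hebbian storage of each item
W += a_cross * (xi[:-1].T @ xi[1:] + xi[1:].T @ xi[:-1]) / N  # interstimulus association
W -= g_inh / N                                # uniform inhibition: crude anti-Hebbian stand-in
np.fill_diagonal(W, 0.0)

def settle(s, steps=30):
    """Synchronous sign-threshold dynamics until (approximately) a fixed point."""
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s

s = settle(xi[2].copy())
overlaps = xi @ s / N
print(np.round(overlaps, 2))   # largest overlap with item 2, smaller with neighbours 1 and 3
```

The cross-term makes the retrieved attractor overlap its sequence neighbours, which is the temporal-to-spatial conversion the abstract describes; tuning the inhibition strength then trades off how far that association spreads.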
Chapter
In this work we introduce a new approach to learning the synaptic weights of biological neural networks. To this end, we consider networks of leaky integrate-and-fire neurons and model them as timed automata networks. Each neuron receives input spikes through its incoming synapses (modelled as channels) and computes its membrane potential value according to the (present and past) received inputs. Whenever the potential value exceeds a given firing threshold, the neuron emits a spike (modelled as a broadcast signal over the output channel of the corresponding automaton). After each spike emission, the neuron first enters an absolute refractory period, in which signal emission is not allowed, and then a relative refractory period, in which the firing threshold is higher than usual. Neural networks are modelled as sets of timed automata running in parallel and sharing channels in compliance with the network structure. Such a formal encoding allows us to propose an algorithm that automatically infers the synaptic weights of neural networks such that a given dynamical behaviour is displayed. Behaviours are encoded as temporal logic formulae, and the algorithm modifies the network weights until an assignment satisfying the specification is found.
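The neuron being encoded is a standard leaky integrate-and-fire unit with absolute and relative refractory periods. Here is a plain numerical simulation of one such neuron in Python; the timed-automata encoding and the weight-inference loop are the chapter's contribution and are not reproduced, and all constants below are illustrative.

```python
import numpy as np

dt, T = 0.1, 200.0                 # time step and duration (ms)
tau, v_rest, v_reset = 20.0, 0.0, 0.0
theta0, theta_hi = 1.0, 1.5        # usual vs. elevated firing threshold
t_abs, t_rel = 2.0, 5.0            # absolute / relative refractory durations (ms)

v, last_spike = v_rest, -np.inf
spikes = []
rng = np.random.default_rng(4)

for step in range(int(T / dt)):
    t = step * dt
    I = 0.08 + 0.05 * rng.random()            # noisy input current
    if t - last_spike < t_abs:
        v = v_reset                           # absolute refractory: no integration
        continue
    # relative refractory: threshold is higher than usual for a while
    theta = theta_hi if t - last_spike < t_abs + t_rel else theta0
    v += dt / tau * (v_rest - v) + I * dt     # leaky integration of the input
    if v >= theta:                            # threshold crossing -> spike
        spikes.append(t)
        v, last_spike = v_reset, t

print(f"{len(spikes)} spikes, first few at (ms): {np.round(spikes[:5], 1)}")
```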
Book
Full-text available
D.A. Rachkovskij. Codevectors: Sparse Binary Distributed Representations of Numerical Data. Kiev: Interservice, 2019. 200 p. (In Russian.) The monograph is devoted to methods and algorithms for the formation of codevectors: sparse binary vector representations with an adjustable fraction of non-zero components. The codevectors considered are formed from initial real-valued numerical vectors without training. Codevectors can be used in machine learning for nonlinear classification and approximation, for estimating similarity measures and for similarity search, among other tasks. The codevector format makes it possible to use them efficiently in matrix-type auto-associative memory, associative-projective neural networks, and other algorithms specialized for this format. Fast processing of codevectors can be performed using the computing infrastructure of search engines and specialized computational hardware, such as associative-projective neurocomputers. The book is intended for scientific and technical workers, programmers, graduate students, students, and readers interested in neural-network distributed data representations and new promising areas of computer science and artificial intelligence.
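The monograph's specific constructions are not reproduced here, but the format itself (a long binary vector with a fixed, small fraction of ones, formed from a real-valued input without training) can be illustrated generically. In the Python sketch below, a shared random projection followed by top-k thresholding yields similarity-preserving sparse binary codes; this is an assumed, generic construction, not Rachkovskij's algorithm.

```python
import numpy as np

def codevector(x, dim=2000, density=0.02, seed=0):
    """Map a real-valued vector x to a sparse binary codevector.
    Random projection + top-k thresholding fixes the fraction of
    non-zero components at `density`. (Generic sketch, not the
    monograph's specific construction.)"""
    rng = np.random.default_rng(seed)      # fixed seed -> same projection for all x
    R = rng.normal(size=(dim, len(x)))
    z = R @ np.asarray(x, dtype=float)
    k = int(density * dim)
    code = np.zeros(dim, dtype=np.uint8)
    code[np.argsort(z)[-k:]] = 1           # keep the k largest projections
    return code

a = codevector([0.2, 1.0, -0.5])
b = codevector([0.25, 0.9, -0.5])          # similar input
c = codevector([-1.0, 0.1, 0.8])           # dissimilar input
print("overlap(similar):", int((a & b).sum()),
      "overlap(dissimilar):", int((a & c).sum()))
```

Because all inputs share one projection matrix, nearby inputs activate overlapping sets of ones, which is what makes the format usable for similarity estimation and associative memory.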
Article
Full-text available
Simulating biological synapses with electronic devices is a re-emerging field of research. It is widely recognized as the first step toward building brain-like computers and artificial intelligence systems in hardware. Thus far, different types of electronic devices have been proposed to mimic synaptic functions. Among them, transistor-based artificial synapses have the advantages of good stability, relatively controllable testing parameters, and a clear operation mechanism, and they can be constructed from a variety of materials. In addition, they can perform concurrent learning, in which synaptic weight updates are carried out without interrupting the signal transmission process. Synergistic control of one device can also be implemented in a transistor-based artificial synapse, which opens up the possibility of developing robust neural networks with significantly fewer neural elements. These unique features make transistor-based artificial synapses more suitable for emulating synaptic functions than other types of devices. However, their development is still in its very early stages. This article therefore reviews recent advances in transistor-based artificial synapses in order to provide a guideline for the future implementation of synaptic functions with transistors, and presents the main challenges and research directions of the field.
Article
Full-text available
Cortical gamma rhythms are involved in the transmission of information between brain areas believed to be implicated in the pathogenesis of cognitive dysfunctions. Trace amines are a group of endogenous biogenic amines known to modulate the function of classical monoamines, such as dopamine. To evaluate the potential modulatory influence of Trace Amine-Associated Receptor 5 (TAAR5), a specific receptor for trace amines, on the dopamine system, we used HPLC measurements of dopamine and its metabolites in the mouse striatum following administration of the putative TAAR5 agonist α-NETA. Administration of α-NETA significantly modulated the dopaminergic system, as evidenced by an altered dopamine turnover rate in the striatum. To evaluate the potential modulatory influence of TAAR5 on the rat brain gamma rhythm, we then investigated changes in electrocorticogram (ECoG) spectral power in the gamma-frequency range (40–50 Hz) following administration of α-NETA, and analyzed changes in the spatial synchronization of gamma oscillations using multichannel ECoG recordings. Significant complex changes were observed in the ECoG spectrum after α-NETA administration, including increases in spectral power in the delta (1 Hz), theta (7 Hz), and gamma (40–50 Hz) ranges. Furthermore, a decrease in the spatial synchronization of 40–50 Hz gamma oscillations and an increase for 7 Hz theta oscillations were detected. In conclusion, the putative TAAR5 agonist α-NETA can modulate striatal dopamine transmission and cause significant alterations in the gamma rhythm of brain activity in a manner consistent with schizophrenia-related deficits described in humans and experimental animals. These observations suggest a role for TAAR5 in the modulation of cognitive functions affected in brain pathologies.
Chapter
This book presents the new field of research of machine perception (MU), placed in the broader context of research in machine understanding.
Chapter
Full-text available
Many neural networks, ranging from in vitro cell cultures to the neocortex in vivo, exhibit bursts of activity (“neuronal avalanches”) with size and duration distributions characterized by power laws. The exponents of these power laws point to a critical state in which network connectivity is such that, on average, activity neither dies out nor explodes, a condition that optimizes information processing. Various neural properties, including short- and long-term synaptic plasticity, have been proposed to underlie criticality. Reviewing several model studies, here we show that during development, activity-dependent neurite outgrowth, a form of homeostatic structural plasticity, can build critical networks. In the models, each neuron has a circular neuritic field, which expands when the neuron’s average electrical activity is below a homeostatic set-point and shrinks when it is above the set-point. Neurons connect when their neuritic fields overlap. Without any external input, the initially disconnected neurons organize themselves into a connected network, in which all neurons attain the set-point level of activity. Both numerical and analytical results show that in this equilibrium configuration, the network is in a critical state, with avalanche distributions described by precisely the same power laws as observed experimentally. Thus, in building critical networks during development, homeostatic structural plasticity can lay down the basis for optimal network function in adulthood.
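The core mechanism described here (a circular neuritic field that expands while activity is below a homeostatic set-point and shrinks while above it, with connectivity given by field overlap) is simple enough to sketch. The Python toy below uses an invented saturating activity proxy in place of the models' electrical dynamics, so it illustrates only the homeostatic outgrowth loop, not the avalanche or criticality analysis.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 60
xy = rng.uniform(0, 10, (N, 2))        # fixed positions of neurons on a plane
r = np.full(N, 0.1)                    # initial neuritic-field radii
set_point = 0.5                        # homeostatic target activity
eps = 0.05                             # outgrowth rate

def overlap_matrix(r):
    """Connection strength = overlap of circular neuritic fields."""
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
    A = np.clip(r[:, None] + r[None, :] - d, 0.0, None)
    np.fill_diagonal(A, 0.0)
    return A

for step in range(3000):
    A = overlap_matrix(r)
    # crude activity proxy: saturating function of total connection strength
    # (an assumption; the original models simulate membrane dynamics)
    activity = np.tanh(A.sum(axis=1))
    # homeostasis: grow while below the set-point, shrink while above it
    r = np.clip(r + eps * (set_point - activity), 0.01, None)

print("mean activity:", np.tanh(overlap_matrix(r).sum(axis=1)).mean())  # ~= set_point
```

Starting from disconnected neurons, the radii grow until overlaps push every neuron's activity to the set-point, reproducing the self-organized equilibrium the chapter analyzes.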
Preprint
Full-text available
The brain turns signals into a physical response through a cascade of streaming transformations. These transformations occur over networks that have been described in anatomical, cyto- and myeloarchitectonic, and functional research. The totality of these networks was modelled and synthesised in phases across a continuous time-space-function axis, through ascending and descending hierarchical levels of association via travelling netwaves, where localised disorders might spread through neighbouring tissues. This study quantified the model empirically with time-resolved functional magnetic resonance imaging of an imperative, visually triggered, self-delayed, and therefore double-event-related response task. The resulting time series unfold, in the range of slow cortical potentials, the temporal integrity of a cortical pathway from the source of perception to the mouth of reaction, passing in and out of known functional, anatomical and cytoarchitectonic networks. These pathways are consolidated in phase images described by a small vector matrix, which leads to a massive simplification of causal and connectivity modelling and even to simple technical applications.
Article
Full-text available
Nerve growth factor (NGF) is an essential neurotrophic factor for the development and maintenance of the central and the peripheral nervous system. NGF deficiency in the basal forebrain precedes degeneration of basal forebrain cholinergic neurons in Alzheimer's disease, contributing to memory decline. NGF mediates neurotrophic support via its high‐affinity receptor, the tropomyosin‐related kinase A (TrkA) receptor, and mediates mitogenic and differentiation signals via the extracellular signal‐regulated protein kinases 1 and 2 (ERK1/2). However, the molecular mechanisms underlying the different NGF/TrkA/ERK signalling pathways are far from clear. In this study, we have investigated human NGF and three NGF mutants, R100E, W99A and K95A/Q96A, assessing their ability to activate TrkA or ERK1/2 and to induce proliferation or differentiation in human foetal dorsal root ganglion (DRG) neurons and in PC12 cells. We show that the R100E mutant was significantly more potent than NGF itself in inducing proliferation and differentiation, and significantly more potent in activating ERK1/2 in DRG neurons. The W99A and K95A/Q96A mutants, on the other hand, were less effective than the wild‐type protein. An unexpected finding was the high efficacy of the K95A/Q96A mutant in activating TrkA and inducing differentiation of DRG neurons at elevated concentrations. These data demonstrate an NGF mutant with improved neurotrophic properties in primary human neuronal cells. The R100E mutant thus represents an interesting candidate for further drug development in Alzheimer's disease and other neurodegenerative disorders.
Article
Full-text available
An underlying bias of contemporary cognitive science is that the brain and nervous system are in the business of carrying out computations and building representations. Gibson’s ecological approach, in contrast, is decidedly noncomputational and nonrepresentational. How, then, are we to construe the role of brain and nervous system? We consider this question against the backdrop of evidence for rich achievements in perception and action by agents without brains or nervous systems. If fundamental coordination of perception and action does not require a neural substrate, then what value is added in having one? And if the neural substrate is not in the representational–computational business, then what business is it in? We pursue answers grounded in the constraints of macroscopic, multicellular life and thermodynamics.
Article
Full-text available
This paper investigates the effect of the high frequency of occurrence of a verb in a syntactic frame on speakers’ selection of that syntactic frame for other verbs. We hypothesize that the frequent co-occurrence of a syntactic frame and a particular verb (what we call an anchor verb) leads to a strong association between the verb and the frame analogous to the relationship between a category and its best exemplar. Our Verb Anchor Hypothesis claims that verbs that are more semantically similar to the anchor are more likely to occur in that syntactic frame than verbs that are less semantically similar to the anchor. We tested the Verb Anchor Hypothesis on the dative alternation, which involves the meaning-preserving ditransitive and prepositional frames. A corpus study determined that give was the anchor verb for the ditransitive frame. We then examined whether high semantic similarity to give increases the likelihood of an alternating verb (e.g. to hand) occurring in the ditransitive frame (Mary handed the boy a book) rather than in the prepositional frame (Mary handed a book to the boy). The results of several logistic regression analyses show that semantic similarity to give makes a unique contribution to predicting the choice of the ditransitive frame, over and above other factors known to affect syntactic frame selection. Additional analyses suggest that the Verb Anchor Hypothesis might also hold for more narrowly-defined subclasses of alternating verbs.
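The statistical setup (a logistic regression predicting frame choice, with semantic similarity to the anchor as one predictor among controls) can be sketched on synthetic data. Everything below is invented for illustration: the predictor names, the coefficients, and the data; only the regression machinery mirrors the kind of analysis reported.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
sim_to_give = rng.uniform(0, 1, n)       # semantic similarity to the anchor verb
recip_pronoun = rng.integers(0, 2, n)    # a made-up control predictor
# Synthetic "truth": similarity to give raises the odds of the ditransitive frame.
logit = -1.0 + 2.5 * sim_to_give + 0.8 * recip_pronoun
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Fit the logistic regression by plain gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), sim_to_give, recip_pronoun])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.01 * X.T @ (y - p) / n        # gradient of the log-likelihood is X^T(y - p)

print("intercept, similarity, control:", np.round(w, 2))  # similarity coef near 2.5
```

A unique contribution of similarity, as the paper reports, would show up as a similarity coefficient that stays reliably positive after the control predictors are included.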
Article
Full-text available
Smith and Church (2018) present a “testimonial” review of dissociable learning processes in comparative and cognitive psychology, by which we mean they include only the portion of the available evidence that is consistent with their conclusions. For example, they conclude that learning the information-integration category-learning task with immediate feedback is implicit, but do not consider the evidence that people readily report explicit strategies in this task, nor that this task can be accommodated by accounts that make no distinction between implicit and explicit processes. They also consider some of the neuroscience relating to information-integration category learning, but do not report those aspects that are more consistent with an explicit than an implicit account. They further conclude that delay conditioning in humans is implicit, but do not report evidence that delay conditioning requires awareness; nor do they present the evidence that conditioned taste aversion, which should be explicit under their account, can be implicit. We agree with Smith and Church that it is helpful to have a clear definition of associative theory, but suggest that their definition may be unnecessarily restrictive. We propose an alternative definition of associative theory and briefly describe an experimental procedure that we think may better distinguish between associative and non-associative processes.